Katrin Hartwig, M.Sc.

Contact: +49 (0) 6151 / 1620944 | hartwig(at)peasec.tu-darmstadt.de

Technische Universität Darmstadt, Department of Computer Science, Science and Technology for Peace and Security (PEASEC), Pankratiusstraße 2, 64289 Darmstadt, Room 114

Katrin Hartwig is a research associate at the Chair of Science and Technology for Peace and Security (PEASEC) in the Department of Computer Science at Technische Universität Darmstadt. Her research interests lie at the intersection of computer science and psychology, in particular in the areas of fake news, usable security, and human-computer interaction.

She studied Psychology in IT (B.Sc.) at Technische Universität Darmstadt, followed by a Master's degree in Computer Science (M.Sc.). Alongside her studies, she worked as a software developer in medical image processing. In her Master's thesis, she addressed aspects of usable security within the Collaborative Research Center CROSSING.

Publications

2021

  • Katrin Hartwig, Christian Reuter (2021)
    Transparenz im technischen Umgang mit Fake News
    Technik & Mensch (2):9–11.
    [BibTeX] [Abstract] [Download PDF]

    In recent years, social media such as Facebook and Twitter have increasingly become important sources of information that support the spread of user-generated content. At the same time, the high speed of dissemination, the low effort involved, and the (apparent) anonymity foster the spread of fake news and similar phenomena. Already in past years, but especially in the context of the COVID-19 pandemic, it has become clear that fake news and unintentional misinformation can have serious and even life-threatening consequences. Technical support measures have great potential to counter fake news effectively, particularly in social media. Two key steps are necessary here: (1) detecting fake news automatically and (2) implementing sensible technical countermeasures after successful detection [2].

    @article{hartwig_transparenz_2021,
    title = {Transparenz im technischen {Umgang} mit {Fake} {News}},
    url = {http://www.peasec.de/paper/2021/2021_HartwigReuter_TransparenzFakeNews_TechnikMenschVDI},
    abstract = {In den letzten Jahren haben sich soziale Medien wie Facebook und Twitter immer mehr zu wichtigen Informationsquellen entwickelt, welche die Verbreitung von nutzergenerierten Inhalten unterstützen. Durch die hohe Verbreitungsgeschwindigkeit, geringen Aufwand und (scheinbare) Anonymität nimmt gleichzeitig die Verbreitung von Fake News und ähnlichen Phänomenen zu. Bereits in den vergangenen Jahren aber insbesondere auch im Kontext der COVID-19 Pandemie hat sich gezeigt, dass Fake News und unbeabsichtigte Fehlinformationen ernsthafte und sogar lebensbedrohliche Konsequenzen mit sich tragen bringen können. Technische Unterstützungsmaßnahmen haben insbesondere in sozialen Medien ein großes Potenzial um Fake News effektiv zu bekämpfen. Hier sind zwei maßgebliche Schritte notwendig: (1) Fake News automatisiert detektieren und (2) nach der erfolgreichen Detektion sinnvolle technische Gegenmaßnahmen implementieren [2].},
    number = {2},
    journal = {Technik \& Mensch},
    author = {Hartwig, Katrin and Reuter, Christian},
    year = {2021},
    keywords = {Crisis},
    pages = {9--11},
    }
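
    A minimal, illustrative Python sketch of the two-step approach outlined in the abstract above follows; the phrases, threshold and labelling used here are assumptions for illustration, not the methods proposed in the article.

    def detect(post: str) -> float:
        """Step 1 (stub): estimate how likely a post is to be fake news."""
        suspicious_phrases = ("share before it is deleted", "the media won't tell you")
        return 0.9 if any(phrase in post.lower() for phrase in suspicious_phrases) else 0.1

    def countermeasure(post: str, score: float) -> str:
        """Step 2: react to a detection; here with a transparent label rather than deletion."""
        return f"[possibly misleading] {post}" if score > 0.5 else post

    if __name__ == "__main__":
        post = "SHARE BEFORE IT IS DELETED: miracle cure discovered!"
        print(countermeasure(post, detect(post)))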

  • Katrin Hartwig, Christian Reuter (2021)
    Nudging Users Towards Better Security Decisions in Password Creation Using Whitebox-based Multidimensional Visualizations
    Behaviour & Information Technology (BIT). doi:10.1080/0144929X.2021.1876167
    [BibTeX] [Abstract] [Download PDF]

    Nudging users to keep them secure online has become a growing research field in cybersecurity. While existing approaches are mainly blackbox based, showing aggregated visualisations as one-size-fits-all nudges, personalisation turned out promising to enhance the efficacy of nudges within the high variance of users and contexts. This article presents a disaggregated whitebox-based visualisation of critical information as a novel nudge. By segmenting users according to their decision-making and information processing styles, we investigate if the novel nudge is more effective for specific users than a common black-box nudge. Based on existing literature about critical factors in password security, we designed a dynamic radar chart and parallel coordinates as disaggregated visualisations. We evaluated the short-term effectiveness and users' perception of the nudges in a think-aloud prestudy and a representative online evaluation (N=1.012). Our findings suggest that dynamic radar charts present a moderately effective nudge towards stronger passwords regarding short-term efficacy and are appreciated particularly by players of role-playing games.

    @article{hartwig_nudging_2021,
    title = {Nudging {Users} {Towards} {Better} {Security} {Decisions} in {Password} {Creation} {Using} {Whitebox}-based {Multidimensional} {Visualizations}},
    url = {http://www.peasec.de/paper/2021/2021_HartwigReuter_WhiteboxMultidimensionalNudges_BIT.pdf},
    doi = {10.1080/0144929X.2021.1876167},
    abstract = {Nudging users to keep them secure online has become a growing research field in cybersecurity. While existing approaches are mainly blackbox based, showing aggregated visualisations as one-size-fits-all nudges, personalisation turned out promising to enhance the efficacy of nudges within the high variance of users and contexts. This article presents a disaggregated whitebox-based visualisation of critical information as a novel nudge. By segmenting users according to their decision-making and information processing styles, we investigate if the novel nudge is more effective for specific users than a common black-box nudge. Based on existing literature about critical factors in password security, we designed a dynamic radar chart and parallel coordinates as disaggregated visualisations. We evaluated the short-term effectiveness and users' perception of the nudges in a think-aloud prestudy and a representative online evaluation (N=1.012). Our findings suggest that dynamic radar charts present a moderately effective nudge towards stronger passwords regarding short-term efficacy and are appreciated particularly by players of role-playing games.},
    journal = {Behaviour \& Information Technology (BIT)},
    author = {Hartwig, Katrin and Reuter, Christian},
    year = {2021},
    keywords = {HCI, Projekt-CROSSING, Security, UsableSec, Ranking-ImpactFactor, Selected, Ranking-CORE-A, AuswahlUsableSec},
    }
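
    The following minimal Python sketch illustrates the whitebox idea described in the abstract above: each password factor becomes a separate, interpretable axis that could drive one spoke of a dynamic radar chart. The factors and scaling are assumptions for illustration, not the instrument used in the study.

    import string

    def password_dimensions(password: str) -> dict[str, float]:
        """Map a password onto separate, individually interpretable axes in [0, 1]."""
        return {
            "length": min(len(password) / 16, 1.0),  # 16+ characters fill the axis
            "lowercase": float(any(c.islower() for c in password)),
            "uppercase": float(any(c.isupper() for c in password)),
            "digits": float(any(c.isdigit() for c in password)),
            "special characters": float(any(c in string.punctuation for c in password)),
            "uniqueness": len(set(password)) / max(len(password), 1),
        }

    if __name__ == "__main__":
        # Each value would update one spoke of the radar chart while the user types,
        # making the reasons behind the feedback visible instead of one opaque score.
        for axis, value in password_dimensions("Tr0ub4dor&3").items():
            print(f"{axis:>18}: {value:.2f}")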

  • Katrin Hartwig, Christian Reuter (2021)
    Fake News technisch begegnen – Detektions- und Behandlungsansätze zur Unterstützung von NutzerInnen
    In: Peter Klimczak, Thomas Zoglauer: Wahrheit und Fake News im postfaktischen Zeitalter. Springer, 133–150.
    [BibTeX] [Download PDF]

    @incollection{hartwig_fake_2021,
    series = {ars digitalis},
    title = {Fake {News} technisch begegnen – {Detektions}- und {Behandlungsansätze} zur {Unterstützung} von {NutzerInnen}},
    isbn = {978-3-658-32956-3},
    url = {http://www.peasec.de/paper/2021/2021_HartwigReuter_FakeNewstechnischbegegnen_WahrheitundFake},
    booktitle = {Wahrheit und {Fake} {News} im postfaktischen {Zeitalter}},
    publisher = {Springer},
    author = {Hartwig, Katrin and Reuter, Christian},
    editor = {Klimczak, Peter and Zoglauer, Thomas},
    year = {2021},
    keywords = {Crisis, HCI, SocialMedia, Peace},
    pages = {133--150},
    }

  • Anjuli Franz, Gregor Albrecht, Verena Zimmermann, Katrin Hartwig, Christian Reuter, Alexander Benlian, Joachim Vogt (2021)
    Still Plenty of Phish in the Sea — A Review of User-Oriented Phishing Interventions and Avenues for Future Research
    Proceedings of the Symposium on Usable Privacy and Security (SOUPS).
    [BibTeX]

    @inproceedings{franz_still_2021,
    title = {Still {Plenty} of {Phish} in the {Sea} — {A} {Review} of {User}-{Oriented} {Phishing} {Interventions} and {Avenues} for {Future} {Research}},
    booktitle = {Proceedings of the {Symposium} on {Usable} {Privacy} and {Security} ({SOUPS})},
    author = {Franz, Anjuli and Albrecht, Gregor and Zimmermann, Verena and Hartwig, Katrin and Reuter, Christian and Benlian, Alexander and Vogt, Joachim},
    year = {2021},
    keywords = {Projekt-CROSSING, Security, UsableSec, Ranking-CORE-B},
    }

  • Elise Özalp, Katrin Hartwig, Christian Reuter (2021)
    Trends in Explainable Artificial Intelligence for Non-Experts
    In: Peter Klimczak, Christer Petersen: AI – Limits and Prospects. Bielefeld: Verlag Transcript.
    [BibTeX]

    @incollection{ozalp_trends_2021,
    address = {Bielefeld},
    title = {Trends in {Explainable} {Artificial} {Intelligence} for {Non}-{Experts}},
    booktitle = {{AI} - {Limits} and {Prospects}},
    publisher = {Verlag Transcript},
    author = {Özalp, Elise and Hartwig, Katrin and Reuter, Christian},
    editor = {Klimczak, Peter and Petersen, Christer},
    year = {2021},
    }

  • Katrin Hartwig, Lukas Englisch, Jan Pelle Thomson, Christian Reuter (2021)
    Finding Secret Treasure? Improving Memorized Secrets Through Gamification
    The 2021 European Symposium on Usable Security (EuroUSEC).
    [BibTeX]

    @inproceedings{hartwig_finding_2021,
    title = {Finding {Secret} {Treasure}? {Improving} {Memorized} {Secrets} {Through} {Gamification}},
    booktitle = {The 2021 {European} {Symposium} on {Usable} {Security} ({EuroUSEC})},
    author = {Hartwig, Katrin and Englisch, Lukas and Thomson, Jan Pelle and Reuter, Christian},
    year = {2021},
    keywords = {Security, UsableSec, Projekte-CROSSING, Projekte-ATHENE-SecUrban},
    }

  • Katrin Hartwig, Christian Reuter (2021)
    Nudge or Restraint: How do People Assess Nudging in Cybersecurity – A Representative Study in Germany
    The 2021 European Symposium on Usable Security (EuroUSEC).
    [BibTeX]

    @inproceedings{hartwig_nudge_2021,
    title = {Nudge or {Restraint}: {How} do {People} {Assess} {Nudging} in {Cybersecurity} - {A} {Representative} {Study} in {Germany}},
    booktitle = {The 2021 {European} {Symposium} on {Usable} {Security} ({EuroUSEC})},
    author = {Hartwig, Katrin and Reuter, Christian},
    year = {2021},
    keywords = {Security, UsableSec, Projekte-CROSSING, Projekte-ATHENE-SecUrban},
    }

2019

  • Katrin Hartwig, Christian Reuter (2019)
    Fighting Misinformation on Twitter: The Plugin based approach TrustyTweet
    SCIENCE PEACE SECURITY ’19 – Proceedings of the Interdisciplinary Conference on Technical Peace and Security Research, Darmstadt, Germany.
    [BibTeX] [Abstract] [Download PDF]

    Finding a responsible way to address fake news on social media has become an urgent matter both in political and social contexts. Existing studies focus mainly on how to detect and label fake news. However, approaches to assist users in making their own assessments are largely missing. In this article we present a study on how an indicator-based white-box approach can support Twitter-users in assessing tweets. In a first step, we identified indicators for fake news that have shown to be promising in previous studies and that are suitable for our idea of a white-box approach. Building on that basis of indicators we then designed and implemented the browser-plugin TrustyTweet, which aims to assist users on Twitter in assessing tweets by showing politically neutral and intuitive warnings without creating reactance. Finally, we present the findings of our evaluations carried out with a total of 27 participants, which result in further design implications for approaches to assist users in dealing with fake news.

    @inproceedings{hartwig_fighting_2019,
    address = {Darmstadt, Germany},
    title = {Fighting {Misinformation} on {Twitter}: {The} {Plugin} based approach {TrustyTweet}},
    url = {https://tuprints.ulb.tu-darmstadt.de/id/eprint/9164},
    abstract = {Finding a responsible way to address fake news on social media has become an urgent matter both in political and social contexts. Existing studies focus mainly on how to detect and label fake news. However, approaches to assist users in making their own assessments are largely missing. In this article we present a study on how an indicator-based white-box approach can support Twitter-users in assessing tweets. In a first step, we identified indicators for fake news that have shown to be promising in previous studies and that are suitable for our idea of a white-box approach. Building on that basis of indicators we then designed and implemented the browser-plugin TrustyTweet, which aims to assist users on Twitter in assessing tweets by showing politically neutral and intuitive warnings without creating reactance. Finally, we present the findings of our evaluations carried out with a total of 27 participants, which result in further design implications for approaches to assist users in dealing with fake news.},
    booktitle = {{SCIENCE} {PEACE} {SECURITY} '19 - {Proceedings} of the {Interdisciplinary} {Conference} on {Technical} {Peace} and {Security} {Research}},
    publisher = {TUprints},
    author = {Hartwig, Katrin and Reuter, Christian},
    editor = {Reuter, Christian and Altmann, Jürgen and Göttsche, Malte and Himmel, Mirko},
    year = {2019},
    keywords = {Crisis, HCI, SocialMedia, Peace},
    pages = {67--69},
    }
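
    To make the indicator-based white-box idea from the abstract above concrete, here is a small Python sketch; the indicators, thresholds and wording are illustrative assumptions and not TrustyTweet's actual rules.

    import re

    def tweet_warnings(tweet: str) -> list[str]:
        """Return politically neutral, human-readable hints for indicators found in a tweet."""
        warnings = []
        letters = [c for c in tweet if c.isalpha()]
        if letters and sum(c.isupper() for c in letters) / len(letters) > 0.5:
            warnings.append("This tweet contains a lot of capital letters.")
        if re.search(r"[!?]{3,}", tweet):
            warnings.append("This tweet contains repeated punctuation marks.")
        if re.search(r"\b(bit\.ly|tinyurl\.com)/", tweet):
            warnings.append("This tweet uses a URL shortener that hides the target site.")
        return warnings

    if __name__ == "__main__":
        for hint in tweet_warnings("UNBELIEVABLE!!! SHARE THIS NOW http://bit.ly/xyz"):
            print("-", hint)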

  • Katrin Hartwig, Christian Reuter (2019)
    TrustyTweet: An Indicator-based Browser-Plugin to Assist Users in Dealing with Fake News on Twitter
    Proceedings of the International Conference on Wirtschaftsinformatik (WI), Siegen, Germany.
    [BibTeX] [Abstract] [Download PDF]

    The importance of dealing with fake news on social media has increased both in political and social contexts. While existing studies focus mainly on how to detect and label fake news, approaches to assist users in making their own assessments are largely missing. This article presents a study on how Twitter users' assessments can be supported by an indicator-based white-box approach. First, we gathered potential indicators for fake news that have proven to be promising in previous studies and that fit our idea of a white-box approach. Based on those indicators we then designed and implemented the browser-plugin TrustyTweet, which assists users on Twitter in assessing tweets by showing politically neutral and intuitive warnings without creating reactance. Finally, we present the findings of our evaluations with a total of 27 participants, which lead to further design implications for approaches to assist users in dealing with fake news.

    @inproceedings{hartwig_trustytweet_2019,
    address = {Siegen, Germany},
    title = {{TrustyTweet}: {An} {Indicator}-based {Browser}-{Plugin} to {Assist} {Users} in {Dealing} with {Fake} {News} on {Twitter}},
    url = {http://www.peasec.de/paper/2019/2019_HartwigReuter_TrustyTweet_WI.pdf},
    abstract = {The importance of dealing with fake news on social media has increased both in political and social contexts. While existing studies focus mainly on how to detect and label fake news, approaches to assist users in making their own assessments are largely missing. This article presents a study on how Twitter users' assessments can be supported by an indicator-based white-box approach. First, we gathered potential indicators for fake news that have proven to be promising in previous studies and that fit our idea of a white-box approach. Based on those indicators we then designed and implemented the browser-plugin TrustyTweet, which assists users on Twitter in assessing tweets by showing politically neutral and intuitive warnings without creating reactance. Finally, we present the findings of our evaluations with a total of 27 participants, which lead to further design implications for approaches to assist users in dealing with fake news.},
    booktitle = {Proceedings of the {International} {Conference} on {Wirtschaftsinformatik} ({WI})},
    publisher = {AIS},
    author = {Hartwig, Katrin and Reuter, Christian},
    year = {2019},
    keywords = {Crisis, HCI, Projekt-CRISP, Student, Ranking-CORE-C, Ranking-VHB-C, SocialMedia, A-Paper, Ranking-WKWI-A, Peace, Projekt-ATHENE-FANCY},
    pages = {1858--1869},
    }

  • Christian Reuter, Katrin Hartwig, Jan Kirchner, Noah Schlegel (2019)
    Fake News Perception in Germany: A Representative Study of People’s Attitudes and Approaches to Counteract Disinformation
    Proceedings of the International Conference on Wirtschaftsinformatik (WI), Siegen, Germany.
    [BibTeX] [Abstract] [Download PDF]

    Fake news has become an important topic in our social and political environment. While research is coming up for the U.S. and European countries, many aspects remain uncovered as long as existing work only marginally investigates people's attitudes towards fake news. In this work, we present the results of a representative study (N=1023) in Germany asking participants about their attitudes towards fake news and approaches to counteract disinformation. More than 80% of the participants agree that fake news poses a threat. 78% see fake news as harming democracy. Even though about half of the respondents (48%) have noticed fake news, most participants stated to have never liked, shared or commented on fake news. Regarding demographic factors, our findings support the view of younger and relatively educated people being more informed about fake news. Concerning ideological motives, the evaluation suggests left-wing or liberal respondents to be more critical of fake news.

    @inproceedings{reuter_fake_2019-2,
    address = {Siegen, Germany},
    title = {Fake {News} {Perception} in {Germany}: {A} {Representative} {Study} of {People}'s {Attitudes} and {Approaches} to {Counteract} {Disinformation}},
    url = {http://www.peasec.de/paper/2019/2019_ReuterHartwigKirchnerSchlegel_FakeNewsPerceptionGermany_WI.pdf},
    abstract = {Fake news has become an important topic in our social and political environment. While research is coming up for the U.S. and European countries, many aspects remain uncovered as long as existing work only marginally investigates people's attitudes towards fake news. In this work, we present the results of a representative study (N=1023) in Germany asking participants about their attitudes towards fake news and approaches to counteract disinformation. More than 80\% of the participants agree that fake news poses a threat. 78\% see fake news as harming democracy. Even though about half of the respondents (48\%) have noticed fake news, most participants stated to have never liked, shared or commented on fake news. Regarding demographic factors, our findings support the view of younger and relatively educated people being more informed about fake news. Concerning ideological motives, the evaluation suggests left-wing or liberal respondents to be more critical of fake news.},
    booktitle = {Proceedings of the {International} {Conference} on {Wirtschaftsinformatik} ({WI})},
    publisher = {AIS},
    author = {Reuter, Christian and Hartwig, Katrin and Kirchner, Jan and Schlegel, Noah},
    year = {2019},
    keywords = {Crisis, HCI, Student, Ranking-CORE-C, Ranking-VHB-C, SocialMedia, A-Paper, Ranking-WKWI-A, Peace},
    pages = {1069--1083},
    }

Further publications:

    Laura, C. O., Hartwig, K., Distergoft, A., Hoffmann, T., Scheckenbach, K., Brüsseler, M., & Wesarg, S. (2021, February). Automatic segmentation of the structures in the nasal cavity and the ethmoidal sinus for the quantification of nasal septal deviations. In Medical Imaging 2021: Computer-Aided Diagnosis (Vol. 11597, p. 115972J). International Society for Optics and Photonics.

    Oyarzun, C. L., Hartwig, K., Hertlein, A. S., Jung, F., Burmeister, J., Kohlhammer, J., … & Sauter, G. (2020). Web-based Prostate Visualization Tool. Current Directions in Biomedical Engineering, 6(3), 563-566.