Katrin Hartwig, M.Sc.

Research Associate / Doctoral Candidate

Contact: +49 (0) 6151 / 1620944 | hartwig(at)peasec.tu-darmstadt.de

Technische Universität Darmstadt, Department of Computer Science, Science and Technology for Peace and Security (PEASEC), Pankratiusstraße 2, 64289 Darmstadt, Room 114

Katrin Hartwig is a research associate at the Chair of Science and Technology for Peace and Security (PEASEC) in the Department of Computer Science at Technische Universität Darmstadt. Her research interests lie at the intersection of computer science and psychology, particularly in the areas of fake news, usable security, and human-computer interaction. She is a key contributor to the BMBF project NEBULA (Nutzerzentrierte KI-basierte Erkennung von Fake News und Fehlinformationen; user-centered, AI-based detection of fake news and misinformation).

She studied Computer Science (M.Sc.) and Psychology in IT (B.Sc.) at Technische Universität Darmstadt. Alongside her studies, she worked as a software developer in medical image processing. Her master's thesis addressed aspects of usable security within the Collaborative Research Center CROSSING.

Publications

2024

  • Katrin Hartwig, Tom Biselli, Franziska Schneider, Christian Reuter (2024)
    From Adolescents' Eyes: Assessing an Indicator-Based Intervention to Combat Misinformation on TikTok
    Proceedings of the Conference on Human Factors in Computing Systems (CHI).
    [BibTeX] [Abstract]

    Misinformation poses a recurrent challenge for video-sharing platforms (VSPs) like TikTok. Obtaining user perspectives on digital interventions addressing the need for transparency (e.g., through indicators) is essential. This article offers a thorough examination of the comprehensibility, usefulness, and limitations of an indicator-based intervention from an adolescents’ perspective. This study (N = 39; aged 13-16 years) comprised two qualitative steps: (1) focus group discussions and (2) think-aloud sessions, where participants engaged with a smartphone-app for TikTok. The results offer new insights into how video-based indicators can assist adolescents’ assessments. The intervention received positive feedback, especially for its transparency, and could be applicable to new content. This paper sheds light on how adolescents are expected to be experts while also being prone to video-based misinformation, with limited understanding of an intervention’s limitations. By adopting teenagers’ perspectives, we contribute to HCI research and provide new insights into the chances and limitations of interventions for VSPs.

    @inproceedings{hartwig_adolescents_2024,
    series = {{CHI} '24},
    title = {From {Adolescents}' {Eyes}: {Assessing} an {Indicator}-{Based} {Intervention} to {Combat} {Misinformation} on {TikTok}},
    abstract = {Misinformation poses a recurrent challenge for video-sharing platforms (VSPs) like TikTok. Obtaining user perspectives on digital interventions addressing the need for transparency (e.g., through indicators) is essential. This article offers a thorough examination of the comprehensibility, usefulness, and limitations of an indicator-based intervention from an adolescents’ perspective. This study (N = 39; aged 13-16 years) comprised two qualitative steps: (1) focus group discussions and (2) think-aloud sessions, where participants engaged with a smartphone-app for TikTok. The results offer new insights into how video-based indicators can assist adolescents’ assessments. The intervention received positive feedback, especially for its transparency, and could be applicable to new content. This paper sheds light on how adolescents are expected to be experts while also being prone to video-based misinformation, with limited understanding of an intervention’s limitations. By adopting teenagers’ perspectives, we contribute to HCI research and provide new insights into the chances and limitations of interventions for VSPs.},
    booktitle = {Proceedings of the {Conference} on {Human} {Factors} in {Computing} {Systems} ({CHI})},
    publisher = {Association for Computing Machinery},
    author = {Hartwig, Katrin and Biselli, Tom and Schneider, Franziska and Reuter, Christian},
    year = {2024},
    keywords = {AuswahlCrisis, HCI, Selected, UsableSec, Security, A-Paper, Ranking-CORE-A*, Projekt-NEBULA, Projekt-ATHENE-PriVis},
    }

  • Katrin Hartwig, Ruslan Sandler, Christian Reuter (2024)
    Navigating Misinformation in Voice Messages: Identification of User-Centered Features for Digital Interventions
    Risk, Hazards, & Crisis in Public Policy.
    [BibTeX] [Abstract]

    Misinformation presents a challenge to democracies, particularly in times of crisis. One way in which misinformation is spread is through voice messages sent via messenger groups, which enable members to share information on a larger scale. Gaining user perspectives on digital misinformation interventions as countermeasure after detection is crucial. In this paper, we extract potential features of misinformation in voice messages from literature, implement them within a program that automatically processes voice messages, and evaluate their perceived usefulness and comprehensibility as user-centered indicators. We propose 35 features extracted from audio files at the character, word, sentence, audio and creator levels to assist (1) private individuals in conducting credibility assessments, (2) government agencies faced with data overload during crises, and (3) researchers seeking to gather features for automatic detection approaches. We conducted a think-aloud study with laypersons (N = 20) to provide initial insight into how individuals autonomously assess the credibility of voice messages, as well as which automatically extracted features they find to be clear and convincing indicators of misinformation. Our study provides qualitative and quantitative insights into valuable indicators, particularly when they relate directly to the content or its creator, and uncovers challenges in user interface design.

    @article{hartwig_navigating_2024,
    title = {Navigating {Misinformation} in {Voice} {Messages}: {Identification} of {User}-{Centered} {Features} for {Digital} {Interventions}},
    abstract = {Misinformation presents a challenge to democracies, particularly in times of crisis. One way in which misinformation is spread is through voice messages sent via messenger groups, which enable members to share information on a larger scale. Gaining user perspectives on digital misinformation interventions as countermeasure after detection is crucial. In this paper, we extract potential features of misinformation in voice messages from literature, implement them within a program that automatically processes voice messages, and evaluate their perceived usefulness and comprehensibility as user-centered indicators. We propose 35 features extracted from audio files at the character, word, sentence, audio and creator levels to assist (1) private individuals in conducting credibility assessments, (2) government agencies faced with data overload during crises, and (3) researchers seeking to gather features for automatic detection approaches. We conducted a think-aloud study with laypersons (N = 20) to provide initial insight into how individuals autonomously assess the credibility of voice messages, as well as which automatically extracted features they find to be clear and convincing indicators of misinformation. Our study provides qualitative and quantitative insights into valuable indicators, particularly when they relate directly to the content or its creator, and uncovers challenges in user interface design.},
    journal = {Risk, Hazards, \& Crisis in Public Policy},
    author = {Hartwig, Katrin and Sandler, Ruslan and Reuter, Christian},
    year = {2024},
    keywords = {Crisis, HCI, SocialMedia, Student, UsableSec, A-Paper, Ranking-ImpactFactor, Cyberwar, Projekt-NEBULA},
    }

2023

  • Elise Özalp, Katrin Hartwig, Christian Reuter (2023)
    Trends in Explainable Artificial Intelligence for Non-Experts
    In: Peter Klimczak, Christer Petersen: AI – Limits and Prospects of Artificial Intelligence. Bielefeld: Transcript Verlag, 223–243.
    [BibTeX] [Abstract] [Download PDF]

    In this paper we provide an overview of XAI by introducing fundamental terminology and the goals of XAI, as well as recent research findings. Whilst doing this, we pay special attention to strategies for non-expert stakeholders. This leads us to our first research question: “What are the trends in explainable AI strategies for non-experts?”. In order to illustrate the current state of these trends, we further want to study an exemplary and very relevant application domain. According to Abdul et al. (2018), one of the first domains where researchers pursued XAI is the medical domain. This leads to our second research question: “What are the approaches of XAI in the medical domain for non-expert stakeholders?” These research questions will provide an overview of current topics in XAI and show possible research extensions for specific domains.

    @incollection{ozalp_trends_2023,
    address = {Bielefeld},
    title = {Trends in {Explainable} {Artificial} {Intelligence} for {Non}-{Experts}},
    url = {https://www.transcript-verlag.de/978-3-8376-5732-6/ai-limits-and-prospects-of-artificial-intelligence/?c=313000019},
    abstract = {In this paper we provide an overview of XAI by introducing fundamental terminology and the goals of XAI, as well as recent research findings. Whilst doing this, we pay special attention to strategies for non-expert stakeholders. This leads us to our first research question: “What are the trends in explainable AI strategies for non-experts?”. In order to illustrate the current state of these trends, we further want to study an exemplary and very relevant application domain. According to Abdul et al. (2018), one of the first domains where researchers pursued XAI is the medical domain. This leads to our second research question: “What are the approaches of XAI in the medical domain for non-expert stakeholders?” These research questions will provide an overview of current topics in XAI and show possible research extensions for specific domains.},
    booktitle = {{AI} - {Limits} and {Prospects} of {Artificial} {Intelligence}},
    publisher = {Transcript Verlag},
    author = {Özalp, Elise and Hartwig, Katrin and Reuter, Christian},
    editor = {Klimczak, Peter and Petersen, Christer},
    year = {2023},
    keywords = {HCI, Student, UsableSec, Projekt-CROSSING, Projekt-ATHENE-SecUrban, Projekt-CYWARN},
    pages = {223--243},
    }

  • Katrin Hartwig, Frederic Doell, Christian Reuter (2023)
    The Landscape of User-centered Misinformation Interventions – A Systematic Literature Review
    2023. doi:10.48550/ARXIV.2301.06517
    [BibTeX] [Abstract] [Download PDF]

    Misinformation represents a key challenge for society. User-centered misinformation interventions as digital countermeasures that exert a direct influence on users represent a promising means to deal with the large amounts of information available. While an extensive body of research on this topic exists, researchers are confronted with a diverse research landscape spanning multiple disciplines. This review systematizes the landscape of user-centered misinformation interventions to facilitate knowledge transfer, identify trends, and enable informed decision-making. Over 3,700 scholarly publications were screened and a systematic literature review (N=108) was conducted. A taxonomy was derived regarding intervention design (e.g., binary label), user interaction (active or passive), and timing (e.g., post exposure to misinformation). We provide a structured overview of approaches across multiple disciplines, and derive six overarching challenges for future research.

    @techreport{hartwig_landscape_2023,
    title = {The {Landscape} of {User}-centered {Misinformation} {Interventions} – {A} {Systematic} {Literature} {Review}},
    copyright = {arXiv.org perpetual, non-exclusive license},
    url = {https://arxiv.org/abs/2301.06517},
    abstract = {Misinformation represent a key challenge for society. User-centered misinformation interventions as digital countermeasures that exert a direct influence on users represent a promising means to deal with the large amounts of information available. While an extensive body of research on this topic exists, researchers are confronted with a diverse research landscape spanning multiple disciplines. This review systematizes the landscape of user-centered misinformation interventions to facilitate knowledge transfer, identify trends, and enable informed decision-making. Over 3,700 scholarly publications were screened and a systematic literature review (N=108) was conducted. A taxonomy was derived regarding intervention design (e.g., binary label), user interaction (active or passive), and timing (e.g., post exposure to misinformation). We provide a structured overview of approaches across multiple disciplines, and derive six overarching challenges for future research.},
    institution = {arXiv},
    author = {Hartwig, Katrin and Doell, Frederic and Reuter, Christian},
    year = {2023},
    doi = {10.48550/ARXIV.2301.06517},
    keywords = {HCI, Peace, Projekt-NEBULA},
    }

  • Katrin Hartwig, Christian Reuter (2023)
    Countering Fake News Technically – Detection and Countermeasure Approaches to Support Users
    In: Peter Klimczak, Thomas Zoglauer: Truth and Fake in the Post-Factual Digital Age: Distinctions in the Humanities and IT Sciences. Wiesbaden: Springer Fachmedien Wiesbaden, 131–147. doi:10.1007/978-3-658-40406-2_7
    [BibTeX] [Abstract] [Download PDF]

    The importance of dealing with fake news has increased in both political and social contexts: While existing studies mainly focus on how to detect and label fake news, approaches to help users make their own assessments are largely lacking. This article presents existing black-box and white-box approaches and compares advantages and disadvantages. In particular, white-box approaches show promise in counteracting reactance, while black-box approaches detect fake news with much greater accuracy. We also present the browser plugin TrustyTweet, which we developed to help users evaluate tweets on Twitter by displaying politically neutral and intuitive warnings without generating reactance.

    @incollection{hartwig_countering_2023,
    address = {Wiesbaden},
    title = {Countering {Fake} {News} {Technically} – {Detection} and {Countermeasure} {Approaches} to {Support} {Users}},
    isbn = {978-3-658-40406-2},
    url = {https://peasec.de/paper/2023/2023_HartwigReuter_CounteringFakeNews_TruthFakePostTruth.pdf},
    abstract = {The importance of dealing with fake news has increased in both political and social contexts: While existing studies mainly focus on how to detect and label fake news, approaches to help users make their own assessments are largely lacking. This article presents existing black-box and white-box approaches and compares advantages and disadvantages. In particular, white-box approaches show promise in counteracting reactance, while black-box approaches detect fake news with much greater accuracy. We also present the browser plugin TrustyTweet, which we developed to help users evaluate tweets on Twitter by displaying politically neutral and intuitive warnings without generating reactance.},
    booktitle = {Truth and {Fake} in the {Post}-{Factual} {Digital} {Age}: {Distinctions} in the {Humanities} and {IT} {Sciences}},
    publisher = {Springer Fachmedien Wiesbaden},
    author = {Hartwig, Katrin and Reuter, Christian},
    editor = {Klimczak, Peter and Zoglauer, Thomas},
    year = {2023},
    doi = {10.1007/978-3-658-40406-2_7},
    keywords = {Crisis, HCI, SocialMedia, Projekt-CROSSING, Projekt-ATHENE},
    pages = {131--147},
    }

2022

  • Stefka Schmid, Katrin Hartwig, Robert Cieslinski, Christian Reuter (2022)
    Digital Resilience in Dealing with Misinformation on Social Media during COVID-19: A Web Application to Assist Users in Crises
    Information Systems Frontiers (ISF). doi:10.1007/s10796-022-10347-5
    [BibTeX] [Abstract] [Download PDF]

    In crises such as the COVID-19 pandemic, it is crucial to support users when dealing with social media content. Considering digital resilience, we propose a web app based on Social Network Analysis (SNA) to provide an overview of potentially misleading vs. non-misleading content on Twitter, which can be explored by users and enable foundational learning. The latter aims at systematically identifying thematic patterns which may be associated with misleading information. Additionally, it entails reflecting on indicators of misleading tweets which are proposed to approach classification of tweets. Paying special attention to non-expert users of social media, we conducted a two-step Think Aloud study for evaluation. While participants valued the opportunity to generate new knowledge and the diversity of the application, qualities such as equality and rapidity may be further improved. However, learning effects outweighed individual costs as all users were able to shift focus onto relevant features, such as hashtags, while readily pointing out content characteristics. Our design artifact connects to learning-oriented interventions regarding the spread of misleading information and tackles information overload by a SNA-based plug-in.

    @article{schmid_digital_2022,
    title = {Digital {Resilience} in {Dealing} with {Misinformation} on {Social} {Media} during {COVID}-19: {A} {Web} {Application} to {Assist} {Users} in {Crises}},
    url = {https://link.springer.com/article/10.1007/s10796-022-10347-5},
    doi = {10.1007/s10796-022-10347-5},
    abstract = {In crises such as the COVID-19 pandemic, it is crucial to support users when dealing with social media content. Considering digital resilience, we propose a web app based on Social Network Analysis (SNA) to provide an overview of potentially misleading vs. non-misleading content on Twitter, which can be explored by users and enable foundational learning. The latter aims at systematically identifying thematic patterns which may be associated with misleading information. Additionally, it entails reflecting on indicators of misleading tweets which are proposed to approach classification of tweets. Paying special attention to non-expert users of social media, we conducted a two-step Think Aloud study for evaluation. While participants valued the opportunity to generate new knowledge and the diversity of the application, qualities such as equality and rapidity may be further improved. However, learning effects outweighed individual costs as all users were able to shift focus onto relevant features, such as hashtags, while readily pointing out content characteristics. Our design artifact connects to learning-oriented interventions regarding the spread of misleading information and tackles information overload by a SNA-based plug-in.},
    journal = {Information Systems Frontiers (ISF)},
    author = {Schmid, Stefka and Hartwig, Katrin and Cieslinski, Robert and Reuter, Christian},
    year = {2022},
    keywords = {Crisis, Student, A-Paper, Projekt-TraCe, Projekt-NEBULA},
    }

  • Katrin Hartwig, Christian Reuter (2022)
    Nudging Users Towards Better Security Decisions in Password Creation Using Whitebox-based Multidimensional Visualizations
    Behaviour & Information Technology (BIT); 41(7):1357–1380. doi:10.1080/0144929X.2021.1876167
    [BibTeX] [Abstract] [Download PDF]

    Nudging users to keep them secure online has become a growing research field in cybersecurity. While existing approaches are mainly blackbox based, showing aggregated visualisations as one-size-fits-all nudges, personalisation turned out promising to enhance the efficacy of nudges within the high variance of users and contexts. This article presents a disaggregated whitebox-based visualisation of critical information as a novel nudge. By segmenting users according to their decision-making and information processing styles, we investigate if the novel nudge is more effective for specific users than a common black-box nudge. Based on existing literature about critical factors in password security, we designed a dynamic radar chart and parallel coordinates as disaggregated visualisations. We evaluated the short-term effectiveness and users' perception of the nudges in a think-aloud prestudy and a representative online evaluation (N=1,012). Our findings suggest that dynamic radar charts present a moderately effective nudge towards stronger passwords regarding short-term efficacy and are appreciated particularly by players of role-playing games.

    @article{hartwig_nudging_2022,
    title = {Nudging {Users} {Towards} {Better} {Security} {Decisions} in {Password} {Creation} {Using} {Whitebox}-based {Multidimensional} {Visualizations}},
    volume = {41},
    url = {https://peasec.de/paper/2022/2022_HartwigReuter_WhiteboxMultidimensionalNudges_BIT.pdf},
    doi = {10.1080/0144929X.2021.1876167},
    abstract = {Nudging users to keep them secure online has become a growing research field in cybersecurity. While existing approaches are mainly blackbox based, showing aggregated visualisations as one-size-fits-all nudges, personalisation turned out promising to enhance the efficacy of nudges within the high variance of users and contexts. This article presents a disaggregated whitebox-based visualisation of critical information as a novel nudge. By segmenting users according to their decision-making and information processing styles, we investigate if the novel nudge is more effective for specific users than a common black-box nudge. Based on existing literature about critical factors in password security, we designed a dynamic radar chart and parallel coordinates as disaggregated visualisations. We evaluated the short-term effectiveness and users' perception of the nudges in a think-aloud prestudy and a representative online evaluation (N=1.012). Our findings suggest that dynamic radar charts present a moderately effective nudge towards stronger passwords regarding short-term efficacy and are appreciated particularly by players of role-playing games.},
    number = {7},
    journal = {Behaviour \& Information Technology (BIT)},
    author = {Hartwig, Katrin and Reuter, Christian},
    year = {2022},
    keywords = {HCI, Selected, UsableSec, Security, A-Paper, Ranking-ImpactFactor, Ranking-CORE-A, Projekt-CROSSING, Projekt-ATHENE-FANCY, AuswahlUsableSec},
    pages = {1357--1380},
    }

2021

  • Anjuli Franz, Gregor Albrecht, Verena Zimmermann, Katrin Hartwig, Christian Reuter, Alexander Benlian, Joachim Vogt (2021)
    SoK: Still Plenty of Phish in the Sea — A Review of User-Oriented Phishing Interventions and Avenues for Future Research
    USENIX Symposium on Usable Privacy and Security (SOUPS).
    [BibTeX] [Abstract] [Download PDF]

    Phishing is a prevalent cyber threat, targeting individuals and organizations alike. Previous approaches on anti-phishing measures have started to recognize the role of the user, who, at the center of the target, builds the last line of defense. However, user-oriented phishing interventions are fragmented across a diverse research landscape, which has not been systematized to date. This makes it challenging to gain an overview of the various approaches taken by prior works. In this paper, we present a taxonomy of phishing interventions based on a systematic literature analysis. We shed light on the diversity of existing approaches by analyzing them with respect to the intervention type, the addressed phishing attack vector, the time at which the intervention takes place, and the required user interaction. Furthermore, we highlight shortcomings and challenges emerging from both our literature sample and prior meta-analyses, and discuss them in the light of current movements in the field of usable security. With this article, we hope to provide useful directions for future works on phishing interventions.

    @inproceedings{franz_sok_2021,
    title = {{SoK}: {Still} {Plenty} of {Phish} in the {Sea} — {A} {Review} of {User}-{Oriented} {Phishing} {Interventions} and {Avenues} for {Future} {Research}},
    isbn = {978-1-939133-25-0},
    url = {https://www.usenix.org/system/files/soups2021-franz.pdf},
    abstract = {Phishing is a prevalent cyber threat, targeting individuals and organizations alike. Previous approaches on anti-phishing measures have started to recognize the role of the user, who, at the center of the target, builds the last line of defense. However, user-oriented phishing interventions are fragmented across a diverse research landscape, which has not been systematized to date. This makes it challenging to gain an overview of the various approaches taken by prior works. In this paper, we present a taxonomy of phishing interventions based on a systematic literature analysis. We shed light on the diversity of existing approaches by analyzing them with respect to the intervention type, the addressed phishing attack vector, the time at which the intervention takes place, and the required user interaction. Furthermore, we highlight shortcomings and challenges emerging from both our literature sample and prior meta-analyses, and discuss them in the light of current movements in the field of usable security. With this article, we hope to provide useful directions for future works on phishing interventions.},
    booktitle = {{USENIX} {Symposium} on {Usable} {Privacy} and {Security} ({SOUPS})},
    author = {Franz, Anjuli and Albrecht, Gregor and Zimmermann, Verena and Hartwig, Katrin and Reuter, Christian and Benlian, Alexander and Vogt, Joachim},
    year = {2021},
    keywords = {UsableSec, Security, Ranking-CORE-B, Projekt-CROSSING, AuswahlUsableSec},
    }

  • Katrin Hartwig, Christian Reuter (2021)
    Transparenz im technischen Umgang mit Fake News
    Technik & Mensch (2):9–11.
    [BibTeX] [Abstract] [Download PDF]

    In den letzten Jahren haben sich soziale Medien wie Facebook und Twitter immer mehr zu wichtigen Informationsquellen entwickelt, welche die Verbreitung von nutzergenerierten Inhalten unterstützen. Durch die hohe Verbreitungsgeschwindigkeit, geringen Aufwand und (scheinbare) Anonymität nimmt gleichzeitig die Verbreitung von Fake News und ähnlichen Phänomenen zu. Bereits in den vergangenen Jahren, aber insbesondere auch im Kontext der COVID-19-Pandemie, hat sich gezeigt, dass Fake News und unbeabsichtigte Fehlinformationen ernsthafte und sogar lebensbedrohliche Konsequenzen mit sich bringen können. Technische Unterstützungsmaßnahmen haben insbesondere in sozialen Medien ein großes Potenzial, um Fake News effektiv zu bekämpfen. Hier sind zwei maßgebliche Schritte notwendig: (1) Fake News automatisiert detektieren und (2) nach der erfolgreichen Detektion sinnvolle technische Gegenmaßnahmen implementieren [2].

    @article{hartwig_transparenz_2021,
    title = {Transparenz im technischen {Umgang} mit {Fake} {News}},
    url = {https://peasec.de/paper/2021/2021_HartwigReuter_TransparenzFakeNews_TechnikMenschVDI.pdf},
    abstract = {In den letzten Jahren haben sich soziale Medien wie Facebook und Twitter immer mehr zu wichtigen Informationsquellen entwickelt, welche die Verbreitung von nutzergenerierten Inhalten unterstützen. Durch die hohe Verbreitungsgeschwindigkeit, geringen Aufwand und (scheinbare) Anonymität nimmt gleichzeitig die Verbreitung von Fake News und ähnlichen Phänomenen zu. Bereits in den vergangenen Jahren, aber insbesondere auch im Kontext der COVID-19-Pandemie, hat sich gezeigt, dass Fake News und unbeabsichtigte Fehlinformationen ernsthafte und sogar lebensbedrohliche Konsequenzen mit sich bringen können. Technische Unterstützungsmaßnahmen haben insbesondere in sozialen Medien ein großes Potenzial, um Fake News effektiv zu bekämpfen. Hier sind zwei maßgebliche Schritte notwendig: (1) Fake News automatisiert detektieren und (2) nach der erfolgreichen Detektion sinnvolle technische Gegenmaßnahmen implementieren [2].},
    number = {2},
    journal = {Technik \& Mensch},
    author = {Hartwig, Katrin and Reuter, Christian},
    year = {2021},
    keywords = {Crisis},
    pages = {9--11},
    }

  • Katrin Hartwig, Christian Reuter (2021)
    Fake News technisch begegnen – Detektions- und Behandlungsansätze zur Unterstützung von NutzerInnen
    In: Peter Klimczak, Thomas Zoglauer: Wahrheit und Fake News im postfaktischen Zeitalter. Wiesbaden: Springer Vieweg, 133–150.
    [BibTeX] [Abstract] [Download PDF]

    Die Bedeutung des Umgangs mit Fake News hat sowohl im politischen als auch im sozialen Kontext zugenommen: Während sich bestehende Studien vor allem darauf konzentrieren, wie man gefälschte Nachrichten erkennt und kennzeichnet, fehlen Ansätze zur Unterstützung der NutzerInnen bei der eigenen Einschätzung weitgehend. Dieser Artikel stellt bestehende Black-Box- und White-Box-Ansätze vor und vergleicht Vor- und Nachteile. Dabei zeigen sich White-Box-Ansätze insbesondere als vielversprechend, um gegen Reaktanzen zu wirken, während Black-Box-Ansätze Fake News mit deutlich größerer Genauigkeit detektieren. Vorgestellt wird auch das von uns entwickelte Browser-Plugin TrustyTweet, welches die BenutzerInnen bei der Bewertung von Tweets auf Twitter unterstützt, indem es politisch neutrale und intuitive Warnungen anzeigt, ohne Reaktanz zu erzeugen.

    @incollection{hartwig_fake_2021,
    address = {Wiesbaden},
    series = {ars digitalis},
    title = {Fake {News} technisch begegnen – {Detektions}- und {Behandlungsansätze} zur {Unterstützung} von {NutzerInnen}},
    volume = {3},
    isbn = {978-3-658-32956-3},
    url = {https://peasec.de/paper/2021/2021_HartwigReuter_FakeNewstechnischbegegnen_WahrheitundFake.pdf},
    abstract = {Die Bedeutung des Umgangs mit Fake News hat sowohl im politischen als auch im sozialen Kontext zugenommen: Während sich bestehende Studien vor allem darauf konzentrieren, wie man gefälschte Nachrichten erkennt und kennzeichnet, fehlen Ansätze zur Unterstützung der NutzerInnen bei der eigenen Einschätzung weitgehend. Dieser Artikel stellt bestehende Black-Box- und White-Box-Ansätze vor und vergleicht Vor- und Nachteile. Dabei zeigen sich White-Box-Ansätze insbesondere als vielversprechend, um gegen Reaktanzen zu wirken, während Black-Box-Ansätze Fake News mit deutlich größerer Genauigkeit detektieren. Vorgestellt wird auch das von uns entwickelte Browser-Plugin TrustyTweet, welches die BenutzerInnen bei der Bewertung von Tweets auf Twitter unterstützt, indem es politisch neutrale und intuitive Warnungen anzeigt, ohne Reaktanz zu erzeugen.},
    language = {de},
    booktitle = {Wahrheit und {Fake} {News} im postfaktischen {Zeitalter}},
    publisher = {Springer Vieweg},
    author = {Hartwig, Katrin and Reuter, Christian},
    editor = {Klimczak, Peter and Zoglauer, Thomas},
    year = {2021},
    keywords = {Crisis, HCI, SocialMedia, Peace},
    pages = {133--150},
    }

  • Katrin Hartwig, Christian Reuter (2021)
    Nudge or Restraint: How do People Assess Nudging in Cybersecurity – A Representative Study in Germany
    European Symposium on Usable Security (EuroUSEC), Karlsruhe, Germany. doi:10.1145/3481357.3481514
    [BibTeX] [Abstract] [Download PDF]

    While nudging is a long-established instrument in many contexts, it has more recently emerged to be relevant in cybersecurity as well. For instance, existing research suggests nudges for stronger passwords or safe WiFi connections. However, those nudges are often not as effective as desired. To improve their effectiveness, it is crucial to understand how people assess nudges in cybersecurity, to address potential fears and resulting reactance and to facilitate voluntary compliance. In other contexts, such as the health sector, studies have already thoroughly explored the attitude towards nudging. To address that matter in cybersecurity, we conducted a representative study in Germany (N = 1,012), asking people about their attitude towards nudging in that specific context. Our findings reveal that 64% rated nudging in cybersecurity as helpful, however several participants expected risks such as intentional misguidance, manipulation and data exposure as well.

    @inproceedings{hartwig_nudge_2021,
    address = {Karlsruhe, Germany},
    title = {Nudge or {Restraint}: {How} do {People} {Assess} {Nudging} in {Cybersecurity} - {A} {Representative} {Study} in {Germany}},
    url = {https://peasec.de/paper/2021/2021_HartwigReuter_NudgingCybersecurityRepresentativeStudy_EuroUSEC.pdf},
    doi = {10.1145/3481357.3481514},
    abstract = {While nudging is a long-established instrument in many contexts, it has more recently emerged to be relevant in cybersecurity as well. For instance, existing research suggests nudges for stronger passwords or safe WiFi connections. However, those nudges are often not as effective as desired. To improve their effectiveness, it is crucial to understand how people assess nudges in cybersecurity, to address potential fears and resulting reactance and to facilitate voluntary compliance. In other contexts, such as the health sector, studies have already thoroughly explored the attitude towards nudging. To address that matter in cybersecurity, we conducted a representative study in Germany (N = 1,012), asking people about their attitude towards nudging in that specific context. Our findings reveal that 64\% rated nudging in cybersecurity as helpful, however several participants expected risks such as intentional misguidance, manipulation and data exposure as well.},
    booktitle = {European {Symposium} on {Usable} {Security} ({EuroUSEC})},
    publisher = {ACM},
    author = {Hartwig, Katrin and Reuter, Christian},
    year = {2021},
    keywords = {UsableSec, Security, Projekt-CROSSING, Projekt-ATHENE-SecUrban},
    pages = {141--150},
    }

  • Katrin Hartwig, Atlas Englisch, Jan Pelle Thomson, Christian Reuter (2021)
    Finding Secret Treasure? Improving Memorized Secrets Through Gamification
    European Symposium on Usable Security (EuroUSEC) Karlsruhe, Germany. doi:10.1145/3481357.3481509
    [BibTeX] [Abstract] [Download PDF]

    Users tend to bypass systems that are designed to increase their personal security and privacy while limiting their perceived freedom. Nudges present a possible solution to this problem, offering security benefits without taking away perceived freedom. We have identified a lack of research comparing concrete implementations of nudging concepts in an emulated real-world scenario to assess their relative value as a nudge. Comparing multiple nudging implementations in an emulated real-world scenario including a novel avatar nudge with gamification elements, this publication discusses the advantages of nudging for stronger user-created passwords regarding efficacy, usability, and memorability. We investigated the effect of gamification in nudges, performing two studies (N1 = 16, N2 = 1,000) to refine and evaluate implementations of current and novel nudging concepts. Our research found a gamified nudge, which integrates a personalizable avatar guide into the registration process, to perform less effectively than state-of-the-art nudges, independently of participants’ gaming frequency.

    @inproceedings{hartwig_finding_2021,
    address = {Karlsruhe, Germany},
    title = {Finding {Secret} {Treasure}? {Improving} {Memorized} {Secrets} {Through} {Gamification}},
    url = {https://peasec.de/paper/2021/2021_HartwigEnglischThomsonReuter_MemorizedSecretsThroughGamification_EuroUSEC.pdf},
    doi = {10.1145/3481357.3481509},
    abstract = {Users tend to bypass systems that are designed to increase their personal security and privacy while limiting their perceived freedom. Nudges present a possible solution to this problem, offering security benefits without taking away perceived freedom. We have identified a lack of research comparing concrete implementations of nudging concepts in an emulated real-world scenario to assess their relative value as a nudge. Comparing multiple nudging implementations in an emulated real-world scenario including a novel avatar nudge with gamification elements, this publication discusses the advantages of nudging for stronger user-created passwords regarding efficacy, usability, and memorability. We investigated the effect of gamification in nudges, performing two studies (N1 = 16, N2 = 1,000) to refine and evaluate implementations of current and novel nudging concepts. Our research found a gamified nudge, which integrates a personalizable avatar guide into the registration process, to perform less effectively than state-of-the-art nudges, independently of participants’ gaming frequency.},
    booktitle = {European {Symposium} on {Usable} {Security} ({EuroUSEC})},
    publisher = {ACM},
    author = {Hartwig, Katrin and Englisch, Atlas and Thomson, Jan Pelle and Reuter, Christian},
    year = {2021},
    keywords = {Student, UsableSec, Security, Projekt-CROSSING, Projekt-ATHENE-SecUrban},
    pages = {105--117},
    }

    2019

  • Katrin Hartwig, Christian Reuter (2019)
    Fighting Misinformation on Twitter: The Plugin based approach TrustyTweet
    SCIENCE PEACE SECURITY ’19 – Proceedings of the Interdisciplinary Conference on Technical Peace and Security Research Darmstadt, Germany.
    [BibTeX] [Abstract] [Download PDF]

    Finding a responsible way to address fake news on social media has become an urgent matter both in political and social contexts. Existing studies focus mainly on how to detect and label fake news. However, approaches to assist users in making their own assessments are largely missing. In this article we present a study on how an indicator-based white-box approach can support Twitter-users in assessing tweets. In a first step, we identified indicators for fake news that have shown to be promising in previous studies and that are suitable for our idea of a white-box approach. Building on that basis of indicators we then designed and implemented the browser-plugin TrustyTweet, which aims to assist users on Twitter in assessing tweets by showing politically neutral and intuitive warnings without creating reactance. Finally, we present the findings of our evaluations carried out with a total of 27 participants, which result in further design implications for approaches to assist users in dealing with fake news.

    @inproceedings{hartwig_fighting_2019,
    address = {Darmstadt, Germany},
    title = {Fighting {Misinformation} on {Twitter}: {The} {Plugin} based approach {TrustyTweet}},
    url = {https://tuprints.ulb.tu-darmstadt.de/id/eprint/9164},
    abstract = {Finding a responsible way to address fake news on social media has become an urgent matter both in political and social contexts. Existing studies focus mainly on how to detect and label fake news. However, approaches to assist users in making their own assessments are largely missing. In this article we present a study on how an indicator-based white-box approach can support Twitter-users in assessing tweets. In a first step, we identified indicators for fake news that have shown to be promising in previous studies and that are suitable for our idea of a white-box approach. Building on that basis of indicators we then designed and implemented the browser-plugin TrustyTweet, which aims to assist users on Twitter in assessing tweets by showing politically neutral and intuitive warnings without creating reactance. Finally, we present the findings of our evaluations carried out with a total of 27 participants, which result in further design implications for approaches to assist users in dealing with fake news.},
    booktitle = {{SCIENCE} {PEACE} {SECURITY} '19 - {Proceedings} of the {Interdisciplinary} {Conference} on {Technical} {Peace} and {Security} {Research}},
    publisher = {TUprints},
    author = {Hartwig, Katrin and Reuter, Christian},
    editor = {Reuter, Christian and Altmann, Jürgen and Göttsche, Malte and Himmel, Mirko},
    year = {2019},
    keywords = {Crisis, HCI, SocialMedia, Peace},
    pages = {67--69},
    }

  • Katrin Hartwig, Christian Reuter (2019)
    TrustyTweet: An Indicator-based Browser-Plugin to Assist Users in Dealing with Fake News on Twitter
    Proceedings of the International Conference on Wirtschaftsinformatik (WI) Siegen, Germany.
    [BibTeX] [Abstract] [Download PDF]

    The importance of dealing with fake news on social media has increased both in political and social contexts. While existing studies focus mainly on how to detect and label fake news, approaches to assist users in making their own assessments are largely missing. This article presents a study on how Twitter-users’ assessments can be supported by an indicator-based white-box approach. First, we gathered potential indicators for fake news that have proven to be promising in previous studies and that fit our idea of a white-box approach. Based on those indicators we then designed and implemented the browser-plugin TrustyTweet, which assists users on Twitter in assessing tweets by showing politically neutral and intuitive warnings without creating reactance. Finally, we present the findings of our evaluations with a total of 27 participants, which lead to further design implications for approaches to assist users in dealing with fake news.

    @inproceedings{hartwig_trustytweet_2019,
    address = {Siegen, Germany},
    title = {{TrustyTweet}: {An} {Indicator}-based {Browser}-{Plugin} to {Assist} {Users} in {Dealing} with {Fake} {News} on {Twitter}},
    url = {http://www.peasec.de/paper/2019/2019_HartwigReuter_TrustyTweet_WI.pdf},
    abstract = {The importance of dealing with fake news on social media has increased both in political and social contexts. While existing studies focus mainly on how to detect and label fake news, approaches to assist users in making their own assessments are largely missing. This article presents a study on how Twitter-users' assessments can be supported by an indicator-based white-box approach. First, we gathered potential indicators for fake news that have proven to be promising in previous studies and that fit our idea of a white-box approach. Based on those indicators we then designed and implemented the browser-plugin TrustyTweet, which assists users on Twitter in assessing tweets by showing politically neutral and intuitive warnings without creating reactance. Finally, we present the findings of our evaluations with a total of 27 participants, which lead to further design implications for approaches to assist users in dealing with fake news.},
    booktitle = {Proceedings of the {International} {Conference} on {Wirtschaftsinformatik} ({WI})},
    publisher = {AIS},
    author = {Hartwig, Katrin and Reuter, Christian},
    year = {2019},
    keywords = {Crisis, HCI, SocialMedia, Student, Ranking-CORE-C, Ranking-VHB-C, Ranking-WKWI-A, Peace, Projekt-CRISP, Projekt-ATHENE-FANCY},
    pages = {1858--1869},
    }

  • Christian Reuter, Katrin Hartwig, Jan Kirchner, Noah Schlegel (2019)
    Fake News Perception in Germany: A Representative Study of People’s Attitudes and Approaches to Counteract Disinformation
    Proceedings of the International Conference on Wirtschaftsinformatik (WI) Siegen, Germany.
    [BibTeX] [Abstract] [Download PDF]

    Fake news has become an important topic in our social and political environment. While research is coming up for the U.S. and European countries, many aspects remain uncovered as long as existing work only marginally investigates people’s attitudes towards fake news. In this work, we present the results of a representative study (N=1023) in Germany asking participants about their attitudes towards fake news and approaches to counteract disinformation. More than 80% of the participants agree that fake news poses a threat. 78% see fake news as harming democracy. Even though about half of the respondents (48%) have noticed fake news, most participants stated to have never liked, shared or commented on fake news. Regarding demographic factors, our findings support the view of younger and relatively educated people being more informed about fake news. Concerning ideological motives, the evaluation suggests left-wing or liberal respondents to be more critical of fake news.

    @inproceedings{reuter_fake_2019-2,
    address = {Siegen, Germany},
    title = {Fake {News} {Perception} in {Germany}: {A} {Representative} {Study} of {People}'s {Attitudes} and {Approaches} to {Counteract} {Disinformation}},
    url = {http://www.peasec.de/paper/2019/2019_ReuterHartwigKirchnerSchlegel_FakeNewsPerceptionGermany_WI.pdf},
    abstract = {Fake news has become an important topic in our social and political environment. While research is coming up for the U.S. and European countries, many aspects remain uncovered as long as existing work only marginally investigates people's attitudes towards fake news. In this work, we present the results of a representative study (N=1023) in Germany asking participants about their attitudes towards fake news and approaches to counteract disinformation. More than 80\% of the participants agree that fake news poses a threat. 78\% see fake news as harming democracy. Even though about half of the respondents (48\%) have noticed fake news, most participants stated to have never liked, shared or commented on fake news. Regarding demographic factors, our findings support the view of younger and relatively educated people being more informed about fake news. Concerning ideological motives, the evaluation suggests left-wing or liberal respondents to be more critical of fake news.},
    booktitle = {Proceedings of the {International} {Conference} on {Wirtschaftsinformatik} ({WI})},
    publisher = {AIS},
    author = {Reuter, Christian and Hartwig, Katrin and Kirchner, Jan and Schlegel, Noah},
    year = {2019},
    keywords = {Crisis, HCI, SocialMedia, Student, Ranking-CORE-C, Ranking-VHB-C, Ranking-WKWI-A, Peace},
    pages = {1069--1083},
    }

    Further publications:

    Laura, C. O., Hartwig, K., Distergoft, A., Hoffmann, T., Scheckenbach, K., Brüsseler, M., & Wesarg, S. (2021, February). Automatic segmentation of the structures in the nasal cavity and the ethmoidal sinus for the quantification of nasal septal deviations. In Medical Imaging 2021: Computer-Aided Diagnosis (Vol. 11597, p. 115972J). International Society for Optics and Photonics.

    Oyarzun, C. L., Hartwig, K., Hertlein, A. S., Jung, F., Burmeister, J., Kohlhammer, J., … & Sauter, G. (2020). Web-based Prostate Visualization Tool. Current Directions in Biomedical Engineering, 6(3), 563-566.