Wissenschaftliche Mitarbeiterin / Post-Doktorandin
Kontakt: +49 (0) 6151 / 1620944 | hartwig(at)peasec.tu-darmstadt.de
Technische Universität Darmstadt, Fachbereich Informatik, Wissenschaft und Technik für Frieden und Sicherheit (PEASEC), Pankratiusstraße 2, 64289 Darmstadt, Raum 114
Online-Profile: ORCID | Google Scholar
DE
Dr.-Ing. Katrin Hartwig ist wissenschaftliche Mitarbeiterin und Post-Doktorandin am Lehrstuhl Wissenschaft und Technik für Frieden und Sicherheit (PEASEC) im Fachbereich Informatik der Technischen Universität Darmstadt. Ihre wissenschaftlichen Interessen liegen an der Schnittstelle von Informatik und Psychologie, besonders im Bereich Desinformation, Social Media, Usable Security und Mensch-Computer-Interaktion. Sie wirkt maßgeblich am BMBF-Projekt NEBULA (Nutzerzentrierte KI-basierte Erkennung von Fake News und Fehlinformationen) mit. Im Jahr 2024 promovierte sie bei PEASEC (zum Dr.-Ing.) zum Thema „Navigating Misinformation: User-Centered Design and Evaluation of Indicator-Based Digital Interventions“.
Sie studierte Informatik (M.Sc.) und Psychologie in IT (B.Sc.) an der Technischen Universität Darmstadt. Neben dem Studium arbeitete sie als Softwareentwicklerin in der medizinischen Bildverarbeitung. In ihrer Masterarbeit befasste sie sich mit Aspekten der Usable Security im Rahmen des Sonderforschungsbereichs CROSSING.
EN
Dr.-Ing. Katrin Hartwig is a research associate and post-doctoral researcher at the Chair of Science and Technology for Peace and Security (PEASEC) in the Department of Computer Science at the Technical University of Darmstadt. Her research interests focus on the intersection of computer science and psychology, particularly in disinformation, social media, usable security, and human-computer interaction. She plays a key role in the BMBF project NEBULA (Nutzerzentrierte KI-basierte Erkennung von Fake News und Fehlinformationen; user-centered AI-based detection of fake news and misinformation). In 2024, she completed her doctorate (Dr.-Ing.) at PEASEC on the topic of "Navigating Misinformation: User-Centered Design and Evaluation of Indicator-Based Digital Interventions".
Katrin studied computer science (M.Sc.) and Psychology in IT (B.Sc.) at the Technical University of Darmstadt. Alongside her studies, she worked as a software developer in medical image processing. In her master's thesis, she focused on aspects of usable security within the Collaborative Research Center CROSSING.
Publikationen
2025
[BibTeX] [Abstract]
The ongoing challenge of misinformation on social media motivates ongoing efforts to find effective countermeasures. In this study, we evaluated the potential of personalised nudging to reduce the sharing of misinformation on social media, as personalised support has been successfully applied in other areas of critical information handling. In an online experiment (N = 396) exposing users to social media posts, we assessed the degree of misinformation sharing between groups receiving (1) no nudges, (2) non-personalised nudges, and (3) personalised nudges. Personalisation was based on three psychometric dimensions – general decision-making style, consideration of future consequences, need for cognition – to assign the most appropriate nudge from a pool of five nudges. The results showed significant differences (p < .05) between all three groups, with the personalised nudge group sharing the least misinformation. Detailed analyses at the nudge level revealed that one nudge was universally effective and two nudges were effective only in their personalised form. The results generally confirm the potential of personalisation, although the effect is limited in scope. These findings shed light on the nuanced results of nudging studies, highlight the benefits of personalisation, and raise ethical considerations regarding the privacy implications of personalisation and those inherent in nudges.
@article{biselli_mitigating_2025,
title = {Mitigating {Misinformation} {Sharing} on {Social} {Media} through {Personalised} {Nudging}},
abstract = {The ongoing challenge of misinformation on social media motivates ongoing efforts to find effective countermeasures.
In this study, we evaluated the potential of personalised nudging to reduce the sharing of misinformation on social media, as personalised support has been successfully applied in other areas of critical information handling.
In an online experiment (N = 396) exposing users to social media posts, we assessed the degree of misinformation sharing between groups receiving (1) no nudges, (2) non-personalised nudges, and (3) personalised nudges. Personalisation was based on three psychometric dimensions - general decision-making style, consideration of future consequences, need for cognition - to assign the most appropriate nudge from a pool of five nudges.
The results showed significant differences (p {\textless} .05) between all three groups, with the personalised nudge group sharing the least misinformation. Detailed analyses at the nudge level revealed that one nudge was universally effective and two nudges were effective only in their personalised form.
The results generally confirm the potential of personalisation, although the effect is limited in scope.
These findings shed light on the nuanced results of nudging studies, highlight the benefits of personalisation, and raise ethical considerations regarding the privacy implications of personalisation and those inherent in nudges.},
journal = {Proceedings of the ACM: Human Computer Interaction (PACM): Computer-Supported Cooperative Work and Social Computing},
author = {Biselli, Tom and Hartwig, Katrin and Reuter, Christian},
year = {2025},
keywords = {A-Paper, Projekt-ATHENE-PriVis, Projekt-NEBULA, Ranking-CORE-A},
}
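Illustration (not from the paper): a minimal sketch of how a psychometry-based assignment rule like the one described in the abstract above could look, assuming a hypothetical pool of five nudges and hypothetical scale cut-offs on the three named dimensions.

from dataclasses import dataclass

@dataclass
class PsychometricProfile:
    decision_making_style: float   # general decision-making style, e.g. on a 1-5 scale
    future_consequences: float     # consideration of future consequences
    need_for_cognition: float

# Hypothetical pool of five nudges; the paper's actual nudges are not named here.
NUDGE_POOL = ["accuracy_prompt", "social_norm", "friction_delay",
              "source_highlight", "reflection_question"]

def assign_nudge(p: PsychometricProfile) -> str:
    """Assign the nudge assumed to fit the user's most pronounced trait."""
    if p.need_for_cognition >= 4.0:
        return "reflection_question"  # analytic users: prompt deliberation
    if p.future_consequences >= 4.0:
        return "friction_delay"       # future-oriented users: slow down sharing
    if p.decision_making_style >= 4.0:
        return "source_highlight"     # rational deciders: expose source cues
    return "accuracy_prompt"          # default for all other profiles

print(assign_nudge(PsychometricProfile(3.2, 4.5, 2.8)))  # -> friction_delay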
[BibTeX]
@book{hartwig_navigating_2025,
address = {Wiesbaden, Germany},
title = {Navigating {Misinformation}: {User}-{Centered} {Design} and {Evaluation} of {Indicator}-{Based} {Digital} {Interventions}},
publisher = {Springer Vieweg},
author = {Hartwig, Katrin},
year = {2025},
keywords = {Crisis, HCI, Projekt-NEBULA, DissPublisher},
}
2024
[BibTeX] [Abstract] [Download PDF]
Misinformation poses a recurrent challenge for video-sharing platforms (VSPs) like TikTok. Obtaining user perspectives on digital interventions addressing the need for transparency (e.g., through indicators) is essential. This article offers a thorough examination of the comprehensibility, usefulness, and limitations of an indicator-based intervention from an adolescents’ perspective. This study (N = 39; aged 13-16 years) comprised two qualitative steps: (1) focus group discussions and (2) think-aloud sessions, where participants engaged with a smartphone-app for TikTok. The results offer new insights into how video-based indicators can assist adolescents’ assessments. The intervention received positive feedback, especially for its transparency, and could be applicable to new content. This paper sheds light on how adolescents are expected to be experts while also being prone to video-based misinformation, with limited understanding of an intervention’s limitations. By adopting teenagers’ perspectives, we contribute to HCI research and provide new insights into the chances and limitations of interventions for VSPs.
@inproceedings{hartwig_adolescents_2024,
address = {New York, NY, USA},
series = {{CHI} '24},
title = {From {Adolescents}' {Eyes}: {Assessing} an {Indicator}-{Based} {Intervention} to {Combat} {Misinformation} on {TikTok}},
isbn = {9798400703300},
url = {https://doi.org/10.1145/3613904.3642264},
doi = {10.1145/3613904.3642264},
abstract = {Misinformation poses a recurrent challenge for video-sharing platforms (VSPs) like TikTok. Obtaining user perspectives on digital interventions addressing the need for transparency (e.g., through indicators) is essential. This article offers a thorough examination of the comprehensibility, usefulness, and limitations of an indicator-based intervention from an adolescents’ perspective. This study (𝑁 = 39; aged 13-16 years) comprised two qualitative steps: (1) focus group discussions and (2) think-aloud sessions, where participants
engaged with a smartphone-app for TikTok. The results offer new insights into how video-based indicators can assist adolescents’ assessments. The intervention received positive feedback, especially for its transparency, and could be applicable to new content. This paper sheds light on how adolescents are expected to be experts while also being prone to video-based misinformation, with limited understanding of an intervention’s limitations. By adopting
teenagers’ perspectives, we contribute to HCI research and provide new insights into the chances and limitations of interventions for VSPs.},
booktitle = {Proceedings of the {Conference} on {Human} {Factors} in {Computing} {Systems} ({CHI})},
publisher = {Association for Computing Machinery},
author = {Hartwig, Katrin and Biselli, Tom and Schneider, Franziska and Reuter, Christian},
year = {2024},
keywords = {Security, UsableSec, HCI, A-Paper, Ranking-CORE-A*, Selected, AuswahlCrisis, Projekt-ATHENE-PriVis, Projekt-NEBULA},
}
[BibTeX] [Abstract] [Download PDF]
Misinformation presents a challenge to democracies, particularly in times of crisis. One way in which misinformation is spread is through voice messages sent via messenger groups, which enable members to share information on a larger scale. Gaining user perspectives on digital misinformation interventions as countermeasure after detection is crucial. In this paper, we extract potential features of misinformation in voice messages from literature, implement them within a program that automatically processes voice messages, and evaluate their perceived usefulness and comprehensibility as user-centered indicators. We propose 35 features extracted from audio files at the character, word, sentence, audio and creator levels to assist (1) private individuals in conducting credibility assessments, (2) government agencies faced with data overload during crises, and (3) researchers seeking to gather features for automatic detection approaches. We conducted a think-aloud study with laypersons (N = 20) to provide initial insight into how individuals autonomously assess the credibility of voice messages, as well as which automatically extracted features they find to be clear and convincing indicators of misinformation. Our study provides qualitative and quantitative insights into valuable indicators, particularly when they relate directly to the content or its creator, and uncovers challenges in user interface design.
@article{hartwig_navigating_2024,
title = {Navigating {Misinformation} in {Voice} {Messages}: {Identification} of {User}-{Centered} {Features} for {Digital} {Interventions}},
issn = {1944-4079},
url = {https://peasec.de/paper/2024/2024_HartwigSandlerReuter_NavigatingMisinfoVoiceMessages_RiskHazards.pdf},
doi = {10.1002/rhc3.12296},
abstract = {Misinformation presents a challenge to democracies, particularly in times of crisis. One way in which misinformation is spread is through voice messages sent via messenger groups, which enable members to share information on a larger scale. Gaining user perspectives on digital misinformation interventions as countermeasure after detection is crucial. In this paper, we extract potential features of misinformation in voice messages from literature, implement them within a program that automatically processes voice messages, and evaluate their perceived usefulness and comprehensibility as user-centered indicators.We propose 35 features extracted from audio files at the character, word, sentence, audio and creator levels to assist (1) private individuals in conducting credibility assessments, (2) government agencies faced with data overload during crises, and (3) researchers seeking to gather features for automatic detection approaches. We conducted a think-aloud study with laypersons (N = 20) to provide initial insight into how individuals autonomously assess the credibility of voice messages, as well as which automatically extracted features they find to be clear and convincing indicators of misinformation. Our study provides qualitative and quantitative insights into valuable indicators, particularly when they relate directly to the content or its creator, and uncovers challenges in user interface design.},
journal = {Risk, Hazards, \& Crisis in Public Policy (RHCPP)},
author = {Hartwig, Katrin and Sandler, Ruslan and Reuter, Christian},
year = {2024},
note = {Publisher: John Wiley \& Sons, Ltd},
keywords = {Student, UsableSec, Crisis, HCI, Projekt-CYLENCE, A-Paper, Projekt-NEBULA, Projekt-ATHENE, Ranking-ImpactFactor, SocialMedia, Cyberwar},
}
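Illustration (not from the paper): a minimal sketch of feature extraction at the character, word, sentence, and audio levels from a transcript and a WAV file; the concrete features and names are hypothetical stand-ins for the paper's 35 features.

import re
import wave

def text_features(transcript: str) -> dict:
    """A few illustrative character-, word-, and sentence-level features."""
    words = transcript.split()
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    return {
        "exclamation_marks": transcript.count("!"),                          # character level
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),  # word level
        "avg_sentence_length": len(words) / max(len(sentences), 1),          # sentence level
    }

def audio_features(path: str) -> dict:
    """An illustrative audio-level feature: duration of the voice message."""
    with wave.open(path, "rb") as wav:
        return {"duration_seconds": wav.getnframes() / wav.getframerate()}

print(text_features("Wake up! They are hiding the truth. Share this now!"))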
[BibTeX] [Abstract] [Download PDF]
Misinformation represents a key challenge for society. User-centered misinformation interventions as digital countermeasures that exert a direct influence on users represent a promising means to deal with the large amounts of information available. While an extensive body of research on this topic exists, researchers are confronted with a diverse research landscape spanning multiple disciplines. This review systematizes the landscape of user-centered misinformation interventions to facilitate knowledge transfer, identify trends, and enable informed decision-making. Over 3,700 scholarly publications were screened and a systematic literature review (N=108) was conducted. A taxonomy was derived regarding intervention design (e.g., binary label), user interaction (active or passive), and timing (e.g., post exposure to misinformation). We provide a structured overview of approaches across multiple disciplines, and derive six overarching challenges for future research.
@article{hartwig_landscape_2024,
title = {The {Landscape} of {User}-centered {Misinformation} {Interventions} – {A} {Systematic} {Literature} {Review}},
volume = {56},
issn = {0360-0300},
url = {https://peasec.de/paper/2024/2024_HartwigDoellReuter_LandscapeUserCentredMisinfoInterventions_CSUR.pdf},
doi = {10.1145/3674724},
abstract = {Misinformation represent a key challenge for society. User-centered misinformation interventions as digital countermeasures that exert a direct influence on users represent a promising means to deal with the large amounts of information available. While an extensive body of research on this topic exists, researchers are confronted with a diverse research landscape spanning multiple disciplines. This review systematizes the landscape of user-centered misinformation interventions to facilitate knowledge transfer, identify trends, and enable informed decision-making. Over 3,700 scholarly publications were screened and a systematic literature review (N=108) was conducted. A taxonomy was derived regarding intervention design (e.g., binary label), user interaction (active or passive), and timing (e.g., post exposure to misinformation). We provide a structured overview of approaches across multiple disciplines, and derive six overarching challenges for future research.},
number = {11},
journal = {ACM Computing Surveys (CSUR)},
author = {Hartwig, Katrin and Doell, Frederic and Reuter, Christian},
month = jul,
year = {2024},
keywords = {Peace, Student, HCI, A-Paper, Ranking-CORE-A*, Selected, Projekt-NEBULA, Ranking-ImpactFactor},
}
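Illustration (not from the paper): the review's three taxonomy dimensions pictured as a simple typed record; only the dimension names and the quoted examples ("binary label", "post exposure") come from the abstract, everything else is hypothetical.

from dataclasses import dataclass
from enum import Enum

class Interaction(Enum):          # dimension named in the abstract
    ACTIVE = "active"
    PASSIVE = "passive"

@dataclass
class MisinfoIntervention:
    design: str                   # e.g. "binary label"
    interaction: Interaction
    timing: str                   # e.g. "post exposure to misinformation"

example = MisinfoIntervention(design="binary label",
                              interaction=Interaction.PASSIVE,
                              timing="post exposure to misinformation")
print(example)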
[BibTeX] [Abstract] [Download PDF]
Die Verbreitung falscher und irreführender Informationen – insbesondere über soziale Medien wie TikTok, Twitter, Facebook und Co. – nimmt eine immer größer werdende Relevanz in sicherheitsrelevanten Situationen ein. Gerade im Kontext des russischen Angriffskrieges gegen die Ukraine spielen derartige Plattformen eine besondere Rolle, indem gefälschte Videos oder Inhalte mit falscher zeitlicher Einordnung in kürzester Zeit viral gehen und somit das Potential für Verunsicherung und Meinungsmanipulation bergen. Problematisch sind dabei nicht nur absichtliche, sondern auch unabsichtlich irreführende Informationen. Ziel des interdisziplinären BMBF-Projekts NEBULA (Laufzeit: 1.7.2022-30.6.2025) ist die transparente, KI-basierte Erkennung von Falsch- und Fehlinformationen in sicherheitsrelevanten Situationen sowie die zielgruppengerechte Darstellung der Detektionsergebnisse zur Förderung der Medienkompetenz. Die nutzerzentrierten Ansätze adressieren dabei sowohl Behörden und Organisationen mit Sicherheitsaufgaben (BOS) in der akkuraten Lagebilderstellung und Krisenkommunikation als auch vulnerable Personengruppen durch partizipative Entwicklung von technischen Unterstützungswerkzeugen. Innerhalb des Projekts entstehen Demonstratoren in Form von Smartphone-Apps, Browser-Plugins und Webanwendungen, um Einzelpersonen und Behörden dazu zu befähigen, Falsch- und Fehlinformationen eigenständig kritisch zu reflektieren und sich Umgangsstrategien zur Informationseinordnung anzueignen.
@inproceedings{hartwig_nebula_2024,
address = {München},
title = {{NEBULA}: {Nutzerzentrierte} {KI}-basierte {Erkennung} von {Fake} {News} und {Fehlinformationen}},
url = {https://peasec.de/paper/2024/2024_HartwigBiselliSchneiderReuter_NEBULA_BfSTagungsband.pdf},
abstract = {Die Verbreitung falscher und irreführender Informationen – insbesondere über soziale Medien wie TikTok,
Twitter, Facebook und Co. – nehmen eine immer größer werdende Relevanz in sicherheitsrelevanten
Situationen ein. Gerade im Kontext des russischen Angriffskrieges gegen die Ukraine spielen derartige
Plattformen eine besondere Rolle, indem gefälschte Videos oder Inhalte mit falscher zeitlicher Einordnung
in kürzester Zeit viral gehen und somit das Potential für Verunsicherung und Meinungsmanipulation
bergen. Problematisch sind dabei nicht nur absichtliche, sondern auch unabsichtlich irreführende
Informationen.
Ziel des interdisziplinären BMBF-Projekts NEBULA (Laufzeit: 1.7.2022-30.6.2025) ist die transparente, KI-
basierte Erkennung von Falsch- und Fehlinformationen in sicherheitsrelevanten Situationen sowie die
zielgruppengerechte Darstellung der Detektionsergebnisse zur Förderung der Medienkompetenz. Die
nutzerzentrierten Ansätze adressieren dabei sowohl Behörden und Organisationen mit Sicherheitsaufgaben
(BOS) in der akkuraten Lagebilderstellung und Krisenkommunikation, als auch vulnerable Personengruppen
durch partizipative Entwicklung von technischen Unterstützungswerkzeugen. Innerhalb des Projekts
entstehen Demonstratoren in Form von Smartphone-Apps, Browser-Plugins und Webanwendungen, um
Einzelpersonen und Behörden dazu zu befähigen, Falsch- und Fehlinformationen eigenständig kritisch zu
reflektieren und Umgangsstrategien zur Informationseinordnung anzueignen.},
booktitle = {Aktuelle {Themen} und {Herausforderungen} behördlicher {Risikokommunikation} - {Tagungsband}},
publisher = {Bundesamt für Strahlenschutz},
author = {Hartwig, Katrin and Biselli, Tom and Schneider, Franziska and Reuter, Christian},
year = {2024},
keywords = {Crisis, Projekt-NEBULA},
}
[BibTeX] [Abstract] [Download PDF]
Recent crises like the COVID-19 pandemic provoked an increasing appearance of misleading information, emphasizing the need for effective user-centered countermeasures as an important field in HCI research. This work investigates how content-specific user-centered indicators can contribute to an informed approach to misleading information. In a threefold study, we conducted an in-depth content analysis of 2,382 German tweets on Twitter (now X) to identify topical (e.g., 5G), formal (e.g., links), and rhetorical (e.g., sarcasm) characteristics through manual coding, followed by a qualitative online survey to evaluate which indicators users already use autonomously to assess a tweet’s credibility. Subsequently, in a think-aloud study participants qualitatively evaluated the identified indicators in terms of perceived comprehensibility and usefulness. While a number of indicators were found to be particularly comprehensible and useful (e.g., claim for absolute truth and rhetorical questions), our findings reveal limitations of indicator-based interventions, particularly for people with entrenched conspiracy theory views. We derive four implications for digitally supporting users in dealing with misleading information, especially during crises.
@article{hartwig_misleading_2024,
title = {Misleading {Information} in {Crises}: {Exploring} {Content}-specific {Indicators} on {Twitter} from a {User} {Perspective}},
issn = {0144-929X},
url = {https://doi.org/10.1080/0144929X.2024.2373166},
doi = {10.1080/0144929X.2024.2373166},
abstract = {Recent crises like the COVID-19 pandemic provoked an increasing appearance of misleading information,
emphasizing the need for effective user-centered countermeasures as an important field in HCI research. This
work investigates how content-specific user-centered indicators can contribute to an informed approach to
misleading information. In a threefold study, we conducted an in-depth content analysis of 2,382 German
tweets on Twitter (now X) to identify topical (e.g., 5G), formal (e.g., links), and rhetorical (e.g., sarcasm)
characteristics through manual coding, followed by a qualitative online survey to evaluate which indicators
users already use autonomously to assess a tweet’s credibility. Subsequently, in a think-aloud study participants
qualitatively evaluated the identified indicators in terms of perceived comprehensibility and usefulness. While
a number of indicators were found to be particularly comprehensible and useful (e.g., claim for absolute truth
and rhetorical questions), our findings reveal limitations of indicator-based interventions, particularly for
people with entrenched conspiracy theory views. We derive four implications for digitally supporting users in
dealing with misleading information, especially during crises.},
journal = {Behaviour \& Information Technology (BIT)},
author = {Hartwig, Katrin and Schmid, Stefka and Biselli, Tom and Pleil, Helene and Reuter, Christian},
year = {2024},
keywords = {Crisis, HCI, A-Paper, Projekt-ATHENE-PriVis, Projekt-NEBULA, Ranking-CORE-A, Ranking-ImpactFactor},
pages = {1--34},
}
[BibTeX]
@book{hartwig_navigating_2024-1,
address = {Darmstadt, Germany},
title = {Navigating {Misinformation}: {User}-{Centered} {Design} and {Evaluation} of {Indicator}-{Based} {Digital} {Interventions}},
publisher = {Dissertation (Dr.-Ing.), Department of Computer Science, Technische Universität Darmstadt},
author = {Hartwig, Katrin},
year = {2024},
keywords = {Crisis, HCI, Projekt-NEBULA, Dissertation},
}
[BibTeX] [Abstract] [Download PDF]
Fortschritte in Wissenschaft und Technik, besonders der Informatik, spielen im Kontext von Frieden und Sicherheit eine essenzielle Rolle. Der Lehrstuhl Wissenschaft und Technik für Frieden und Sicherheit (PEASEC) an der Technischen Universität Darmstadt verbindet Informatik mit Friedens-, Konflikt- und Sicherheitsforschung.
@techreport{reuter_informatik_2024,
address = {FIfF-Kommunikation},
title = {Informatik für den {Frieden}: {Perspektive} von {PEASEC} zu 40 {Jahren} {FIfF}},
url = {https://peasec.de/paper/2024/2024_Reuteretal_InformatikFuerFrieden_fiff.pdf},
abstract = {Fortschritte in Wissenschaft und Technik, besonders der Informatik, spielen im Kontext von Frieden und Sicherheit eine essenzielle Rolle. Der Lehrstuhl Wissenschaft und Technik für Frieden und Sicherheit (PEASEC) an der Technischen Universität Darmstadt verbindet Informatik mit Friedens-, Konflikt- und Sicherheitsforschung.},
author = {Reuter, Christian and Franken, Jonas and Reinhold, Thomas and Kuehn, Philipp and Kaufhold, Marc-André and Riebe, Thea and Hartwig, Katrin and Biselli, Tom and Schmid, Stefka and Guntrum, Laura and Haesler, Steffen},
year = {2024},
keywords = {Peace, Security},
}
[BibTeX] [Abstract] [Download PDF]
In crises such as the COVID-19 pandemic, it is crucial to support users when dealing with social media content. Considering digital resilience, we propose a web app based on Social Network Analysis (SNA) to provide an overview of potentially misleading vs. non-misleading content on Twitter, which can be explored by users and enable foundational learning. The latter aims at systematically identifying thematic patterns which may be associated with misleading information. Additionally, it entails reflecting on indicators of misleading tweets which are proposed to approach classification of tweets. Paying special attention to non-expert users of social media, we conducted a two-step Think Aloud study for evaluation. While participants valued the opportunity to generate new knowledge and the diversity of the application, qualities such as equality and rapidity may be further improved. However, learning effects outweighed individual costs as all users were able to shift focus onto relevant features, such as hashtags, while readily pointing out content characteristics. Our design artifact connects to learning-oriented interventions regarding the spread of misleading information and tackles information overload by a SNA-based plug-in.
@article{schmid_digital_2024,
title = {Digital {Resilience} in {Dealing} with {Misinformation} on {Social} {Media} during {COVID}-19: {A} {Web} {Application} to {Assist} {Users} in {Crises}},
volume = {26},
issn = {1572-9419},
url = {https://doi.org/10.1007/s10796-022-10347-5},
doi = {10.1007/s10796-022-10347-5},
abstract = {In crises such as the COVID-19 pandemic, it is crucial to support users when dealing with social media content. Considering digital resilience, we propose a web app based on Social Network Analysis (SNA) to provide an overview of potentially misleading vs. non-misleading content on Twitter, which can be explored by users and enable foundational learning. The latter aims at systematically identifying thematic patterns which may be associated with misleading information. Additionally, it entails reflecting on indicators of misleading tweets which are proposed to approach classification of tweets. Paying special attention to non-expert users of social media, we conducted a two-step Think Aloud study for evaluation. While participants valued the opportunity to generate new knowledge and the diversity of the application, qualities such as equality and rapidity may be further improved. However, learning effects outweighed individual costs as all users were able to shift focus onto relevant features, such as hashtags, while readily pointing out content characteristics. Our design artifact connects to learning-oriented interventions regarding the spread of misleading information and tackles information overload by a SNA-based plug-in.},
number = {2},
journal = {Information Systems Frontiers (ISF)},
author = {Schmid, Stefka and Hartwig, Katrin and Cieslinski, Robert and Reuter, Christian},
month = apr,
year = {2024},
keywords = {Projekt-TraCe, Student, Crisis, A-Paper, Projekt-NEBULA},
pages = {477--499},
}
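Illustration (not from the paper): a minimal sketch of the SNA idea using networkx — build a bipartite tweet-hashtag graph and rank hashtags by degree centrality so users can explore thematic patterns. Data and pipeline are illustrative, not the web application's actual implementation.

import networkx as nx

# Toy data: tweet IDs mapped to the hashtags they contain.
tweets = {
    "t1": ["#covid19", "#5g"],
    "t2": ["#covid19", "#vaccine"],
    "t3": ["#5g", "#conspiracy"],
}

G = nx.Graph()
for tweet_id, hashtags in tweets.items():
    for tag in hashtags:
        G.add_edge(tweet_id, tag)  # bipartite edge: tweet <-> hashtag

centrality = nx.degree_centrality(G)
ranked = sorted((t for t in centrality if t.startswith("#")),
                key=centrality.get, reverse=True)
print(ranked)  # hashtags by centrality, e.g. ['#covid19', '#5g', '#vaccine', '#conspiracy']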
2023
[BibTeX] [Abstract] [Download PDF]
The importance of dealing with fake news has increased in both political and social contexts: While existing studies mainly focus on how to detect and label fake news, approaches to help users make their own assessments are largely lacking. This article presents existing black-box and white-box approaches and compares advantages and disadvantages. In particular, white-box approaches show promise in counteracting reactance, while black-box approaches detect fake news with much greater accuracy. We also present the browser plugin TrustyTweet, which we developed to help users evaluate tweets on Twitter by displaying politically neutral and intuitive warnings without generating reactance.
@incollection{hartwig_countering_2023,
address = {Wiesbaden},
title = {Countering {Fake} {News} {Technically} – {Detection} and {Countermeasure} {Approaches} to {Support} {Users}},
isbn = {978-3-658-40406-2},
url = {https://peasec.de/paper/2023/2023_HartwigReuter_CounteringFakeNews_TruthFakePostTruth.pdf},
abstract = {The importance of dealing with fake newsfake news has increased in both political and social contexts: While existing studies mainly focus on how to detect and label fake news, approaches to help users make their own assessments are largely lacking. This article presents existing black-boxblack box and white-boxwhite box approaches and compares advantages and disadvantages. In particular, white-box approaches show promise in counteracting reactance, while black-box approaches detect fake news with much greater accuracy. We also present the browser plugin TrustyTweetTrustyTweet, which we developed to help users evaluate tweets on Twitter by displaying politically neutral and intuitive warnings without generating reactance.},
booktitle = {Truth and {Fake} in the {Post}-{Factual} {Digital} {Age}: {Distinctions} in the {Humanities} and {IT} {Sciences}},
publisher = {Springer Fachmedien Wiesbaden},
author = {Hartwig, Katrin and Reuter, Christian},
editor = {Klimczak, Peter and Zoglauer, Thomas},
year = {2023},
doi = {10.1007/978-3-658-40406-2_7},
keywords = {Crisis, HCI, Projekt-CROSSING, Projekt-ATHENE, SocialMedia},
pages = {131--147},
}
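Illustration (not from the paper): the black-box/white-box contrast sketched as two detectors over the same input — the black-box variant returns only a verdict, the white-box variant returns the triggered indicators that users can verify themselves. Both bodies below are hypothetical stand-ins, not TrustyTweet's actual heuristics.

def black_box_detect(tweet: str) -> bool:
    """Stand-in for an opaque ML classifier: a verdict with no reasons."""
    return sum(tweet.lower().count(w) for w in ("hoax", "wake up")) > 0

def white_box_detect(tweet: str) -> list:
    """Transparent heuristics: every warning is explainable to the user."""
    warnings = []
    if tweet.isupper():
        warnings.append("excessive capitalization")
    if tweet.count("!") >= 3:
        warnings.append("excessive punctuation")
    return warnings

print(black_box_detect("WAKE UP!!!"))  # True, but the user learns nothing
print(white_box_detect("WAKE UP!!!"))  # ['excessive capitalization', 'excessive punctuation']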
[BibTeX] [Abstract] [Download PDF]
In this paper we provide an overview of XAI by introducing fundamental terminology and the goals of XAI, as well as recent research findings. Whilst doing this, we pay special attention to strategies for non-expert stakeholders. This leads us to our first research question: “What are the trends in explainable AI strategies for non-experts?”. In order to illustrate the current state of these trends, we further want to study an exemplary and very relevant application domain. According to Abdul et al. (2018), one of the first domains where researchers pursued XAI is the medical domain. This leads to our second research question: “What are the approaches of XAI in the medical domain for non-expert stakeholders?” These research questions will provide an overview of current topics in XAI and show possible research extensions for specific domains.
@incollection{ozalp_trends_2023,
address = {Bielefeld},
title = {Trends in {Explainable} {Artificial} {Intelligence} for {Non}-{Experts}},
url = {https://www.transcript-verlag.de/978-3-8376-5732-6/ai-limits-and-prospects-of-artificial-intelligence/?c=313000019},
abstract = {In this paper we provide an overview of XAI by introducing fundamental terminology and the goals of XAI, as well as recent research findings. Whilst doing this, we pay special attention to strategies for non-expert stakeholders. This leads us to our first research question: “What are the trends in explainable AI strategies for non-experts?”. In order to illustrate the current state of these trends, we further want to study an exemplary and very relevant application domain. According to Abdul et al. (2018), one of the first domains where researchers pursued XAI is the medical domain. This leads to our second research question: “What are the approaches of XAI in the medical domain for non-expert stakeholders?” These research questions will provide an overview of current topics in XAI and show possible research extensions for specific domains.},
booktitle = {{AI} - {Limits} and {Prospects} of {Artificial} {Intelligence}},
publisher = {Transcript Verlag},
author = {Özalp, Elise and Hartwig, Katrin and Reuter, Christian},
editor = {Klimczak, Peter and Petersen, Christer},
year = {2023},
keywords = {Student, UsableSec, HCI, Projekt-CYWARN, Projekt-ATHENE-SecUrban, Projekt-CROSSING},
pages = {223--243},
}
2022
[BibTeX] [Abstract] [Download PDF]
Nudging users to keep them secure online has become a growing research field in cybersecurity. While existing approaches are mainly blackbox based, showing aggregated visualisations as one-size-fits-all nudges, personalisation turned out promising to enhance the efficacy of nudges within the high variance of users and contexts. This article presents a disaggregated whitebox-based visualisation of critical information as a novel nudge. By segmenting users according to their decision-making and information processing styles, we investigate if the novel nudge is more effective for specific users than a common black-box nudge. Based on existing literature about critical factors in password security, we designed a dynamic radar chart and parallel coordinates as disaggregated visualisations. We evaluated the short-term effectiveness and users' perception of the nudges in a think-aloud prestudy and a representative online evaluation (N = 1,012). Our findings suggest that dynamic radar charts present a moderately effective nudge towards stronger passwords regarding short-term efficacy and are appreciated particularly by players of role-playing games.
@article{hartwig_nudging_2022,
title = {Nudging {Users} {Towards} {Better} {Security} {Decisions} in {Password} {Creation} {Using} {Whitebox}-based {Multidimensional} {Visualizations}},
volume = {41},
url = {https://peasec.de/paper/2022/2022_HartwigReuter_WhiteboxMultidimensionalNudges_BIT.pdf},
doi = {10.1080/0144929X.2021.1876167},
abstract = {Nudging users to keep them secure online has become a growing research field in cybersecurity. While existing approaches are mainly blackbox based, showing aggregated visualisations as one-size-fits-all nudges, personalisation turned out promising to enhance the efficacy of nudges within the high variance of users and contexts. This article presents a disaggregated whitebox-based visualisation of critical information as a novel nudge. By segmenting users according to their decision-making and information processing styles, we investigate if the novel nudge is more effective for specific users than a common black-box nudge. Based on existing literature about critical factors in password security, we designed a dynamic radar chart and parallel coordinates as disaggregated visualisations. We evaluated the short-term effectiveness and users' perception of the nudges in a think-aloud prestudy and a representative online evaluation (N=1.012). Our findings suggest that dynamic radar charts present a moderately effective nudge towards stronger passwords regarding short-term efficacy and are appreciated particularly by players of role-playing games.},
number = {7},
journal = {Behaviour \& Information Technology (BIT)},
author = {Hartwig, Katrin and Reuter, Christian},
year = {2022},
keywords = {Security, UsableSec, HCI, Projekt-ATHENE-FANCY, Projekt-CROSSING, A-Paper, AuswahlUsableSec, Selected, Ranking-CORE-A, Ranking-ImpactFactor},
pages = {1357--1380},
}
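Illustration (not from the paper): a sketch of how such a disaggregated visualization could be fed — per-dimension scores in [0, 1], one per radar-chart axis, instead of a single aggregated strength bar. The four dimensions and their cut-offs are illustrative assumptions, not the paper's exact factor set.

import string

def password_dimensions(pw: str) -> dict:
    """One 0..1 score per radar-chart axis (illustrative cut-offs)."""
    return {
        "length": min(len(pw) / 16, 1.0),                             # 16+ characters score 1.0
        "digits": min(sum(c.isdigit() for c in pw) / 3, 1.0),
        "symbols": min(sum(c in string.punctuation for c in pw) / 2, 1.0),
        "case_mix": float(any(c.islower() for c in pw) and any(c.isupper() for c in pw)),
    }

print(password_dimensions("Tr0ub4dor&3"))
# -> {'length': 0.6875, 'digits': 1.0, 'symbols': 0.5, 'case_mix': 1.0}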
2021
[BibTeX] [Abstract] [Download PDF]
Phishing is a prevalent cyber threat, targeting individuals and organizations alike. Previous approaches on anti-phishing measures have started to recognize the role of the user, who, at the center of the target, builds the last line of defense. However, user-oriented phishing interventions are fragmented across a diverse research landscape, which has not been systematized to date. This makes it challenging to gain an overview of the various approaches taken by prior works. In this paper, we present a taxonomy of phishing interventions based on a systematic literature analysis. We shed light on the diversity of existing approaches by analyzing them with respect to the intervention type, the addressed phishing attack vector, the time at which the intervention takes place, and the required user interaction. Furthermore, we highlight shortcomings and challenges emerging from both our literature sample and prior meta-analyses, and discuss them in the light of current movements in the field of usable security. With this article, we hope to provide useful directions for future works on phishing interventions.
@inproceedings{franz_sok_2021,
title = {{SoK}: {Still} {Plenty} of {Phish} in the {Sea} — {A} {Review} of {User}-{Oriented} {Phishing} {Interventions} and {Avenues} for {Future} {Research}},
isbn = {978-1-939133-25-0},
url = {https://www.usenix.org/system/files/soups2021-franz.pdf},
abstract = {Phishing is a prevalent cyber threat, targeting individuals and
organizations alike. Previous approaches on anti-phishing
measures have started to recognize the role of the user, who,
at the center of the target, builds the last line of defense.
However, user-oriented phishing interventions are fragmented
across a diverse research landscape, which has not been
systematized to date. This makes it challenging to gain an
overview of the various approaches taken by prior works.
In this paper, we present a taxonomy of phishing interventions
based on a systematic literature analysis. We shed light
on the diversity of existing approaches by analyzing them
with respect to the intervention type, the addressed phishing
attack vector, the time at which the intervention takes place,
and the required user interaction. Furthermore, we highlight
shortcomings and challenges emerging from both our literature
sample and prior meta-analyses, and discuss them in
the light of current movements in the field of usable security.
With this article, we hope to provide useful directions for
future works on phishing interventions.},
booktitle = {{USENIX} {Symposium} on {Usable} {Privacy} and {Security} ({SOUPS})},
author = {Franz, Anjuli and Albrecht, Gregor and Zimmermann, Verena and Hartwig, Katrin and Reuter, Christian and Benlian, Alexander and Vogt, Joachim},
year = {2021},
keywords = {Security, UsableSec, Projekt-CROSSING, AuswahlUsableSec, Ranking-CORE-B},
}
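Illustration (not from the paper): the SoK's four analysis dimensions made concrete as enumerations; only the dimension names come from the abstract, while the member values are hypothetical examples.

from enum import Enum

class InterventionType(Enum):
    WARNING = "warning"
    EDUCATION = "education"
    DESIGN = "interface design"

class AttackVector(Enum):
    EMAIL = "email"
    WEBSITE = "website"
    MESSAGE = "instant message"

class Timing(Enum):
    BEFORE = "before exposure"
    DURING = "during exposure"
    AFTER = "after exposure"

class UserInteraction(Enum):
    NONE = "none"
    OPTIONAL = "optional"
    REQUIRED = "required"

# One surveyed intervention classified along the four dimensions:
classified = (InterventionType.WARNING, AttackVector.EMAIL,
              Timing.DURING, UserInteraction.NONE)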
[BibTeX] [Abstract] [Download PDF]
Die Bedeutung des Umgangs mit Fake News hat sowohl im politischen als auch im sozialen Kontext zugenommen: Während sich bestehende Studien vor allem darauf konzentrieren, wie man gefälschte Nachrichten erkennt und kennzeichnet, fehlen Ansätze zur Unterstützung der NutzerInnen bei der eigenen Einschätzung weitgehend. Dieser Artikel stellt bestehende Black-Box- und White-Box-Ansätze vor und vergleicht Vor- und Nachteile. Dabei zeigen sich White-Box-Ansätze insbesondere als vielversprechend, um gegen Reaktanzen zu wirken, während Black-Box-Ansätze Fake News mit deutlich größerer Genauigkeit detektieren. Vorgestellt wird auch das von uns entwickelte Browser-Plugin TrustyTweet, welches die BenutzerInnen bei der Bewertung von Tweets auf Twitter unterstützt, indem es politisch neutrale und intuitive Warnungen anzeigt, ohne Reaktanz zu erzeugen.
@incollection{hartwig_fake_2021,
address = {Wiesbaden},
series = {ars digitalis},
title = {Fake {News} technisch begegnen – {Detektions}- und {Behandlungsansätze} zur {Unterstützung} von {NutzerInnen}},
volume = {3},
isbn = {978-3-658-32956-3},
url = {https://peasec.de/paper/2021/2021_HartwigReuter_FakeNewstechnischbegegnen_WahrheitundFake.pdf},
abstract = {Die Bedeutung des Umgangs mit Fake News hat sowohl im politischen als auch im sozialen Kontext zugenommen: Während sich bestehende Studien vor allem darauf konzentrieren, wie man gefälschte Nachrichten erkennt und kennzeichnet, fehlen Ansätze zur Unterstützung der NutzerInnen bei der eigenen Einschätzung weitgehend. Dieser Artikel stellt bestehende Black-Box- und White-Box-Ansätze vor und vergleicht Vor- und Nachteile. Dabei zeigen sich White-Box-Ansätze insbesondere als vielversprechend, um gegen Reaktanzen zu wirken, während Black-Box-Ansätze Fake News mit deutlich größerer Genauigkeit detektieren. Vorgestellt wird auch das von uns entwickelte Browser-Plugin TrustyTweet, welches die BenutzerInnen bei der Bewertung von Tweets auf Twitter unterstützt, indem es politisch neutrale und intuitive Warnungen anzeigt, ohne Reaktanz zu erzeugen.},
language = {de},
booktitle = {Wahrheit und {Fake} {News} im postfaktischen {Zeitalter}},
publisher = {Springer Vieweg},
author = {Hartwig, Katrin and Reuter, Christian},
editor = {Klimczak, Peter and Zoglauer, Thomas},
year = {2021},
keywords = {Peace, Crisis, HCI, SocialMedia},
pages = {133--150},
}
[BibTeX] [Abstract] [Download PDF]
While nudging is a long-established instrument in many contexts, it has more recently emerged to be relevant in cybersecurity as well. For instance, existing research suggests nudges for stronger passwords or safe WiFi connections. However, those nudges are often not as effective as desired. To improve their effectiveness, it is crucial to understand how people assess nudges in cybersecurity, to address potential fears and resulting reactance and to facilitate voluntary compliance. In other contexts, such as the health sector, studies have already thoroughly explored the attitude towards nudging. To address that matter in cybersecurity, we conducted a representative study in Germany (N = 1,012), asking people about their attitude towards nudging in that specific context. Our findings reveal that 64% rated nudging in cybersecurity as helpful, however several participants expected risks such as intentional misguidance, manipulation and data exposure as well.
@inproceedings{hartwig_nudge_2021,
address = {Karlsruhe, Germany},
title = {Nudge or {Restraint}: {How} do {People} {Assess} {Nudging} in {Cybersecurity} - {A} {Representative} {Study} in {Germany}},
url = {https://peasec.de/paper/2021/2021_HartwigReuter_NudgingCybersecurityRepresentativeStudy_EuroUSEC.pdf},
doi = {10.1145/3481357.3481514},
abstract = {While nudging is a long-established instrument in many contexts, it has more recently emerged to be relevant in cybersecurity as well. For instance, existing research suggests nudges for stronger passwords or safe WiFi connections. However, those nudges are often not as effective as desired. To improve their effectiveness, it is crucial to understand how people assess nudges in cybersecurity, to address potential fears and resulting reactance and to facilitate voluntary compliance. In other contexts, such as the health sector, studies have already thoroughly explored the attitude towards nudging. To address that matter in cybersecurity, we conducted a representative study in Germany (𝑁 = 1, 012), asking people about their attitude towards nudging in that specific context. Our findings reveal that 64\% rated nudging in cybersecurity as helpful, however several participants expected risks such as intentional misguidance, manipulation and data exposure as well.},
booktitle = {European {Symposium} on {Usable} {Security} ({EuroUSEC})},
publisher = {ACM},
author = {Hartwig, Katrin and Reuter, Christian},
year = {2021},
keywords = {Security, UsableSec, Projekt-ATHENE-SecUrban, Projekt-CROSSING},
pages = {141--150},
}
[BibTeX] [Abstract] [Download PDF]
Users tend to bypass systems that are designed to increase their personal security and privacy while limiting their perceived freedom. Nudges present a possible solution to this problem, offering security benefits without taking away perceived freedom. We have identified a lack of research comparing concrete implementations of nudging concepts in an emulated real-world scenario to assess their relative value as a nudge. Comparing multiple nudging implementations in an emulated real-world scenario including a novel avatar nudge with gamification elements, this publication discusses the advantages of nudging for stronger user-created passwords regarding efficacy, usability, and memorability. We investigated the effect of gamification in nudges, performing two studies (N1 = 16, N2 = 1,000) to refine and evaluate implementations of current and novel nudging concepts. Our research found a gamified nudge, which integrates a personalizable avatar guide into the registration process, to perform less effectively than state-of-the-art nudges, independently of participants’ gaming frequency.
@inproceedings{hartwig_finding_2021,
address = {Karlsruhe, Germany},
title = {Finding {Secret} {Treasure}? {Improving} {Memorized} {Secrets} {Through} {Gamification}},
url = {https://peasec.de/paper/2021/2021_HartwigEnglischThomsonReuter_MemorizedSecretsThroughGamification_EuroUSEC.pdf},
doi = {10.1145/3481357.3481509},
abstract = {Users tend to bypass systems that are designed to increase their personal security and privacy while limiting their perceived freedom.
Nudges present a possible solution to this problem, offering security benefits without taking away perceived freedom. We have
identified a lack of research comparing concrete implementations of nudging concepts in an emulated real-world scenario to assess their
relative value as a nudge. Comparing multiple nudging implementations in an emulated real-world scenario including a novel avatar
nudge with gamification elements, this publication discusses the advantages of nudging for stronger user-created passwords regarding
efficacy, usability, and memorability.We investigated the effect of gamification in nudges, performing two studies (𝑁1 = 16, 𝑁2 = 1, 000)
to refine and evaluate implementations of current and novel nudging concepts. Our research found a gamified nudge, which integrates
a personalizable avatar guide into the registration process, to perform less effectively than state-of-the-art nudges, independently of
participants’ gaming frequency.},
booktitle = {European {Symposium} on {Usable} {Security} ({EuroUSEC})},
publisher = {ACM},
author = {Hartwig, Katrin and Englisch, Atlas and Thomson, Jan Pelle and Reuter, Christian},
year = {2021},
keywords = {Student, Security, UsableSec, Projekt-ATHENE-SecUrban, Projekt-CROSSING},
pages = {105--117},
}
[BibTeX] [Abstract] [Download PDF]
In den letzten Jahren haben sich soziale Medien wie Facebook und Twitter immer mehr zu wichtigen Informationsquellen entwickelt, welche die Verbreitung von nutzergenerierten Inhalten unterstützen. Durch die hohe Verbreitungsgeschwindigkeit, den geringen Aufwand und die (scheinbare) Anonymität nimmt gleichzeitig die Verbreitung von Fake News und ähnlichen Phänomenen zu. Bereits in den vergangenen Jahren, insbesondere aber auch im Kontext der COVID-19-Pandemie, hat sich gezeigt, dass Fake News und unbeabsichtigte Fehlinformationen ernsthafte und sogar lebensbedrohliche Konsequenzen mit sich bringen können. Technische Unterstützungsmaßnahmen haben insbesondere in sozialen Medien ein großes Potenzial, um Fake News effektiv zu bekämpfen. Hier sind zwei maßgebliche Schritte notwendig: (1) Fake News automatisiert detektieren und (2) nach der erfolgreichen Detektion sinnvolle technische Gegenmaßnahmen implementieren [2].
@article{hartwig_transparenz_2021,
title = {Transparenz im technischen {Umgang} mit {Fake} {News}},
url = {https://peasec.de/paper/2021/2021_HartwigReuter_TransparenzFakeNews_TechnikMenschVDI.pdf},
abstract = {In den letzten Jahren haben sich soziale Medien wie Facebook und Twitter immer mehr zu wichtigen Informationsquellen entwickelt, welche die Verbreitung von nutzergenerierten Inhalten unterstützen. Durch die hohe Verbreitungsgeschwindigkeit, geringen Aufwand und (scheinbare) Anonymität nimmt gleichzeitig die Verbreitung von Fake News und ähnlichen Phänomenen zu. Bereits in den vergangenen Jahren aber insbesondere auch im Kontext der COVID-19 Pandemie hat sich gezeigt, dass Fake News und unbeabsichtigte Fehlinformationen ernsthafte und sogar lebensbedrohliche Konsequenzen mit sich tragen bringen können. Technische Unterstützungsmaßnahmen haben insbesondere in sozialen Medien ein großes Potenzial um Fake News effektiv zu bekämpfen. Hier sind zwei maßgebliche Schritte notwendig: (1) Fake News automatisiert detektieren und (2) nach der erfolgreichen Detektion sinnvolle technische Gegenmaßnahmen implementieren [2].},
number = {2},
journal = {Technik \& Mensch},
author = {Hartwig, Katrin and Reuter, Christian},
year = {2021},
keywords = {Crisis},
pages = {9--11},
}
2019
[BibTeX] [Abstract] [Download PDF]
Finding a responsible way to address fake news on social media has become an urgent matter both in political and social contexts. Existing studies focus mainly on how to detect and label fake news. However, approaches to assist users in making their own assessments are largely missing. In this article we present a study on how an indicator-based white-box approach can support Twitter-users in assessing tweets. In a first step, we identified indicators for fake news that have shown to be promising in previous studies and that are suitable for our idea of a white-box approach. Building on that basis of indicators we then designed and implemented the browser-plugin TrustyTweet, which aims to assist users on Twitter in assessing tweets by showing politically neutral and intuitive warnings without creating reactance. Finally, we present the findings of our evaluations carried out with a total of 27 participants, which result in further design implications for approaches to assist users in dealing with fake news.
@inproceedings{hartwig_fighting_2019,
address = {Darmstadt, Germany},
title = {Fighting {Misinformation} on {Twitter}: {The} {Plugin} based approach {TrustyTweet}},
url = {https://tuprints.ulb.tu-darmstadt.de/id/eprint/9164},
abstract = {Finding a responsible way to address fake news on social media has become an urgent matter both in political and social contexts. Existing studies focus mainly on how to detect and label fake news. However, approaches to assist users in making their own assessments are largely missing. In this article we present a study on how an indicator-based white-box approach can support Twitter-users in assessing tweets. In a first step, we identified indicators for fake news that have shown to be promising in previous studies and that are suitable for our idea of a white-box approach. Building on that basis of indicators we then designed and implemented the browser-plugin TrustyTweet, which aims to assist users on Twitter in assessing tweets by showing politically neutral and intuitive warnings without creating reactance. Finally, we present the findings of our evaluations carried out with a total of 27 participants, which result in further design implications for approaches to assist users in dealing with fake news.},
booktitle = {{SCIENCE} {PEACE} {SECURITY} '19 - {Proceedings} of the {Interdisciplinary} {Conference} on {Technical} {Peace} and {Security} {Research}},
publisher = {TUprints},
author = {Hartwig, Katrin and Reuter, Christian},
editor = {Reuter, Christian and Altmann, Jürgen and Göttsche, Malte and Himmel, Mirko},
year = {2019},
keywords = {Peace, Crisis, HCI, SocialMedia},
pages = {67--69},
}
[BibTeX] [Abstract] [Download PDF]
The importance of dealing with fake news on social media has increased both in political and social contexts. While existing studies focus mainly on how to detect and label fake news, approaches to assist users in making their own assessments are largely missing. This article presents a study on how Twitter users' assessments can be supported by an indicator-based white-box approach. First, we gathered potential indicators for fake news that have proven to be promising in previous studies and that fit our idea of a white-box approach. Based on those indicators we then designed and implemented the browser plugin TrustyTweet, which assists users on Twitter in assessing tweets by showing politically neutral and intuitive warnings without creating reactance. Finally, we present the findings of our evaluations with a total of 27 participants, which lead to further design implications for approaches to assist users in dealing with fake news.
@inproceedings{hartwig_trustytweet_2019,
address = {Siegen, Germany},
title = {{TrustyTweet}: {An} {Indicator}-based {Browser}-{Plugin} to {Assist} {Users} in {Dealing} with {Fake} {News} on {Twitter}},
url = {http://www.peasec.de/paper/2019/2019_HartwigReuter_TrustyTweet_WI.pdf},
abstract = {The importance of dealing withfake newsonsocial mediahas increased both in political and social contexts.While existing studies focus mainly on how to detect and label fake news, approaches to assist usersin making their own assessments are largely missing. This article presents a study on how Twitter-users'assessmentscan be supported by an indicator-based white-box approach.First, we gathered potential indicators for fake news that have proven to be promising in previous studies and that fit our idea of awhite-box approach. Based on those indicators we then designed and implemented the browser-plugin TrusyTweet, which assists users on Twitterin assessing tweetsby showing politically neutral and intuitive warnings without creating reactance. Finally, we suggest the findings of our evaluations with a total of 27 participants which lead to further design implicationsfor approachesto assistusers in dealing with fake news.},
booktitle = {Proceedings of the {International} {Conference} on {Wirtschaftsinformatik} ({WI})},
publisher = {AIS},
author = {Hartwig, Katrin and Reuter, Christian},
year = {2019},
keywords = {Peace, Student, Crisis, HCI, Projekt-ATHENE-FANCY, SocialMedia, Projekt-CRISP, Ranking-CORE-C, Ranking-VHB-C, Ranking-WKWI-A},
pages = {1858--1869},
}
[BibTeX] [Abstract] [Download PDF]
Fake news has become an important topic in our social and political environment. While research is coming up for the U.S. and European countries, many aspects remain uncovered as long as existing work only marginally investigates people’s attitudes towards fake news. In this work, we present the results of a representative study (N=1023) in Germany asking participants about their attitudes towards fake news and approaches to counteract disinformation. More than 80% of the participants agree that fake news poses a threat. 78% see fake news as harming democracy. Even though about half of the respondents (48%) have noticed fake news, most participants stated to have never liked, shared or commented on fake news. Regarding demographic factors, our findings support the view of younger and relatively educated people being more informed about fake news. Concerning ideological motives, the evaluation suggests left-wing or liberal respondents to be more critical of fake news.
@inproceedings{reuter_fake_2019-1,
address = {Siegen, Germany},
title = {Fake {News} {Perception} in {Germany}: {A} {Representative} {Study} of {People}'s {Attitudes} and {Approaches} to {Counteract} {Disinformation}},
url = {http://www.peasec.de/paper/2019/2019_ReuterHartwigKirchnerSchlegel_FakeNewsPerceptionGermany_WI.pdf},
abstract = {Fake news has become an important topic in our social and political environment. While research is coming up for the U.S. and European countries, many aspects remain uncovered as long as existing work only marginally inves-tigates people's attitudes towards fake news. In this work, we present the results of a representative study (N=1023) in Germany asking participants about their attitudes towards fake news and approaches to counteract disinformation. More than 80\% of the participants agree that fake news poses a threat. 78\% see fake news as harming democracy. Even though about half of the respondents (48\%) have noticed fake news, most participants stated to have never liked, shared or commented on fake news. Regarding demographic factors, our findings support the view of younger and relatively educated people being more informed about fake news. Concerning ideological motives, the evaluation suggests left-wing or liberal respondents to be more critical of fake news},
booktitle = {Proceedings of the {International} {Conference} on {Wirtschaftsinformatik} ({WI})},
publisher = {AIS},
author = {Reuter, Christian and Hartwig, Katrin and Kirchner, Jan and Schlegel, Noah},
year = {2019},
keywords = {Peace, Student, Crisis, HCI, SocialMedia, Ranking-CORE-C, Ranking-VHB-C, Ranking-WKWI-A},
pages = {1069--1083},
}
Weitere Veröffentlichungen:
Laura, C. O., Hartwig, K., Distergoft, A., Hoffmann, T., Scheckenbach, K., Brüsseler, M., & Wesarg, S. (2021, February). Automatic segmentation of the structures in the nasal cavity and the ethmoidal sinus for the quantification of nasal septal deviations. In Medical Imaging 2021: Computer-Aided Diagnosis (Vol. 11597, p. 115972J). International Society for Optics and Photonics.
Oyarzun, C. L., Hartwig, K., Hertlein, A. S., Jung, F., Burmeister, J., Kohlhammer, J., … & Sauter, G. (2020). Web-based Prostate Visualization Tool. Current Directions in Biomedical Engineering, 6(3), 563-566.