Katrin Hartwig, M.Sc.
Research Associate / Doctoral Candidate
Contact: +49 (0) 6151 / 1620944 | hartwig(at)peasec.tu-darmstadt.de
Technische Universität Darmstadt, Department of Computer Science, Science and Technology for Peace and Security (PEASEC), Pankratiusstraße 2, 64289 Darmstadt, Room 114
Katrin Hartwig is a research associate at the Chair of Science and Technology for Peace and Security (PEASEC) in the Department of Computer Science at Technische Universität Darmstadt. Her research interests lie at the intersection of computer science and psychology, particularly in the areas of fake news, usable security, and human-computer interaction. She is a key contributor to the BMBF project NEBULA (Nutzerzentrierte KI-basierte Erkennung von Fake News und Fehlinformationen; user-centered AI-based detection of fake news and misinformation).
She studied Computer Science (M.Sc.) and Psychology in IT (B.Sc.) at Technische Universität Darmstadt. Alongside her studies, she worked as a software developer in medical image processing. Her master's thesis addressed aspects of usable security within the Collaborative Research Center CROSSING.
Publications
2023
[BibTeX]
@incollection{ozalp_trends_2023,
address = {Bielefeld},
title = {Trends in {Explainable} {Artificial} {Intelligence} for {Non}-{Experts}},
booktitle = {Artificial {Intelligence} – {Limits} and {Prospects}},
publisher = {Transcript Verlag},
author = {Özalp, Elise and Hartwig, Katrin and Reuter, Christian},
editor = {Klimczak, Peter and Petersen, Christer},
year = {2023},
keywords = {HCI, Student, Projekt-ATHENE-SecUrban, Projekt-CYWARN},
pages = {223--243},
}
[BibTeX]
@incollection{hartwig_countering_2023,
address = {Wiesbaden},
title = {Countering {Fake} {News} {Technically} – {Detection} and {Treatment} {Approaches} to {Support} {Users}},
language = {en},
booktitle = {Truth, {Fake} and {Post}-{Truth} in the {Digital} {Age}},
publisher = {Springer Vieweg},
author = {Hartwig, Katrin and Reuter, Christian},
editor = {Klimczak, Peter and Zoglauer, Thomas},
year = {2023},
keywords = {Crisis, SocialMedia},
pages = {133--150},
}
[BibTeX] [Abstract] [Download PDF]
Misinformation represents a key challenge for society. User-centered misinformation interventions as digital countermeasures that exert a direct influence on users represent a promising means to deal with the large amounts of information available. While an extensive body of research on this topic exists, researchers are confronted with a diverse research landscape spanning multiple disciplines. This review systematizes the landscape of user-centered misinformation interventions to facilitate knowledge transfer, identify trends, and enable informed decision-making. Over 3,700 scholarly publications were screened and a systematic literature review (N=108) was conducted. A taxonomy was derived regarding intervention design (e.g., binary label), user interaction (active or passive), and timing (e.g., post exposure to misinformation). We provide a structured overview of approaches across multiple disciplines, and derive six overarching challenges for future research.
@techreport{hartwig_landscape_2023,
title = {The {Landscape} of {User}-centered {Misinformation} {Interventions} – {A} {Systematic} {Literature} {Review}},
copyright = {arXiv.org perpetual, non-exclusive license},
url = {https://arxiv.org/abs/2301.06517},
abstract = {Misinformation represents a key challenge for society. User-centered misinformation interventions as digital countermeasures that exert a direct influence on users represent a promising means to deal with the large amounts of information available. While an extensive body of research on this topic exists, researchers are confronted with a diverse research landscape spanning multiple disciplines. This review systematizes the landscape of user-centered misinformation interventions to facilitate knowledge transfer, identify trends, and enable informed decision-making. Over 3,700 scholarly publications were screened and a systematic literature review (N=108) was conducted. A taxonomy was derived regarding intervention design (e.g., binary label), user interaction (active or passive), and timing (e.g., post exposure to misinformation). We provide a structured overview of approaches across multiple disciplines, and derive six overarching challenges for future research.},
institution = {arXiv},
author = {Hartwig, Katrin and Doell, Frederic and Reuter, Christian},
year = {2023},
doi = {10.48550/ARXIV.2301.06517},
keywords = {HCI, Peace, Projekt-NEBULA},
}
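As a small illustration of the three taxonomy dimensions named in the abstract (intervention design, user interaction, timing), a possible encoding in Python could look as follows; the structure is a minimal sketch with example values taken from the abstract, not the paper's actual data model:

# Minimal sketch, assuming three taxonomy axes; the paper's full
# taxonomy is richer than this illustration.
from dataclasses import dataclass
from enum import Enum

class Interaction(Enum):
    ACTIVE = "active"    # the user has to engage with the intervention
    PASSIVE = "passive"  # the intervention acts without user input

@dataclass
class Intervention:
    design: str               # e.g. "binary label"
    interaction: Interaction  # active or passive
    timing: str               # e.g. "post exposure to misinformation"

# Example instance using the categories mentioned in the abstract.
example = Intervention("binary label", Interaction.PASSIVE,
                       "post exposure to misinformation")
print(example)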
2022
[BibTeX] [Abstract] [Download PDF]
In crises such as the COVID-19 pandemic, it is crucial to support users when dealing with social media content. Considering digital resilience, we propose a web app based on Social Network Analysis (SNA) to provide an overview of potentially misleading vs. non-misleading content on Twitter, which can be explored by users and enable foundational learning. The latter aims at systematically identifying thematic patterns which may be associated with misleading information. Additionally, it entails reflecting on indicators of misleading tweets which are proposed to approach classification of tweets. Paying special attention to non-expert users of social media, we conducted a two-step Think Aloud study for evaluation. While participants valued the opportunity to generate new knowledge and the diversity of the application, qualities such as equality and rapidity may be further improved. However, learning effects outweighed individual costs as all users were able to shift focus onto relevant features, such as hashtags, while readily pointing out content characteristics. Our design artifact connects to learning-oriented interventions regarding the spread of misleading information and tackles information overload by an SNA-based plug-in.
@article{schmid_digital_2022,
title = {Digital {Resilience} in {Dealing} with {Misinformation} on {Social} {Media} during {COVID}-19: {A} {Web} {Application} to {Assist} {Users} in {Crises}},
url = {https://link.springer.com/article/10.1007/s10796-022-10347-5},
doi = {10.1007/s10796-022-10347-5},
abstract = {In crises such as the COVID-19 pandemic, it is crucial to support users when dealing with social media content. Considering digital resilience, we propose a web app based on Social Network Analysis (SNA) to provide an overview of potentially misleading vs. non-misleading content on Twitter, which can be explored by users and enable foundational learning. The latter aims at systematically identifying thematic patterns which may be associated with misleading information. Additionally, it entails reflecting on indicators of misleading tweets which are proposed to approach classification of tweets. Paying special attention to non-expert users of social media, we conducted a two-step Think Aloud study for evaluation. While participants valued the opportunity to generate new knowledge and the diversity of the application, qualities such as equality and rapidity may be further improved. However, learning effects outweighed individual costs as all users were able to shift focus onto relevant features, such as hashtags, while readily pointing out content characteristics. Our design artifact connects to learning-oriented interventions regarding the spread of misleading information and tackles information overload by a SNA-based plug-in.},
journal = {Information Systems Frontiers},
author = {Schmid, Stefka and Hartwig, Katrin and Cieslinski, Robert and Reuter, Christian},
year = {2022},
keywords = {Crisis, Student, A-Paper, Projekt-NEBULA},
}
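The SNA-based exploration described in the abstract can be sketched with a toy example: building a hashtag co-occurrence graph from tweets and ranking hashtags by centrality to surface thematic patterns. The sample data and the networkx-based analysis are illustrative assumptions, not the web app's implementation:

# Minimal sketch: hashtag co-occurrence as a toy Social Network Analysis.
from itertools import combinations

import networkx as nx

# Hypothetical tweets, each reduced to its set of hashtags.
tweets = [
    {"#covid19", "#plandemic", "#hoax"},
    {"#covid19", "#vaccine", "#health"},
    {"#plandemic", "#hoax", "#wakeup"},
    {"#covid19", "#vaccine"},
]

# Nodes are hashtags; edge weights count how often two hashtags
# appear together in the same tweet.
graph = nx.Graph()
for tags in tweets:
    for a, b in combinations(sorted(tags), 2):
        if graph.has_edge(a, b):
            graph[a][b]["weight"] += 1
        else:
            graph.add_edge(a, b, weight=1)

# Central hashtags tie many topics together; clusters around known
# misleading tags are candidates for user exploration.
for tag, score in sorted(nx.degree_centrality(graph).items(),
                         key=lambda item: -item[1]):
    print(f"{tag}: {score:.2f}")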
[BibTeX] [Abstract] [Download PDF]
Nudging users to keep them secure online has become a growing research field in cybersecurity. While existing approaches are mainly blackbox based, showing aggregated visualisations as one-size-fits-all nudges, personalisation turned out promising to enhance the efficacy of nudges within the high variance of users and contexts. This article presents a disaggregated whitebox-based visualisation of critical information as a novel nudge. By segmenting users according to their decision-making and information processing styles, we investigate if the novel nudge is more effective for specific users than a common black-box nudge. Based on existing literature about critical factors in password security, we designed a dynamic radar chart and parallel coordinates as disaggregated visualisations. We evaluated the short-term effectiveness and users' perception of the nudges in a think-aloud prestudy and a representative online evaluation (N=1,012). Our findings suggest that dynamic radar charts present a moderately effective nudge towards stronger passwords regarding short-term efficacy and are appreciated particularly by players of role-playing games.
@article{hartwig_nudging_2022,
title = {Nudging {Users} {Towards} {Better} {Security} {Decisions} in {Password} {Creation} {Using} {Whitebox}-based {Multidimensional} {Visualizations}},
volume = {41},
url = {https://peasec.de/paper/2022/2022_HartwigReuter_WhiteboxMultidimensionalNudges_BIT.pdf},
doi = {10.1080/0144929X.2021.1876167},
abstract = {Nudging users to keep them secure online has become a growing research field in cybersecurity. While existing approaches are mainly blackbox based, showing aggregated visualisations as one-size-fits-all nudges, personalisation turned out promising to enhance the efficacy of nudges within the high variance of users and contexts. This article presents a disaggregated whitebox-based visualisation of critical information as a novel nudge. By segmenting users according to their decision-making and information processing styles, we investigate if the novel nudge is more effective for specific users than a common black-box nudge. Based on existing literature about critical factors in password security, we designed a dynamic radar chart and parallel coordinates as disaggregated visualisations. We evaluated the short-term effectiveness and users' perception of the nudges in a think-aloud prestudy and a representative online evaluation (N=1,012). Our findings suggest that dynamic radar charts present a moderately effective nudge towards stronger passwords regarding short-term efficacy and are appreciated particularly by players of role-playing games.},
number = {7},
journal = {Behaviour \& Information Technology (BIT)},
author = {Hartwig, Katrin and Reuter, Christian},
year = {2022},
keywords = {HCI, Selected, UsableSec, Security, A-Paper, Ranking-ImpactFactor, Ranking-CORE-A, Projekt-CROSSING, AuswahlUsableSec},
pages = {1357--1380},
}
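A whitebox nudge like the dynamic radar chart needs one score per dimension rather than a single aggregated strength value. The following sketch shows one way such per-dimension scores could be computed; the dimensions and thresholds are assumptions for illustration, not the factors evaluated in the paper:

# Minimal sketch: disaggregated password scores for a radar chart.
import string

def password_dimensions(password: str) -> dict:
    """Score a password on independent 0..1 dimensions."""
    return {
        # 16+ characters yield the full length score (assumed threshold).
        "length": min(len(password) / 16, 1.0),
        # Scores saturate after a few digits, uppercase letters, symbols.
        "digits": min(sum(c.isdigit() for c in password) / 3, 1.0),
        "uppercase": min(sum(c.isupper() for c in password) / 2, 1.0),
        "symbols": min(sum(c in string.punctuation for c in password) / 2, 1.0),
    }

# Each dimension can be plotted as one axis of the radar chart.
print(password_dimensions("Tr0ub4dor&3"))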
2021
[BibTeX] [Abstract] [Download PDF]
Phishing is a prevalent cyber threat, targeting individuals and organizations alike. Previous approaches on anti-phishing measures have started to recognize the role of the user, who, at the center of the target, builds the last line of defense. However, user-oriented phishing interventions are fragmented across a diverse research landscape, which has not been systematized to date. This makes it challenging to gain an overview of the various approaches taken by prior works. In this paper, we present a taxonomy of phishing interventions based on a systematic literature analysis. We shed light on the diversity of existing approaches by analyzing them with respect to the intervention type, the addressed phishing attack vector, the time at which the intervention takes place, and the required user interaction. Furthermore, we highlight shortcomings and challenges emerging from both our literature sample and prior meta-analyses, and discuss them in the light of current movements in the field of usable security. With this article, we hope to provide useful directions for future works on phishing interventions.
@inproceedings{franz_still_2021,
title = {Still {Plenty} of {Phish} in the {Sea} — {A} {Review} of {User}-{Oriented} {Phishing} {Interventions} and {Avenues} for {Future} {Research}},
isbn = {978-1-939133-25-0},
url = {https://www.usenix.org/system/files/soups2021-franz.pdf},
abstract = {Phishing is a prevalent cyber threat, targeting individuals and organizations alike. Previous approaches on anti-phishing measures have started to recognize the role of the user, who, at the center of the target, builds the last line of defense. However, user-oriented phishing interventions are fragmented across a diverse research landscape, which has not been systematized to date. This makes it challenging to gain an overview of the various approaches taken by prior works. In this paper, we present a taxonomy of phishing interventions based on a systematic literature analysis. We shed light on the diversity of existing approaches by analyzing them with respect to the intervention type, the addressed phishing attack vector, the time at which the intervention takes place, and the required user interaction. Furthermore, we highlight shortcomings and challenges emerging from both our literature sample and prior meta-analyses, and discuss them in the light of current movements in the field of usable security. With this article, we hope to provide useful directions for future works on phishing interventions.},
booktitle = {{USENIX} {Symposium} on {Usable} {Privacy} and {Security} ({SOUPS})},
author = {Franz, Anjuli and Albrecht, Gregor and Zimmermann, Verena and Hartwig, Katrin and Reuter, Christian and Benlian, Alexander and Vogt, Joachim},
year = {2021},
keywords = {UsableSec, Security, Ranking-CORE-B, Projekt-CROSSING, AuswahlUsableSec},
}
[BibTeX] [Abstract] [Download PDF]
In recent years, social media platforms such as Facebook and Twitter have increasingly become important sources of information that support the dissemination of user-generated content. At the same time, the high speed of dissemination, the low effort involved, and (apparent) anonymity increase the spread of fake news and similar phenomena. Both in past years and particularly in the context of the COVID-19 pandemic, it has become evident that fake news and unintentional misinformation can have serious and even life-threatening consequences. Technical support measures have great potential to counter fake news effectively, especially on social media. Two essential steps are required here: (1) detecting fake news automatically and (2) implementing sensible technical countermeasures after successful detection [2].
@article{hartwig_transparenz_2021,
title = {Transparenz im technischen {Umgang} mit {Fake} {News}},
url = {https://peasec.de/paper/2021/2021_HartwigReuter_TransparenzFakeNews_TechnikMenschVDI.pdf},
abstract = {In den letzten Jahren haben sich soziale Medien wie Facebook und Twitter immer mehr zu wichtigen Informationsquellen entwickelt, welche die Verbreitung von nutzergenerierten Inhalten unterstützen. Durch die hohe Verbreitungsgeschwindigkeit, geringen Aufwand und (scheinbare) Anonymität nimmt gleichzeitig die Verbreitung von Fake News und ähnlichen Phänomenen zu. Bereits in den vergangenen Jahren aber insbesondere auch im Kontext der COVID-19 Pandemie hat sich gezeigt, dass Fake News und unbeabsichtigte Fehlinformationen ernsthafte und sogar lebensbedrohliche Konsequenzen mit sich bringen können. Technische Unterstützungsmaßnahmen haben insbesondere in sozialen Medien ein großes Potenzial, um Fake News effektiv zu bekämpfen. Hier sind zwei maßgebliche Schritte notwendig: (1) Fake News automatisiert detektieren und (2) nach der erfolgreichen Detektion sinnvolle technische Gegenmaßnahmen implementieren [2].},
number = {2},
journal = {Technik \& Mensch},
author = {Hartwig, Katrin and Reuter, Christian},
year = {2021},
keywords = {Crisis},
pages = {9--11},
}
[BibTeX] [Abstract] [Download PDF]
The importance of dealing with fake news has increased in both political and social contexts: while existing studies focus primarily on how to detect and label fake news, approaches that support users in making their own assessments are largely missing. This article presents existing black-box and white-box approaches and compares their advantages and disadvantages. White-box approaches prove particularly promising for counteracting reactance, whereas black-box approaches detect fake news with considerably higher accuracy. We also present the browser plugin TrustyTweet, developed by us, which supports users in assessing tweets on Twitter by displaying politically neutral and intuitive warnings without creating reactance.
@incollection{hartwig_fake_2021,
address = {Wiesbaden},
series = {ars digitalis},
title = {Fake {News} technisch begegnen – {Detektions}- und {Behandlungsansätze} zur {Unterstützung} von {NutzerInnen}},
volume = {3},
isbn = {978-3-658-32956-3},
url = {https://peasec.de/paper/2021/2021_HartwigReuter_FakeNewstechnischbegegnen_WahrheitundFake.pdf},
abstract = {Die Bedeutung des Umgangs mit Fake News hat sowohl im politischen als auch im sozialen Kontext zugenommen: Während sich bestehende Studien vor allem darauf konzentrieren, wie man gefälschte Nachrichten erkennt und kennzeichnet, fehlen Ansätze zur Unterstützung der NutzerInnen bei der eigenen Einschätzung weitgehend. Dieser Artikel stellt bestehende Black-Box- und White-Box-Ansätze vor und vergleicht Vor- und Nachteile. Dabei zeigen sich White-Box-Ansätze insbesondere als vielversprechend, um gegen Reaktanzen zu wirken, während Black-Box-Ansätze Fake News mit deutlich größerer Genauigkeit detektieren. Vorgestellt wird auch das von uns entwickelte Browser-Plugin TrustyTweet, welches die BenutzerInnen bei der Bewertung von Tweets auf Twitter unterstützt, indem es politisch neutrale und intuitive Warnungen anzeigt, ohne Reaktanz zu erzeugen.},
language = {de},
booktitle = {Wahrheit und {Fake} {News} im postfaktischen {Zeitalter}},
publisher = {Springer Vieweg},
author = {Hartwig, Katrin and Reuter, Christian},
editor = {Klimczak, Peter and Zoglauer, Thomas},
year = {2021},
keywords = {Crisis, HCI, SocialMedia, Peace},
pages = {133--150},
}
[BibTeX] [Abstract] [Download PDF]
While nudging is a long-established instrument in many contexts, it has more recently emerged to be relevant in cybersecurity as well. For instance, existing research suggests nudges for stronger passwords or safe WiFi connections. However, those nudges are often not as effective as desired. To improve their effectiveness, it is crucial to understand how people assess nudges in cybersecurity, to address potential fears and resulting reactance and to facilitate voluntary compliance. In other contexts, such as the health sector, studies have already thoroughly explored the attitude towards nudging. To address that matter in cybersecurity, we conducted a representative study in Germany (N = 1,012), asking people about their attitude towards nudging in that specific context. Our findings reveal that 64% rated nudging in cybersecurity as helpful; however, several participants expected risks such as intentional misguidance, manipulation and data exposure as well.
@inproceedings{hartwig_nudge_2021,
address = {Karlsruhe, Germany},
title = {Nudge or {Restraint}: {How} do {People} {Assess} {Nudging} in {Cybersecurity} - {A} {Representative} {Study} in {Germany}},
url = {https://peasec.de/paper/2021/2021_HartwigReuter_NudgingCybersecurityRepresentativeStudy_EuroUSEC.pdf},
doi = {10.1145/3481357.3481514},
abstract = {While nudging is a long-established instrument in many contexts, it has more recently emerged to be relevant in cybersecurity as well. For instance, existing research suggests nudges for stronger passwords or safe WiFi connections. However, those nudges are often not as effective as desired. To improve their effectiveness, it is crucial to understand how people assess nudges in cybersecurity, to address potential fears and resulting reactance and to facilitate voluntary compliance. In other contexts, such as the health sector, studies have already thoroughly explored the attitude towards nudging. To address that matter in cybersecurity, we conducted a representative study in Germany (N = 1,012), asking people about their attitude towards nudging in that specific context. Our findings reveal that 64\% rated nudging in cybersecurity as helpful, however several participants expected risks such as intentional misguidance, manipulation and data exposure as well.},
booktitle = {European {Symposium} on {Usable} {Security} ({EuroUSEC})},
publisher = {ACM},
author = {Hartwig, Katrin and Reuter, Christian},
year = {2021},
keywords = {UsableSec, Security, Projekt-CROSSING, Projekt-ATHENE-SecUrban},
pages = {141--150},
}
[BibTeX] [Abstract] [Download PDF]
Users tend to bypass systems that are designed to increase their personal security and privacy while limiting their perceived freedom. Nudges present a possible solution to this problem, offering security benefits without taking away perceived freedom. We have identified a lack of research comparing concrete implementations of nudging concepts in an emulated real-world scenario to assess their relative value as a nudge. Comparing multiple nudging implementations in an emulated real-world scenario including a novel avatar nudge with gamification elements, this publication discusses the advantages of nudging for stronger user-created passwords regarding efficacy, usability, and memorability. We investigated the effect of gamification in nudges, performing two studies (N1 = 16, N2 = 1,000) to refine and evaluate implementations of current and novel nudging concepts. Our research found a gamified nudge, which integrates a personalizable avatar guide into the registration process, to perform less effectively than state-of-the-art nudges, independently of participants’ gaming frequency.
@inproceedings{hartwig_finding_2021,
address = {Karlsruhe, Germany},
title = {Finding {Secret} {Treasure}? {Improving} {Memorized} {Secrets} {Through} {Gamification}},
url = {https://peasec.de/paper/2021/2021_HartwigEnglischThomsonReuter_MemorizedSecretsThroughGamification_EuroUSEC.pdf},
doi = {10.1145/3481357.3481509},
abstract = {Users tend to bypass systems that are designed to increase their personal security and privacy while limiting their perceived freedom. Nudges present a possible solution to this problem, offering security benefits without taking away perceived freedom. We have identified a lack of research comparing concrete implementations of nudging concepts in an emulated real-world scenario to assess their relative value as a nudge. Comparing multiple nudging implementations in an emulated real-world scenario including a novel avatar nudge with gamification elements, this publication discusses the advantages of nudging for stronger user-created passwords regarding efficacy, usability, and memorability. We investigated the effect of gamification in nudges, performing two studies (N1 = 16, N2 = 1,000) to refine and evaluate implementations of current and novel nudging concepts. Our research found a gamified nudge, which integrates a personalizable avatar guide into the registration process, to perform less effectively than state-of-the-art nudges, independently of participants’ gaming frequency.},
booktitle = {European {Symposium} on {Usable} {Security} ({EuroUSEC})},
publisher = {ACM},
author = {Hartwig, Katrin and Englisch, Atlas and Thomson, Jan Pelle and Reuter, Christian},
year = {2021},
keywords = {Student, UsableSec, Security, Projekt-CROSSING, Projekt-ATHENE-SecUrban},
pages = {105--117},
}
2019
[BibTeX] [Abstract] [Download PDF]
Finding a responsible way to address fake news on social media has become an urgent matter both in political and social contexts. Existing studies focus mainly on how to detect and label fake news. However, approaches to assist users in making their own assessments are largely missing. In this article we present a study on how an indicator-based white-box approach can support Twitter-users in assessing tweets. In a first step, we identified indicators for fake news that have shown to be promising in previous studies and that are suitable for our idea of a white-box approach. Building on that basis of indicators we then designed and implemented the browser-plugin TrustyTweet, which aims to assist users on Twitter in assessing tweets by showing politically neutral and intuitive warnings without creating reactance. Finally, we present the findings of our evaluations carried out with a total of 27 participants, which result in further design implications for approaches to assist users in dealing with fake news.
@inproceedings{hartwig_fighting_2019,
address = {Darmstadt, Germany},
title = {Fighting {Misinformation} on {Twitter}: {The} {Plugin} based approach {TrustyTweet}},
url = {https://tuprints.ulb.tu-darmstadt.de/id/eprint/9164},
abstract = {Finding a responsible way to address fake news on social media has become an urgent matter both in political and social contexts. Existing studies focus mainly on how to detect and label fake news. However, approaches to assist users in making their own assessments are largely missing. In this article we present a study on how an indicator-based white-box approach can support Twitter-users in assessing tweets. In a first step, we identified indicators for fake news that have shown to be promising in previous studies and that are suitable for our idea of a white-box approach. Building on that basis of indicators we then designed and implemented the browser-plugin TrustyTweet, which aims to assist users on Twitter in assessing tweets by showing politically neutral and intuitive warnings without creating reactance. Finally, we present the findings of our evaluations carried out with a total of 27 participants, which result in further design implications for approaches to assist users in dealing with fake news.},
booktitle = {{SCIENCE} {PEACE} {SECURITY} '19 - {Proceedings} of the {Interdisciplinary} {Conference} on {Technical} {Peace} and {Security} {Research}},
publisher = {TUprints},
author = {Hartwig, Katrin and Reuter, Christian},
editor = {Reuter, Christian and Altmann, Jürgen and Göttsche, Malte and Himmel, Mirko},
year = {2019},
keywords = {Crisis, HCI, SocialMedia, Peace},
pages = {67--69},
}
[BibTeX] [Abstract] [Download PDF]
The importance of dealing with fake news on social media has increased both in political and social contexts. While existing studies focus mainly on how to detect and label fake news, approaches to assist users in making their own assessments are largely missing. This article presents a study on how Twitter users' assessments can be supported by an indicator-based white-box approach. First, we gathered potential indicators for fake news that have proven to be promising in previous studies and that fit our idea of a white-box approach. Based on those indicators we then designed and implemented the browser-plugin TrustyTweet, which assists users on Twitter in assessing tweets by showing politically neutral and intuitive warnings without creating reactance. Finally, we present the findings of our evaluations with a total of 27 participants, which lead to further design implications for approaches to assist users in dealing with fake news.
@inproceedings{hartwig_trustytweet_2019,
address = {Siegen, Germany},
title = {{TrustyTweet}: {An} {Indicator}-based {Browser}-{Plugin} to {Assist} {Users} in {Dealing} with {Fake} {News} on {Twitter}},
url = {http://www.peasec.de/paper/2019/2019_HartwigReuter_TrustyTweet_WI.pdf},
abstract = {The importance of dealing with fake news on social media has increased both in political and social contexts. While existing studies focus mainly on how to detect and label fake news, approaches to assist users in making their own assessments are largely missing. This article presents a study on how Twitter users' assessments can be supported by an indicator-based white-box approach. First, we gathered potential indicators for fake news that have proven to be promising in previous studies and that fit our idea of a white-box approach. Based on those indicators we then designed and implemented the browser-plugin TrustyTweet, which assists users on Twitter in assessing tweets by showing politically neutral and intuitive warnings without creating reactance. Finally, we present the findings of our evaluations with a total of 27 participants, which lead to further design implications for approaches to assist users in dealing with fake news.},
booktitle = {Proceedings of the {International} {Conference} on {Wirtschaftsinformatik} ({WI})},
publisher = {AIS},
author = {Hartwig, Katrin and Reuter, Christian},
year = {2019},
keywords = {Crisis, HCI, SocialMedia, Student, Ranking-CORE-C, Ranking-VHB-C, Ranking-WKWI-A, Peace, Projekt-CRISP, Projekt-ATHENE-FANCY},
pages = {1858--1869},
}
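A minimal sketch of how indicator-based, transparent warnings might be generated for a tweet is shown below; the heuristics and thresholds are invented for illustration and differ from TrustyTweet's actual indicators:

# Minimal sketch: transparent, indicator-based warnings for a tweet.
import re

def tweet_warnings(text: str) -> list:
    """Return human-readable, politically neutral warnings for a tweet."""
    warnings = []
    # Several all-caps words often signal emotional, sensational wording.
    if len(re.findall(r"\b[A-Z]{4,}\b", text)) >= 2:
        warnings.append("Contains several words written in capital letters.")
    # Clusters of exclamation marks are a common clickbait indicator.
    if text.count("!") >= 3:
        warnings.append("Uses many exclamation marks.")
    # Tweets without any link provide no verifiable source.
    if not re.search(r"https?://", text):
        warnings.append("Does not link to a source.")
    return warnings

print(tweet_warnings("WAKE UP!!! THEY are LYING to you!!!"))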
[BibTeX] [Abstract] [Download PDF]
Fake news has become an important topic in our social and political environment. While research is coming up for the U.S. and European countries, many aspects remain uncovered as long as existing work only marginally investigates people’s attitudes towards fake news. In this work, we present the results of a representative study (N=1023) in Germany asking participants about their attitudes towards fake news and approaches to counteract disinformation. More than 80% of the participants agree that fake news poses a threat. 78% see fake news as harming democracy. Even though about half of the respondents (48%) have noticed fake news, most participants stated to have never liked, shared or commented on fake news. Regarding demographic factors, our findings support the view of younger and relatively educated people being more informed about fake news. Concerning ideological motives, the evaluation suggests left-wing or liberal respondents to be more critical of fake news.
@inproceedings{reuter_fake_2019-2,
address = {Siegen, Germany},
title = {Fake {News} {Perception} in {Germany}: {A} {Representative} {Study} of {People}'s {Attitudes} and {Approaches} to {Counteract} {Disinformation}},
url = {http://www.peasec.de/paper/2019/2019_ReuterHartwigKirchnerSchlegel_FakeNewsPerceptionGermany_WI.pdf},
abstract = {Fake news has become an important topic in our social and political environment. While research is coming up for the U.S. and European countries, many aspects remain uncovered as long as existing work only marginally investigates people's attitudes towards fake news. In this work, we present the results of a representative study (N=1023) in Germany asking participants about their attitudes towards fake news and approaches to counteract disinformation. More than 80\% of the participants agree that fake news poses a threat. 78\% see fake news as harming democracy. Even though about half of the respondents (48\%) have noticed fake news, most participants stated to have never liked, shared or commented on fake news. Regarding demographic factors, our findings support the view of younger and relatively educated people being more informed about fake news. Concerning ideological motives, the evaluation suggests left-wing or liberal respondents to be more critical of fake news.},
booktitle = {Proceedings of the {International} {Conference} on {Wirtschaftsinformatik} ({WI})},
publisher = {AIS},
author = {Reuter, Christian and Hartwig, Katrin and Kirchner, Jan and Schlegel, Noah},
year = {2019},
keywords = {Crisis, HCI, SocialMedia, Student, Ranking-CORE-C, Ranking-VHB-C, Ranking-WKWI-A, Peace},
pages = {1069--1083},
}
Further publications:
Oyarzun Laura, C., Hartwig, K., Distergoft, A., Hoffmann, T., Scheckenbach, K., Brüsseler, M., & Wesarg, S. (2021, February). Automatic segmentation of the structures in the nasal cavity and the ethmoidal sinus for the quantification of nasal septal deviations. In Medical Imaging 2021: Computer-Aided Diagnosis (Vol. 11597, p. 115972J). International Society for Optics and Photonics.
Oyarzun Laura, C., Hartwig, K., Hertlein, A. S., Jung, F., Burmeister, J., Kohlhammer, J., … & Sauter, G. (2020). Web-based Prostate Visualization Tool. Current Directions in Biomedical Engineering, 6(3), 563-566.