NEBULA: User-Centered AI-Based Detection of Fake News and Misinformation (1 July 2022 – 30 June 2025, BMBF)
The spread of false and misleading information, particularly via social media platforms such as TikTok, Twitter, and Facebook, is becoming increasingly relevant in security-relevant situations. Especially in the context of the war in Ukraine, such platforms play a particular role: fake videos or content placed in a false temporal context can go viral within a very short time, carrying the potential for uncertainty and opinion manipulation. Not only intentionally but also unintentionally misleading information is problematic here.
The goal of the interdisciplinary joint project NEBULA is the transparent, AI-based detection of fake news and misinformation in security-relevant situations, as well as the target-group-oriented presentation of the detection results to foster media literacy. The user-centered approaches address both authorities and organizations with security tasks (BOS), supporting accurate situational assessment and crisis communication, and vulnerable persons (e.g., due to their age, educational background, or language skills), through the participatory development of technical support tools. Within the project, demonstrators in the form of smartphone apps, browser plug-ins, and web applications are being developed to enable individuals and authorities to recognize fake news and misinformation themselves.
The NEBULA joint project comprises five consortium partners, eight associated partners, and two subcontractors: in addition to the Chair of Science and Technology for Peace and Security (PEASEC) at Technische Universität Darmstadt (Prof. Dr. Dr. Christian Reuter) as consortium coordinator, the consortium partners are the Chair of Information Systems and New Media (WiNeMe) at the University of Siegen (Prof. Dr. Volker Wulf), the Professorship of Communication Science with a Focus on Political Communication at Hochschule Bonn-Rhein-Sieg (Prof. Dr. Hektor Haarkötter), the Data Science chair (DICE) at Paderborn University (Prof. Dr. Axel-Cyrille Ngonga Ngomo), and NanoGiants GmbH (Lukas Czarnecki).
Consortium partners
Associated partners
2024
Misinformation poses a recurrent challenge for video-sharing platforms (VSPs) like TikTok. Obtaining user perspectives on digital interventions addressing the need for transparency (e.g., through indicators) is essential. This article offers a thorough examination of the comprehensibility, usefulness, and limitations of an indicator-based intervention from an adolescents’ perspective. This study (N = 39; aged 13-16 years) comprised two qualitative steps: (1) focus group discussions and (2) think-aloud sessions, where participants engaged with a smartphone-app for TikTok. The results offer new insights into how video-based indicators can assist adolescents’ assessments. The intervention received positive feedback, especially for its transparency, and could be applicable to new content. This paper sheds light on how adolescents are expected to be experts while also being prone to video-based misinformation, with limited understanding of an intervention’s limitations. By adopting teenagers’ perspectives, we contribute to HCI research and provide new insights into the chances and limitations of interventions for VSPs.
@inproceedings{hartwig_adolescents_2024,
address = {New York, NY, USA},
series = {{CHI} '24},
title = {From {Adolescents}' {Eyes}: {Assessing} an {Indicator}-{Based} {Intervention} to {Combat} {Misinformation} on {TikTok}},
isbn = {9798400703300},
url = {https://doi.org/10.1145/3613904.3642264},
doi = {10.1145/3613904.3642264},
abstract = {Misinformation poses a recurrent challenge for video-sharing platforms (VSPs) like TikTok. Obtaining user perspectives on digital interventions addressing the need for transparency (e.g., through indicators) is essential. This article offers a thorough examination of the comprehensibility, usefulness, and limitations of an indicator-based intervention from an adolescents’ perspective. This study (N = 39; aged 13-16 years) comprised two qualitative steps: (1) focus group discussions and (2) think-aloud sessions, where participants engaged with a smartphone-app for TikTok. The results offer new insights into how video-based indicators can assist adolescents’ assessments. The intervention received positive feedback, especially for its transparency, and could be applicable to new content. This paper sheds light on how adolescents are expected to be experts while also being prone to video-based misinformation, with limited understanding of an intervention’s limitations. By adopting teenagers’ perspectives, we contribute to HCI research and provide new insights into the chances and limitations of interventions for VSPs.},
booktitle = {Proceedings of the {Conference} on {Human} {Factors} in {Computing} {Systems} ({CHI})},
publisher = {Association for Computing Machinery},
author = {Hartwig, Katrin and Biselli, Tom and Schneider, Franziska and Reuter, Christian},
year = {2024},
keywords = {Security, UsableSec, HCI, A-Paper, Ranking-CORE-A*, Selected, AuswahlCrisis, Projekt-ATHENE-PriVis, Projekt-NEBULA},
}
Misinformation presents a challenge to democracies, particularly in times of crisis. One way in which misinformation is spread is through voice messages sent via messenger groups, which enable members to share information on a larger scale. Gaining user perspectives on digital misinformation interventions as a countermeasure after detection is crucial. In this paper, we extract potential features of misinformation in voice messages from literature, implement them within a program that automatically processes voice messages, and evaluate their perceived usefulness and comprehensibility as user-centered indicators. We propose 35 features extracted from audio files at the character, word, sentence, audio and creator levels to assist (1) private individuals in conducting credibility assessments, (2) government agencies faced with data overload during crises, and (3) researchers seeking to gather features for automatic detection approaches. We conducted a think-aloud study with laypersons (N = 20) to provide initial insight into how individuals autonomously assess the credibility of voice messages, as well as which automatically extracted features they find to be clear and convincing indicators of misinformation. Our study provides qualitative and quantitative insights into valuable indicators, particularly when they relate directly to the content or its creator, and uncovers challenges in user interface design.
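As a rough sketch of what such a feature-extraction step could look like, the following Python snippet computes a handful of illustrative features from a transcript and a WAV file, loosely following the character, word, sentence, and audio levels named above; the feature names are assumptions for illustration, not the paper's actual 35 features.

import re
import wave

def extract_features(transcript: str, wav_path: str) -> dict:
    # Split the transcript into rough sentences and words.
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    words = transcript.split()
    # Audio level: message duration computed from the WAV header.
    with wave.open(wav_path, "rb") as wav:
        duration_s = wav.getnframes() / float(wav.getframerate())
    return {
        "exclamation_marks": transcript.count("!"),                             # character level
        "all_caps_words": sum(1 for w in words if w.isupper() and len(w) > 1),  # word level
        "avg_words_per_sentence": len(words) / max(len(sentences), 1),          # sentence level
        "duration_seconds": duration_s,                                         # audio level
    }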
@article{hartwig_navigating_2024,
title = {Navigating {Misinformation} in {Voice} {Messages}: {Identification} of {User}-{Centered} {Features} for {Digital} {Interventions}},
issn = {1944-4079},
url = {https://peasec.de/paper/2024/2024_HartwigSandlerReuter_NavigatingMisinfoVoiceMessages_RiskHazards.pdf},
doi = {10.1002/rhc3.12296},
abstract = {Misinformation presents a challenge to democracies, particularly in times of crisis. One way in which misinformation is spread is through voice messages sent via messenger groups, which enable members to share information on a larger scale. Gaining user perspectives on digital misinformation interventions as a countermeasure after detection is crucial. In this paper, we extract potential features of misinformation in voice messages from literature, implement them within a program that automatically processes voice messages, and evaluate their perceived usefulness and comprehensibility as user-centered indicators. We propose 35 features extracted from audio files at the character, word, sentence, audio and creator levels to assist (1) private individuals in conducting credibility assessments, (2) government agencies faced with data overload during crises, and (3) researchers seeking to gather features for automatic detection approaches. We conducted a think-aloud study with laypersons (N = 20) to provide initial insight into how individuals autonomously assess the credibility of voice messages, as well as which automatically extracted features they find to be clear and convincing indicators of misinformation. Our study provides qualitative and quantitative insights into valuable indicators, particularly when they relate directly to the content or its creator, and uncovers challenges in user interface design.},
journal = {Risk, Hazards, \& Crisis in Public Policy (RHCPP)},
author = {Hartwig, Katrin and Sandler, Ruslan and Reuter, Christian},
year = {2024},
note = {Publisher: John Wiley \& Sons, Ltd},
keywords = {Student, UsableSec, Crisis, HCI, Projekt-CYLENCE, A-Paper, Projekt-NEBULA, Projekt-ATHENE, Ranking-ImpactFactor, SocialMedia, Cyberwar},
}
Misinformation represents a key challenge for society. User-centered misinformation interventions as digital countermeasures that exert a direct influence on users represent a promising means to deal with the large amounts of information available. While an extensive body of research on this topic exists, researchers are confronted with a diverse research landscape spanning multiple disciplines. This review systematizes the landscape of user-centered misinformation interventions to facilitate knowledge transfer, identify trends, and enable informed decision-making. Over 3,700 scholarly publications were screened and a systematic literature review (N=108) was conducted. A taxonomy was derived regarding intervention design (e.g., binary label), user interaction (active or passive), and timing (e.g., post exposure to misinformation). We provide a structured overview of approaches across multiple disciplines, and derive six overarching challenges for future research.
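To make the taxonomy's three dimensions tangible, here is a hedged Python sketch of how a reviewed intervention could be encoded along them; the class and field names are hypothetical, and only the example values (binary label, active/passive, post exposure) come from the abstract above.

from dataclasses import dataclass
from enum import Enum

class Interaction(Enum):
    ACTIVE = "active"    # the user must do something, e.g., complete a quiz
    PASSIVE = "passive"  # shown automatically, e.g., a warning label

@dataclass
class Intervention:
    design: str               # e.g., "binary label"
    interaction: Interaction  # active or passive user interaction
    timing: str               # e.g., "post exposure to misinformation"

# Hypothetical example entry from such a coding scheme:
warning_label = Intervention("binary label", Interaction.PASSIVE, "post exposure")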
@article{hartwig_landscape_2024,
title = {The {Landscape} of {User}-centered {Misinformation} {Interventions} – {A} {Systematic} {Literature} {Review}},
volume = {56},
issn = {0360-0300},
url = {https://peasec.de/paper/2024/2024_HartwigDoellReuter_LandscapeUserCentredMisinfoInterventions_CSUR.pdf},
doi = {10.1145/3674724},
abstract = {Misinformation represents a key challenge for society. User-centered misinformation interventions as digital countermeasures that exert a direct influence on users represent a promising means to deal with the large amounts of information available. While an extensive body of research on this topic exists, researchers are confronted with a diverse research landscape spanning multiple disciplines. This review systematizes the landscape of user-centered misinformation interventions to facilitate knowledge transfer, identify trends, and enable informed decision-making. Over 3,700 scholarly publications were screened and a systematic literature review (N=108) was conducted. A taxonomy was derived regarding intervention design (e.g., binary label), user interaction (active or passive), and timing (e.g., post exposure to misinformation). We provide a structured overview of approaches across multiple disciplines, and derive six overarching challenges for future research.},
number = {11},
journal = {ACM Computing Surveys (CSUR)},
author = {Hartwig, Katrin and Doell, Frederic and Reuter, Christian},
month = jul,
year = {2024},
keywords = {Peace, Student, HCI, A-Paper, Ranking-CORE-A*, Selected, Projekt-NEBULA, Ranking-ImpactFactor},
}
Recent crises like the COVID-19 pandemic provoked an increasing appearance of misleading information, emphasizing the need for effective user-centered countermeasures as an important field in HCI research. This work investigates how content-specific user-centered indicators can contribute to an informed approach to misleading information. In a threefold study, we conducted an in-depth content analysis of 2,382 German tweets on Twitter (now X) to identify topical (e.g., 5G), formal (e.g., links), and rhetorical (e.g., sarcasm) characteristics through manual coding, followed by a qualitative online survey to evaluate which indicators users already use autonomously to assess a tweet’s credibility. Subsequently, in a think-aloud study participants qualitatively evaluated the identified indicators in terms of perceived comprehensibility and usefulness. While a number of indicators were found to be particularly comprehensible and useful (e.g., claim for absolute truth and rhetorical questions), our findings reveal limitations of indicator-based interventions, particularly for people with entrenched conspiracy theory views. We derive four implications for digitally supporting users in dealing with misleading information, especially during crises.
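Purely as an illustration of how some of these content-specific indicators could be operationalized, the following Python sketch flags a formal indicator (links), a topical one (5G), and two rhetorical ones (question marks as a crude proxy for rhetorical questions, and claims of absolute truth); the heuristics and keyword list are assumptions, since the paper identified its indicators through manual coding.

import re

# Hypothetical keyword list; the paper's indicators were derived from manual coding.
ABSOLUTE_TRUTH_MARKERS = ("the truth is", "everyone knows", "100%", "definitely")

def tweet_indicators(text: str) -> dict:
    lower = text.lower()
    return {
        "contains_link": bool(re.search(r"https?://\S+", text)),  # formal indicator
        "mentions_5g": "5g" in lower,                             # topical indicator
        "question_mark": "?" in text,                             # rhetorical (crude proxy)
        "absolute_truth_claim": any(m in lower for m in ABSOLUTE_TRUTH_MARKERS),
    }

print(tweet_indicators("Everyone knows what 5G really does. Why hide it? https://t.co/x"))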
@article{hartwig_misleading_2024,
title = {Misleading {Information} in {Crises}: {Exploring} {Content}-specific {Indicators} on {Twitter} from a {User} {Perspective}},
issn = {0144-929X},
url = {https://doi.org/10.1080/0144929X.2024.2373166},
doi = {10.1080/0144929X.2024.2373166},
abstract = {Recent crises like the COVID-19 pandemic provoked an increasing appearance of misleading information, emphasizing the need for effective user-centered countermeasures as an important field in HCI research. This work investigates how content-specific user-centered indicators can contribute to an informed approach to misleading information. In a threefold study, we conducted an in-depth content analysis of 2,382 German tweets on Twitter (now X) to identify topical (e.g., 5G), formal (e.g., links), and rhetorical (e.g., sarcasm) characteristics through manual coding, followed by a qualitative online survey to evaluate which indicators users already use autonomously to assess a tweet’s credibility. Subsequently, in a think-aloud study participants qualitatively evaluated the identified indicators in terms of perceived comprehensibility and usefulness. While a number of indicators were found to be particularly comprehensible and useful (e.g., claim for absolute truth and rhetorical questions), our findings reveal limitations of indicator-based interventions, particularly for people with entrenched conspiracy theory views. We derive four implications for digitally supporting users in dealing with misleading information, especially during crises.},
journal = {Behaviour \& Information Technology (BIT)},
author = {Hartwig, Katrin and Schmid, Stefka and Biselli, Tom and Pleil, Helene and Reuter, Christian},
year = {2024},
keywords = {Crisis, HCI, A-Paper, Projekt-ATHENE-PriVis, Projekt-NEBULA, Ranking-CORE-A, Ranking-ImpactFactor},
pages = {1--34},
}
@book{hartwig_navigating_2024-1,
address = {Darmstadt, Germany},
title = {Navigating {Misinformation}: {User}-{Centered} {Design} and {Evaluation} of {Indicator}-{Based} {Digital} {Interventions}},
publisher = {Dissertation (Dr.-Ing.), Department of Computer Science, Technische Universität Darmstadt},
author = {Hartwig, Katrin},
year = {2024},
keywords = {Crisis, HCI, Projekt-NEBULA, Dissertation},
}
@book{hartwig_navigating_2024-2,
address = {Wiesbaden, Germany},
title = {Navigating {Misinformation}: {User}-{Centered} {Design} and {Evaluation} of {Indicator}-{Based} {Digital} {Interventions}},
publisher = {Springer Vieweg},
author = {Hartwig, Katrin},
year = {2024},
keywords = {Crisis, DissPublisher, HCI, Projekt-NEBULA},
}
In crises such as the COVID-19 pandemic, it is crucial to support users when dealing with social media content. Considering digital resilience, we propose a web app based on Social Network Analysis (SNA) to provide an overview of potentially misleading vs. non-misleading content on Twitter, which can be explored by users and enable foundational learning. The latter aims at systematically identifying thematic patterns which may be associated with misleading information. Additionally, it entails reflecting on indicators of misleading tweets which are proposed to approach classification of tweets. Paying special attention to non-expert users of social media, we conducted a two-step Think Aloud study for evaluation. While participants valued the opportunity to generate new knowledge and the diversity of the application, qualities such as equality and rapidity may be further improved. However, learning effects outweighed individual costs as all users were able to shift focus onto relevant features, such as hashtags, while readily pointing out content characteristics. Our design artifact connects to learning-oriented interventions regarding the spread of misleading information and tackles information overload by a SNA-based plug-in.
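As a minimal sketch of the underlying SNA idea, assuming tweets annotated with hashtags and a misleading/non-misleading classification, a hashtag co-occurrence graph can surface the kind of thematic patterns the web app lets users explore; the snippet uses the third-party networkx library and invented toy data, not the project's implementation.

from itertools import combinations
import networkx as nx  # pip install networkx

# Toy data standing in for classified tweets.
tweets = [
    {"hashtags": ["corona", "5g"], "misleading": True},
    {"hashtags": ["corona", "vaccine"], "misleading": False},
    {"hashtags": ["corona", "5g", "conspiracy"], "misleading": True},
]

graph = nx.Graph()
for tweet in tweets:
    # Count how often each pair of hashtags appears in the same tweet.
    for a, b in combinations(sorted(set(tweet["hashtags"])), 2):
        prev = graph.get_edge_data(a, b, default={"weight": 0})["weight"]
        graph.add_edge(a, b, weight=prev + 1)

# Highly connected hashtags hint at thematic clusters worth exploring.
print(sorted(graph.degree, key=lambda pair: -pair[1]))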
@article{schmid_digital_2024,
title = {Digital {Resilience} in {Dealing} with {Misinformation} on {Social} {Media} during {COVID}-19: {A} {Web} {Application} to {Assist} {Users} in {Crises}},
volume = {26},
issn = {1572-9419},
url = {https://doi.org/10.1007/s10796-022-10347-5},
doi = {10.1007/s10796-022-10347-5},
abstract = {In crises such as the COVID-19 pandemic, it is crucial to support users when dealing with social media content. Considering digital resilience, we propose a web app based on Social Network Analysis (SNA) to provide an overview of potentially misleading vs. non-misleading content on Twitter, which can be explored by users and enable foundational learning. The latter aims at systematically identifying thematic patterns which may be associated with misleading information. Additionally, it entails reflecting on indicators of misleading tweets which are proposed to approach classification of tweets. Paying special attention to non-expert users of social media, we conducted a two-step Think Aloud study for evaluation. While participants valued the opportunity to generate new knowledge and the diversity of the application, qualities such as equality and rapidity may be further improved. However, learning effects outweighed individual costs as all users were able to shift focus onto relevant features, such as hashtags, while readily pointing out content characteristics. Our design artifact connects to learning-oriented interventions regarding the spread of misleading information and tackles information overload by a SNA-based plug-in.},
number = {2},
journal = {Information Systems Frontiers (ISF)},
author = {Schmid, Stefka and Hartwig, Katrin and Cieslinski, Robert and Reuter, Christian},
month = apr,
year = {2024},
keywords = {Projekt-TraCe, Student, Crisis, A-Paper, Projekt-NEBULA},
pages = {477--499},
}
2023
The value of social media in crises, disasters, and emergencies across different events, participants, and states is now well-examined in crisis informatics research. Previous research has contributed to the state of the art with empirical insights on the use of social media, approaches for the gathering and processing of big social data, the design and evaluation of information systems, and the analysis of cumulative and longitudinal data. While some studies examined social media use representatively for their target audience, these usually only comprise a single point of inquiry and do not allow for a trend analysis. This work provides results (1) of a representative survey with German citizens from 2021 on use patterns, perceptions, and expectations regarding social media during emergencies. Furthermore, it (2) compares these results to previous surveys and provides insights on temporal changes and trends from 2017, via 2019, to 2021. Our findings highlight that social media use in emergencies increased in 2021 and 2019 compared to 2017. Between 2019 and 2021, the amount of information shared on social media remained on a similar level, while the perceived disadvantages of social media in emergencies significantly increased. In light of demographic variables, the results of the 2021 survey confirm previous findings, according to which older individuals (45+ years) use social media in emergencies less often than younger individuals (18-24 years). Furthermore, while the quicker availability of information was one of the reasons for social media use, especially the potential information overload was a key factor for not using social media in emergencies. The results are discussed in light of the dynamic nature of attitudes regarding social media in emergencies and the need to account for heterogeneity in user expectations to build trustworthy information ecosystems in social media.
@article{reuter_increasing_2023,
title = {Increasing {Adoption} {Despite} {Perceived} {Limitations} of {Social} {Media} in {Emergencies}: {Representative} {Insights} on {German} {Citizens}’ {Perception} and {Trends} from 2017 to 2021},
volume = {96},
issn = {2212-4209},
url = {https://peasec.de/paper/2023/2023_ReuterKaufholdBiselliPleil_SocialMediaEmergenciesSurvey_IJDRR.pdf},
doi = {10.1016/j.ijdrr.2023.103880},
abstract = {The value of social media in crises, disasters, and emergencies across different events, participants, and states is now well-examined in crisis informatics research. Previous research has contributed to the state of the art with empirical insights on the use of social media, approaches for the gathering and processing of big social data, the design and evaluation of information systems, and the analysis of cumulative and longitudinal data. While some studies examined social media use representatively for their target audience, these usually only comprise a single point of inquiry and do not allow for a trend analysis. This work provides results (1) of a representative survey with German citizens from 2021 on use patterns, perceptions, and expectations regarding social media during emergencies. Furthermore, it (2) compares these results to previous surveys and provides insights on temporal changes and trends from 2017, via 2019, to 2021. Our findings highlight that social media use in emergencies increased in 2021 and 2019 compared to 2017. Between 2019 and 2021, the amount of information shared on social media remained on a similar level, while the perceived disadvantages of social media in emergencies significantly increased. In light of demographic variables, the results of the 2021 survey confirm previous findings, according to which older individuals (45+ years) use social media in emergencies less often than younger individuals (18-24 years). Furthermore, while the quicker availability of information was one of the reasons for social media use, especially the potential information overload was a key factor for not using social media in emergencies. The results are discussed in light of the dynamic nature of attitudes regarding social media in emergencies and the need to account for heterogeneity in user expectations to build trustworthy information ecosystems in social media.},
journal = {International Journal of Disaster Risk Reduction (IJDRR)},
author = {Reuter, Christian and Kaufhold, Marc-André and Biselli, Tom and Pleil, Helene},
year = {2023},
keywords = {Student, Crisis, Projekt-emergenCITY, Projekt-CYLENCE, A-Paper, AuswahlCrisis, Projekt-NEBULA, Ranking-ImpactFactor, SocialMedia},
}
NEBULA is a research project funded by the Federal Ministry of Education and Research (BMBF) and coordinated by Technische Universität Darmstadt.
Technische Universität Darmstadt
Department of Computer Science
Science and Technology for Peace and Security (PEASEC)
Pankratiusstraße 2, 64289 Darmstadt
www.peasec.de
Project manager
Katrin Hartwig
www.peasec.de/team/hartwig
Consortium coordinator
Prof. Dr. Dr. Christian Reuter
www.peasec.de/team/reuter
Secretariat
Funding: "Künstliche Intelligenz in der zivilen Sicherheitsforschung II" (Artificial Intelligence in Civil Security Research II), within the framework programme "Forschung für die zivile Sicherheit 2018 – 2023" (Research for Civil Security 2018 – 2023) of the German Federal Government
Funding code: 13N16361
Duration: 01.07.2022 – 30.06.2025