Project

CYLENCE: Strategien und Werkzeuge gegen Cybermobbing und Hassbotschaften (Strategies and Tools against Cyberbullying and Hate Speech; 1 Aug 2023 to 31 Jul 2026, funded by the BMBF)

https://peasec.de/cylence


According to a comparative study by the Bündnis gegen Cybermobbing e.V. (Beitzinger & Leest, 2021), around 11.5% of people in Germany were affected by cyberbullying in 2021. While slightly more than 53% of cyberbullying incidents occur in private settings, a further 38% take place in the workplace. Beyond depression, risk of addiction, and physical complaints, around 15% of those affected by bullying and cyberbullying rated themselves as at risk of suicide. From an economic perspective, victims of bullying are 40% more willing to quit their jobs, affected employees take almost twice as many sick days as the average, and the annual cost of lost productivity to the German economy is estimated at around 8 billion euros. A recurring survey by the Landesanstalt für Medien NRW (2021) further shows that the share of internet users in Germany who frequently encounter hate speech has risen in recent years, from 27% (2017) to 39% (2021). Although more than two thirds of respondents in 2021 had encountered hate comments at some point, only 28% of them had ever reported a hate comment to the respective platform.

The goal of CYLENCE is to develop strategies and tools for the cross-media reporting, detection, and treatment of cyberbullying and hate speech. To this end, organisational strategies and tools for collecting and analysing (semi-)public social data sources (e.g. Facebook, Telegram, Twitter), created in a participatory development process, are intended to enable investigative and law enforcement agencies (LEAs) to detect and handle cases of cyberabuse earlier and more effectively. A corresponding training strategy is complemented by an interactive tutorial for adopting the developed tools, which draw on Artificial Intelligence (AI) and Visual Analytics (VA) to support customisable, fair, and explainable AI-based detection of cyberabuse content and its real-time presentation in dashboards. To increase civil security, the project further aims to strengthen the detection and reporting of cyberbullying and hate speech by the general public. This includes a strategy for improving communication between citizens, affected individuals, and LEAs, which is informed by empirical field research (e.g. representative surveys) and tested in a public campaign. In addition, tools for detecting and reporting cyberabuse are made available to citizens as a browser plugin and a smartphone app, and evaluated.
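The citizen-facing workflow of detecting a suspicious message and turning it into a report for review can be illustrated with a minimal, hypothetical sketch. Everything here is a placeholder invented for illustration (the `AbuseReport` record, the `pre_filter` function, and the keyword list are not project code); the project itself targets trained, explainable AI models rather than static keyword matching.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

# Hypothetical keyword list for illustration only; a real system would
# use a trained classifier instead of static string matching.
ABUSE_KEYWORDS = {"idiot", "loser"}


@dataclass
class AbuseReport:
    """A flagged message prepared for human review and reporting."""
    platform: str                                  # e.g. "telegram", "twitter"
    message: str
    flagged_terms: List[str] = field(default_factory=list)
    reported_at: str = ""


def pre_filter(platform: str, message: str) -> Optional[AbuseReport]:
    """Flag a message containing a suspicious term; return None otherwise."""
    hits = [term for term in ABUSE_KEYWORDS if term in message.lower()]
    if not hits:
        return None
    return AbuseReport(
        platform=platform,
        message=message,
        flagged_terms=sorted(hits),
        reported_at=datetime.now(timezone.utc).isoformat(),
    )


if __name__ == "__main__":
    report = pre_filter("twitter", "You are such a loser!")
    print(report.flagged_terms)  # a candidate for human review, not a verdict
```

The key design point the sketch conveys: automated detection only pre-filters candidates, while the decision to report remains with a human reviewer.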

Further information and a project outline are available on the website of the BMBF security research programme (sifo.de).


Consortium

Technical University of Darmstadt: Science and Technology for Peace and Security (PEASEC)

Since its foundation in 1877, TU Darmstadt has been characterised by a distinctive pioneering spirit, and continuing this tradition of innovation is part of its self-image. Through outstanding achievements in research, teaching, and transfer, the university opens up important scientific fields of the future and continually creates new opportunities for shaping society. TU Darmstadt thus ranks among the leading technical universities in Germany, with high international visibility and reputation. The Chair of Science and Technology for Peace and Security (PEASEC), headed by Prof. Dr. Dr. Christian Reuter in the Department of Computer Science with a secondary appointment in the Department of History and Social Sciences, combines computer science with peace and security research. Methodologically, empirical studies (qualitative and quantitative surveys of current developments, e.g. self-organised help during COVID-19) are combined with technical research (the design of innovative interaction concepts, security mechanisms, privacy-enhancing technologies, and machine-learning algorithms) and concluding evaluations in the field of application (e.g. social media analytics for disaster management).

University of Potsdam: Business Information Systems and Digital Transformation (digicat)

Modern, young, manageable in size, and excellent in teaching: this describes the University of Potsdam in a nutshell. It is a university between tradition and modernity, dynamic and ambitious, grown and still growing at a science location that can compete with the most internationally renowned. The research group Business Information Systems and Digital Transformation (digicat), headed by Prof. Dr. Stefan Stieglitz, investigates digital transformation and its effects on companies and other organisations as well as on society and individuals. Its research is interdisciplinary and builds on advanced methods of data collection and analysis. The group contributes both to basic research and to application-oriented questions from industry and public administration, and it cooperates with selected companies as well as with outstanding international and national research institutions.

University of Bamberg: Information Systems, esp. AI Engineering (DISCIETY)

The University of Bamberg is a medium-sized university with a clear profile in the humanities and cultural studies, the social and economic sciences, the human sciences, and in information systems and applied computer science. Interdisciplinary research activities and flexibly combinable degree programmes contribute to its profile in the German research landscape. The Chair of Information Systems, esp. AI Engineering (DISCIETY) conducts research and teaching on digital transformation, with a particular focus on the digital society. Considering socio-technical systems, which define the interaction between new technologies and people, DISCIETY investigates the resulting effects on individuals, on society, and especially on companies.

Virtimo AG

Virtimo AG is a software vendor and IT consulting company based in Berlin. Founded in 2010, Virtimo today employs around 100 experts in software development, consulting, digital transformation, systems integration, process automation, cloud services and solutions, and the design and implementation of domain-specific IT solutions. Its industry focus is on the energy sector; beyond that, Virtimo serves clients in the automotive, engineering, insurance, and retail sectors.

Associated Partners and Cooperations

Cooperation with further organisations and organisational units is covered by associated partnerships and cooperation agreements:

  • Hessen CyberCompetenceCenter, Meldestelle Hessen gegen Hetze (Hessen3C)
  • Hessisches Ministerium des Innern und für Sport, Stabsstelle Gemeinsam Sicher In Hessen (GSIH)
  • Landespolizeipräsidium Baden-Württemberg, Präventiv und offensiv gegen Hasskriminalität, Antisemitismus und Extremismus (LBW)
  • HateAid gGmbH, Beratungsstelle gegen Hass im Netz
  • Digitalstadt Darmstadt GmbH

Publications

2024

  • Markus Bayer, Philipp Kuehn, Ramin Shanehsaz, Christian Reuter (2024)
    CySecBERT: A Domain-Adapted Language Model for the Cybersecurity Domain
    ACM Transactions on Privacy and Security (TOPS); 27(2). doi:10.1145/3652594
    [BibTeX] [Abstract] [Download PDF]

    The field of cybersecurity is evolving fast. Security professionals are in need of intelligence on past, current and – ideally – on upcoming threats, because attacks are becoming more advanced and are increasingly targeting larger and more complex systems. Since the processing and analysis of such large amounts of information cannot be addressed manually, cybersecurity experts rely on machine learning techniques. In the textual domain, pre-trained language models like BERT have proven to be helpful as they provide a good baseline for further fine-tuning. However, due to the domain-knowledge and the many technical terms in cybersecurity, general language models might miss the gist of textual information. For this reason, we create a high-quality dataset and present a language model specifically tailored to the cybersecurity domain which can serve as a basic building block for cybersecurity systems. The model is compared on 15 tasks: Domain-dependent extrinsic tasks for measuring the performance on specific problems, intrinsic tasks for measuring the performance of the internal representations of the model as well as general tasks from the SuperGLUE benchmark. The results of the intrinsic tasks show that our model improves the internal representation space of domain words compared to the other models. The extrinsic, domain-dependent tasks, consisting of sequence tagging and classification, show that the model performs best in cybersecurity scenarios. In addition, we pay special attention to the choice of hyperparameters against catastrophic forgetting, as pre-trained models tend to forget the original knowledge during further training.

    @article{bayer_cysecbert_2024,
    title = {{CySecBERT}: {A} {Domain}-{Adapted} {Language} {Model} for the {Cybersecurity} {Domain}},
    volume = {27},
    issn = {2471-2566},
    url = {https://doi.org/10.1145/3652594},
    doi = {10.1145/3652594},
    abstract = {The field of cybersecurity is evolving fast. Security professionals are in need of intelligence on past, current and - ideally - on upcoming threats, because attacks are becoming more advanced and are increasingly targeting larger and more complex systems. Since the processing and analysis of such large amounts of information cannot be addressed manually, cybersecurity experts rely on machine learning techniques. In the textual domain, pre-trained language models like BERT have proven to be helpful as they provide a good baseline for further fine-tuning. However, due to the domain-knowledge and the many technical terms in cybersecurity, general language models might miss the gist of textual information. For this reason, we create a high-quality dataset and present a language model specifically tailored to the cybersecurity domain which can serve as a basic building block for cybersecurity systems. The model is compared on 15 tasks: Domain-dependent extrinsic tasks for measuring the performance on specific problems, intrinsic tasks for measuring the performance of the internal representations of the model as well as general tasks from the SuperGLUE benchmark. The results of the intrinsic tasks show that our model improves the internal representation space of domain words compared to the other models. The extrinsic, domain-dependent tasks, consisting of sequence tagging and classification, show that the model performs best in cybersecurity scenarios. In addition, we pay special attention to the choice of hyperparameters against catastrophic forgetting, as pre-trained models tend to forget the original knowledge during further training.},
    number = {2},
    journal = {ACM Transactions on Privacy and Security (TOPS)},
    author = {Bayer, Markus and Kuehn, Philipp and Shanehsaz, Ramin and Reuter, Christian},
    month = apr,
    year = {2024},
    note = {Place: New York, NY, USA
    Publisher: Association for Computing Machinery},
    keywords = {Student, UsableSec, Security, A-Paper, Ranking-ImpactFactor, Ranking-CORE-A, Projekt-CYWARN, Projekt-CYLENCE, Projekt-ATHENE-CyAware},
    }

  • Marc-André Kaufhold, Tilo Mentler, Simon Nestler, Christian Reuter (2024)
    11. Workshop Mensch-Maschine-Interaktion in sicherheitskritischen Systemen
    Mensch und Computer – Workshopband, Karlsruhe, Germany.
    [BibTeX]

    @inproceedings{kaufhold_11_2024,
    address = {Karlsruhe, Germany},
    title = {11. {Workshop} {Mensch}-{Maschine}-{Interaktion} in sicherheitskritischen {Systemen}},
    language = {de},
    booktitle = {Mensch und {Computer} - {Workshopband}},
    publisher = {Gesellschaft für Informatik e.V.},
    author = {Kaufhold, Marc-André and Mentler, Tilo and Nestler, Simon and Reuter, Christian},
    year = {2024},
    keywords = {HCI, UsableSec, Security, Projekt-CYLENCE},
    }

  • Marc-André Kaufhold, Jasmin Haunschild, Christian Reuter (2024)
    Cultural Violence and Peace Interventions in Social Media
    In: Christian Reuter: Information Technology for Peace and Security – IT Applications and Infrastructures in Conflicts, Crises, War, and Peace. Wiesbaden, Germany: Springer Vieweg.
    [BibTeX] [Abstract]

    Over the last decade, social media services had an enormous impact on modern culture. They are nowadays widely established in everyday life, but also during natural and man-made crises and conflicts. For instance, Facebook was part of the Arabic Spring, in which the tool facilitated the communication and interaction between participants of political protests. On the contrary, terrorists may recruit new members and disseminate ideologies, and social bots may influence social and political processes. Based on the notions of cultural violence and cultural peace as well as the phenomena of fake news, terrorism and social bots, this exploratory review firstly presents human cultural interventions in social media (e.g. dissemination of fake news and terroristic propaganda) and respective countermeasures (e.g. fake news detection and counter-narratives). Secondly, it discusses automatic cultural interventions realised via social bots (e.g. astroturfing, misdirection and smoke screening) and countermeasures (e.g. crowdsourcing and social bot detection). Finally, this chapter concludes with a range of cultural interventions and information and communication technology (ICT) in terms of actors and intentions to identify future research potential for supporting situational assessments during conflicts.

    @incollection{kaufhold_cultural_2024,
    address = {Wiesbaden, Germany},
    title = {Cultural {Violence} and {Peace} {Interventions} in {Social} {Media}},
    abstract = {Over the last decade, social media services had an enormous impact on modern culture. They are nowadays widely established in everyday life, but also during natural and man-made crises and conflicts. For instance, Facebook was part of the Arabic Spring, in which the tool facilitated the communication and interaction between participants of political protests. On the contrary, terrorists may recruit new members and disseminate ideologies, and social bots may influence social and political processes. Based on the notions of cultural violence and cultural peace as well as the phenomena of fake news, terrorism and social bots, this exploratory review firstly presents human cultural interventions in social media (e.g. dissemination of fake news and terroristic propaganda) and respective countermeasures (e.g. fake news detection and counter-narratives). Secondly, it discusses automatic cultural interventions realised via social bots (e.g. astroturfing, misdirection and smoke screening) and countermeasures (e.g. crowdsourcing and social bot detection). Finally, this chapter concludes with a range of cultural interventions and information and communication technology (ICT) in terms of actors and intentions to identify future research potential for supporting situational assessments during conflicts.},
    booktitle = {Information {Technology} for {Peace} and {Security} - {IT} {Applications} and {Infrastructures} in {Conflicts}, {Crises}, {War}, and {Peace}},
    publisher = {Springer Vieweg},
    author = {Kaufhold, Marc-André and Haunschild, Jasmin and Reuter, Christian},
    editor = {Reuter, Christian},
    year = {2024},
    keywords = {Crisis, HCI, SocialMedia, Peace, Projekt-CYLENCE},
    }

  • Marc-André Kaufhold, Thea Riebe, Markus Bayer, Christian Reuter (2024)
    'We Do Not Have the Capacity to Monitor All Media': A Design Case Study on Cyber Situational Awareness in Computer Emergency Response Teams
    Proceedings of the Conference on Human Factors in Computing Systems (CHI). doi:10.1145/3613904.3642368
    [BibTeX] [Abstract]

    Computer Emergency Response Teams (CERTs) have been established in the public sector globally to provide advisory, preventive and reactive cybersecurity services for government agencies, citizens, and businesses. Nevertheless, their responsibility of monitoring, analyzing, and communicating cyber threats and security vulnerabilities have become increasingly challenging due to the growing volume and varying quality of information disseminated through public and social channels. Based on a design case study conducted from 2021 to 2023, this paper combines three iterations of expert interviews (N=25), design workshops (N=4) and cognitive walkthroughs (N=25) to design an automated, cross-platform and real-time cybersecurity dashboard. By adopting the notion of cyber situational awareness, the study further extracts user requirements and design heuristics for enhanced threat intelligence and mission awareness in CERTs, discussing the aspects of source integration, data management, customizable visualization, relationship awareness, information assessment, software integration, (inter-)organizational collaboration, and communication of stakeholder warnings.

    @inproceedings{kaufhold_we_2024,
    series = {{CHI} '24},
    title = {'{We} {Do} {Not} {Have} the {Capacity} to {Monitor} {All} {Media}': {A} {Design} {Case} {Study} on {Cyber} {Situational} {Awareness} in {Computer} {Emergency} {Response} {Teams}},
    doi = {10.1145/3613904.3642368},
    abstract = {Computer Emergency Response Teams (CERTs) have been established in the public sector globally to provide advisory, preventive and reactive cybersecurity services for government agencies, citizens, and businesses. Nevertheless, their responsibility of monitoring, analyzing, and communicating cyber threats and security vulnerabilities have become increasingly challenging due to the growing volume and varying quality of information disseminated through public and social channels. Based on a design case study conducted from 2021 to 2023, this paper combines three iterations of expert interviews (N=25), design workshops (N=4) and cognitive walkthroughs (N=25) to design an automated, cross-platform and real-time cybersecurity dashboard. By adopting the notion of cyber situational awareness, the study further extracts user requirements and design heuristics for enhanced threat intelligence and mission awareness in CERTs, discussing the aspects of source integration, data management, customizable visualization, relationship awareness, information assessment, software integration, (inter-)organizational collaboration, and communication of stakeholder warnings.},
    booktitle = {Proceedings of the {Conference} on {Human} {Factors} in {Computing} {Systems} ({CHI})},
    publisher = {Association for Computing Machinery},
    author = {Kaufhold, Marc-André and Riebe, Thea and Bayer, Markus and Reuter, Christian},
    year = {2024},
    keywords = {HCI, Selected, UsableSec, Security, A-Paper, Ranking-CORE-A*, Projekt-CYWARN, AuswahlUsableSec, AuswahlKaufhold, Projekt-CYLENCE, Projekt-ATHENE-CyAware},
    }

  • Katrin Hartwig, Ruslan Sandler, Christian Reuter (2024)
    Navigating Misinformation in Voice Messages: Identification of User-Centered Features for Digital Interventions
    Risk, Hazards, & Crisis in Public Policy (RHCPP). doi:10.1002/rhc3.12296
    [BibTeX] [Abstract] [Download PDF]

    Misinformation presents a challenge to democracies, particularly in times of crisis. One way in which misinformation is spread is through voice messages sent via messenger groups, which enable members to share information on a larger scale. Gaining user perspectives on digital misinformation interventions as countermeasure after detection is crucial. In this paper, we extract potential features of misinformation in voice messages from literature, implement them within a program that automatically processes voice messages, and evaluate their perceived usefulness and comprehensibility as user-centered indicators. We propose 35 features extracted from audio files at the character, word, sentence, audio and creator levels to assist (1) private individuals in conducting credibility assessments, (2) government agencies faced with data overload during crises, and (3) researchers seeking to gather features for automatic detection approaches. We conducted a think-aloud study with laypersons (N = 20) to provide initial insight into how individuals autonomously assess the credibility of voice messages, as well as which automatically extracted features they find to be clear and convincing indicators of misinformation. Our study provides qualitative and quantitative insights into valuable indicators, particularly when they relate directly to the content or its creator, and uncovers challenges in user interface design.

    @article{hartwig_navigating_2024,
    title = {Navigating {Misinformation} in {Voice} {Messages}: {Identification} of {User}-{Centered} {Features} for {Digital} {Interventions}},
    issn = {1944-4079},
    url = {https://peasec.de/paper/2024/2024_HartwigSandlerReuter_NavigatingMisinfoVoiceMessages_RiskHazards.pdf},
    doi = {10.1002/rhc3.12296},
    abstract = {Misinformation presents a challenge to democracies, particularly in times of crisis. One way in which misinformation is spread is through voice messages sent via messenger groups, which enable members to share information on a larger scale. Gaining user perspectives on digital misinformation interventions as countermeasure after detection is crucial. In this paper, we extract potential features of misinformation in voice messages from literature, implement them within a program that automatically processes voice messages, and evaluate their perceived usefulness and comprehensibility as user-centered indicators. We propose 35 features extracted from audio files at the character, word, sentence, audio and creator levels to assist (1) private individuals in conducting credibility assessments, (2) government agencies faced with data overload during crises, and (3) researchers seeking to gather features for automatic detection approaches. We conducted a think-aloud study with laypersons (N = 20) to provide initial insight into how individuals autonomously assess the credibility of voice messages, as well as which automatically extracted features they find to be clear and convincing indicators of misinformation. Our study provides qualitative and quantitative insights into valuable indicators, particularly when they relate directly to the content or its creator, and uncovers challenges in user interface design.},
    journal = {Risk, Hazards, \& Crisis in Public Policy (RHCPP)},
    author = {Hartwig, Katrin and Sandler, Ruslan and Reuter, Christian},
    year = {2024},
    note = {Publisher: John Wiley \& Sons, Ltd},
    keywords = {Crisis, HCI, SocialMedia, Student, UsableSec, A-Paper, Ranking-ImpactFactor, Cyberwar, Projekt-NEBULA, Projekt-CYLENCE, Projekt-ATHENE},
    }

  • Markus Bayer, Markus Neiczer, Maximilian Samsinger, Björn Buchhold, Christian Reuter (2024)
    XAI-Attack: Utilizing Explainable AI to Find Incorrectly Learned Patterns for Black-Box Adversarial Example Creation
    Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING).
    [BibTeX] [Abstract]

    Adversarial examples, capable of misleading machine learning models into making erroneous predictions, pose significant risks in safety-critical domains such as crisis informatics, medicine, and autonomous driving. To counter this, we introduce a novel textual adversarial example method that identifies falsely learned word indicators by leveraging explainable AI methods as importance functions on incorrectly predicted instances, thus revealing and understanding the weaknesses of a model. Coupled with adversarial training, this approach guides models to adopt complex decision rules when necessary and simpler ones otherwise, enhancing their robustness. To evaluate the effectiveness of our approach, we conduct a human and a transfer evaluation and propose a novel adversarial training evaluation setting for better robustness assessment. While outperforming current adversarial example and training methods, the results also show our method’s potential in facilitating the development of more resilient transformer models by detecting and rectifying biases and patterns in training data, showing baseline improvements of up to 23 percentage points in accuracy on adversarial tasks. The code of our approach is freely available for further exploration and use.

    @inproceedings{bayer_xai-attack_2024,
    title = {{XAI}-{Attack}: {Utilizing} {Explainable} {AI} to {Find} {Incorrectly} {Learned} {Patterns} for {Black}-{Box} {Adversarial} {Example} {Creation}},
    abstract = {Adversarial examples, capable of misleading machine learning models into making erroneous predictions, pose significant risks in safety-critical domains such as crisis informatics, medicine, and autonomous driving. To counter this, we introduce a novel textual adversarial example method that identifies falsely learned word indicators by leveraging explainable AI methods as importance functions on incorrectly predicted instances, thus revealing and understanding the weaknesses of a model. Coupled with adversarial training, this approach guides models to adopt complex decision rules when necessary and simpler ones otherwise, enhancing their robustness. To evaluate the effectiveness of our approach, we conduct a human and a transfer evaluation and propose a novel adversarial training evaluation setting for better robustness assessment. While outperforming current adversarial example and training methods, the results also show our method's potential in facilitating the development of more resilient transformer models by detecting and rectifying biases and patterns in training data, showing baseline improvements of up to 23 percentage points in accuracy on adversarial tasks. The code of our approach is freely available for further exploration and use.},
    booktitle = {Proceedings of the 2024 {Joint} {International} {Conference} on {Computational} {Linguistics}, {Language} {Resources} and {Evaluation} ({LREC}-{COLING})},
    author = {Bayer, Markus and Neiczer, Markus and Samsinger, Maximilian and Buchhold, Björn and Reuter, Christian},
    year = {2024},
    keywords = {UsableSec, Security, A-Paper, Ranking-CORE-B, Projekt-CYLENCE, Projekt-ATHENE-CyAware},
    }

  • Marc-André Kaufhold (2024)
    Exploring the Evolving Landscape of Human-Centred Crisis Informatics: Current Challenges and Future Trends
    i-com – Journal of Interactive Media; Accepted.
    [BibTeX]

    @article{kaufhold_exploring_2024,
    title = {Exploring the {Evolving} {Landscape} of {Human}-{Centred} {Crisis} {Informatics}: {Current} {Challenges} and {Future} {Trends}},
    volume = {Accepted},
    journal = {i-com - Journal of Interactive Media},
    author = {Kaufhold, Marc-André},
    year = {2024},
    keywords = {AuswahlUsableSec, HCI, Projekt-ATHENE-CyAware, Projekt-CYLENCE, Projekt-emergenCITY},
    }

2023

  • Christian Reuter, Marc-André Kaufhold, Tom Biselli, Helene Pleil (2023)
    Increasing Adoption Despite Perceived Limitations of Social Media in Emergencies: Representative Insights on German Citizens’ Perception and Trends from 2017 to 2021
    International Journal of Disaster Risk Reduction (IJDRR); 96. doi:10.1016/j.ijdrr.2023.103880
    [BibTeX] [Abstract] [Download PDF]

    The value of social media in crises, disasters, and emergencies across different events, participants, and states is now well-examined in crisis informatics research. Previous research has contributed to the state of the art with empirical insights on the use of social media, approaches for the gathering and processing of big social data, the design and evaluation of information systems, and the analysis of cumulative and longitudinal data. While some studies examined social media use representatively for their target audience, these usually only comprise a single point of inquiry and do not allow for a trend analysis. This work provides results (1) of a representative survey with German citizens from 2021 on use patterns, perceptions, and expectations regarding social media during emergencies. Furthermore, it (2) compares these results to previous surveys and provides insights on temporal changes and trends from 2017, over 2019 to 2021. Our findings highlight that social media use in emergencies increased in 2021 and 2019 compared to 2017. Between 2019 and 2021, the amount of information shared on social media remained on a similar level, while the perceived disadvantages of social media in emergencies significantly increased. In light of demographic variables, the results of the 2021 survey confirm previous findings, according to which older individuals (45+ years) use social media in emergencies less often than younger individuals (18-24 years). Furthermore, while the quicker availability of information was one of the reasons for social media use, especially the potential information overload was a key factor for not using social media in emergencies. The results are discussed in light of the dynamic nature of attitudes regarding social media in emergencies and the need to account for heterogeneity in user expectations to build trustworthy information ecosystems in social media.

    @article{reuter_increasing_2023,
    title = {Increasing {Adoption} {Despite} {Perceived} {Limitations} of {Social} {Media} in {Emergencies}: {Representative} {Insights} on {German} {Citizens}’ {Perception} and {Trends} from 2017 to 2021},
    volume = {96},
    issn = {2212-4209},
    url = {https://peasec.de/paper/2023/2023_ReuterKaufholdBiselliPleil_SocialMediaEmergenciesSurvey_IJDRR.pdf},
    doi = {10.1016/j.ijdrr.2023.103880},
    abstract = {The value of social media in crises, disasters, and emergencies across different events, participants, and states is now well-examined in crisis informatics research. Previous research has contributed to the state of the art with empirical insights on the use of social media, approaches for the gathering and processing of big social data, the design and evaluation of information systems, and the analysis of cumulative and longitudinal data. While some studies examined social media use representatively for their target audience, these usually only comprise a single point of inquiry and do not allow for a trend analysis. This work provides results (1) of a representative survey with German citizens from 2021 on use patterns, perceptions, and expectations regarding social media during emergencies. Furthermore, it (2) compares these results to previous surveys and provides insights on temporal changes and trends from 2017, over 2019 to 2021. Our findings highlight that social media use in emergencies increased in 2021 and 2019 compared to 2017. Between 2019 and 2021, the amount of information shared on social media remained on a similar level, while the perceived disadvantages of social media in emergencies significantly increased. In light of demographic variables, the results of the 2021 survey confirm previous findings, according to which older individuals (45+ years) use social media in emergencies less often than younger individuals (18-24 years). Furthermore, while the quicker availability of information was one of the reasons for social media use, especially the potential information overload was a key factor for not using social media in emergencies. The results are discussed in light of the dynamic nature of attitudes regarding social media in emergencies and the need to account for heterogeneity in user expectations to build trustworthy information ecosystems in social media.},
    journal = {International Journal of Disaster Risk Reduction (IJDRR)},
    author = {Reuter, Christian and Kaufhold, Marc-André and Biselli, Tom and Pleil, Helene},
    year = {2023},
    keywords = {AuswahlCrisis, Crisis, SocialMedia, Student, A-Paper, Ranking-ImpactFactor, Projekt-emergenCITY, Projekt-NEBULA, Projekt-CYLENCE},
    }

  • Marc-André Kaufhold, Markus Bayer, Julian Bäumler, Christian Reuter, Stefan Stieglitz, Ali Sercan Basyurt, Milad Mirbabaie, Christoph Fuchß, Kaan Eyilmez (2023)
    CYLENCE: Strategies and Tools for Cross-Media Reporting, Detection, and Treatment of Cyberbullying and Hatespeech in Law Enforcement Agencies
    Mensch und Computer – Workshopband, Rapperswil, Switzerland. doi:10.18420/muc2023-mci-ws01-211

    Despite the merits of public and social media in private and professional spaces, citizens and professionals are increasingly exposed to cyberabuse, such as cyberbullying and hate speech. Thus, Law Enforcement Agencies (LEA) are deployed in many countries and organisations to enhance the preventive and reactive capabilities against cyberabuse. However, their tasks are getting more complex by the increasing amount and varying quality of information disseminated into public channels. Adopting the perspectives of Crisis Informatics and safety-critical Human-Computer Interaction (HCI) and based on both a narrative literature review and group discussions, this paper first outlines the research agenda of the CYLENCE project, which seeks to design strategies and tools for cross-media reporting, detection, and treatment of cyberbullying and hatespeech in investigative and law enforcement agencies. Second, it identifies and elaborates seven research challenges with regard to the monitoring, analysis and communication of cyberabuse in LEAs, which serve as a starting point for in-depth research within the project.

    @inproceedings{kaufhold_cylence_2023,
    address = {Rapperswil, Switzerland},
    title = {{CYLENCE}: {Strategies} and {Tools} for {Cross}-{Media} {Reporting}, {Detection}, and {Treatment} of {Cyberbullying} and {Hatespeech} in {Law} {Enforcement} {Agencies}},
    url = {https://dl.gi.de/items/0e0efe8f-64bf-400c-85f7-02b65f83189d},
    doi = {10.18420/muc2023-mci-ws01-211},
    abstract = {Despite the merits of public and social media in private and professional spaces, citizens and professionals are increasingly exposed to cyberabuse, such as cyberbullying and hate speech. Thus, Law Enforcement Agencies (LEA) are deployed in many countries and organisations to enhance the preventive and reactive capabilities against cyberabuse. However, their tasks are getting more complex by the increasing amount and varying quality of information disseminated into public channels. Adopting the perspectives of Crisis Informatics and safety-critical Human-Computer Interaction (HCI) and based on both a narrative literature review and group discussions, this paper first outlines the research agenda of the CYLENCE project, which seeks to design strategies and tools for cross-media reporting, detection, and treatment of cyberbullying and hatespeech in investigative and law enforcement agencies. Second, it identifies and elaborates seven research challenges with regard to the monitoring, analysis and communication of cyberabuse in LEAs, which serve as a starting point for in-depth research within the project.},
    language = {en},
    booktitle = {Mensch und {Computer} - {Workshopband}},
    publisher = {Gesellschaft für Informatik e.V.},
    author = {Kaufhold, Marc-André and Bayer, Markus and Bäumler, Julian and Reuter, Christian and Stieglitz, Stefan and Basyurt, Ali Sercan and Mirbabaie, Milad and Fuchß, Christoph and Eyilmez, Kaan},
    year = {2023},
    keywords = {HCI, UsableSec, Projekt-CYLENCE},
    }

    Funding

    Project name: Development of Strategies and Tools for the Cross-Media Reporting, Detection, and Treatment of Cyberbullying and Hate Speech in Investigative and Law Enforcement Agencies (CYLENCE)

    Keywords: cyberbullying, hate speech, reporting office, situation picture generation, investigative and law enforcement agencies, social media analytics, artificial intelligence, visual analytics

    Funding: Programme „Zivile Sicherheit – Bedrohungen aus dem digitalen Raum" (Civil Security – Threats from the Digital Space) of the German Federal Ministry of Education and Research (BMBF).

    Funding codes: 13N16636 to 13N16639

    Duration: 08/2023 – 07/2026

    Project management agency: VDI Technologiezentrum GmbH

    Contact

    CYLENCE is a research project funded by the German Federal Ministry of Education and Research (BMBF) and coordinated by Technische Universität Darmstadt.

    Technische Universität Darmstadt

    Department of Computer Science
    Science and Technology for Peace and Security (PEASEC)
    Pankratiusstraße 2, 64289 Darmstadt
    www.peasec.de

    Consortium Coordinator

    Prof. Dr. Dr. Christian Reuter
    www.peasec.de/team/reuter

    Project Manager

    Dr. Marc-André Kaufhold
    www.peasec.de/team/kaufhold

    Secretariat