Project

CYLENCE: Strategies and Tools against Cyberbullying and Hate Speech (1 Aug 2023 to 31 Jul 2026, BMBF)

https://peasec.de/cylence


According to a comparative study by the Bündnis gegen Cybermobbing e.V. (Beitzinger & Leest, 2021), around 11.5% of people in Germany were affected by cyberbullying in 2021. While slightly more than 53% of cyberbullying incidents occur in private settings, a further 38% take place in the workplace. Beyond depression, risk of addiction, and physical complaints, around 15% of those affected by bullying and cyberbullying rated themselves as at risk of suicide. From an economic perspective, victims of bullying are 40% more likely to resign, those affected take almost twice as many sick days as the average employee, and the annual cost of lost productivity to the German economy is estimated at around 8 billion euros. A recurring survey by the Landesanstalt für Medien NRW (2021) further shows that the share of internet users in Germany who are frequently confronted with hate speech rose from 27% (2017) to 39% (2021). Although more than two thirds of respondents in 2021 had already encountered hate comments, only 28% of these had reported a hate comment to the respective platform.

The goal of CYLENCE is to develop strategies and tools for the cross-media reporting, detection, and handling of cyberbullying and hate speech. To this end, organizational strategies and tools for collecting and analyzing (semi-)public social data sources (e.g., Facebook, Telegram, Twitter), created in a participatory development process, are intended to enable investigative and law enforcement agencies (LEAs) to detect and handle cases of cyber abuse earlier and more effectively. A corresponding training strategy is complemented by an interactive tutorial for adopting the developed tools, which draw on Artificial Intelligence (AI) and Visual Analytics (VA) to support adaptable, fair, and explainable AI detection as well as real-time dashboard presentation of cyber abuse content. To increase civil security, the detection and reporting of cyberbullying and hate speech by the general public is also to be strengthened. This includes a strategy for improving communication between citizens, victims, and LEAs, which is supported by empirical field research (e.g., representative surveys) and tested in a public campaign. To this end, tools for detecting and reporting cyber abuse will be made available to citizens as a browser plugin and a smartphone app and will be evaluated.
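
The detection pipeline outlined above is a research goal, not a published API. As a rough illustration of its first stage only (gathering candidate messages from a data stream before any AI classification), a keyword pre-filter could look like the following sketch; every name and the watchlist are invented for this example and do not describe the actual CYLENCE tools:

```python
# Toy pre-filter: keeps only messages containing terms from a watchlist,
# so that a (hypothetical) downstream AI classifier sees fewer candidates.
from dataclasses import dataclass, field


@dataclass
class FlaggedMessage:
    text: str
    matched_terms: list = field(default_factory=list)


def prefilter(messages, watchlist):
    """Return messages containing at least one watchlist term (case-insensitive)."""
    flagged = []
    for text in messages:
        lowered = text.lower()
        hits = [term for term in watchlist if term in lowered]
        if hits:
            flagged.append(FlaggedMessage(text=text, matched_terms=hits))
    return flagged


if __name__ == "__main__":
    results = prefilter(
        ["You are an idiot", "Nice weather today"],
        watchlist=["idiot"],
    )
    for msg in results:
        print(msg.text, msg.matched_terms)
```

In a real system such a lexical filter would only reduce load; classification, prioritization, and explanation would be handled by the trained AI models the project develops.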

Further information and a project outline are available on the website of the BMBF security research program (sifo.de).

 

Consortium

Technische Universität Darmstadt: Science and Technology for Peace and Security (PEASEC)

Since its founding in 1877, TU Darmstadt has been characterized by a distinctive pioneering spirit, and continuing this tradition of innovation is part of its self-image. Through outstanding achievements in research, teaching, and transfer, the university opens up important scientific fields of the future and continuously creates new opportunities for shaping society. TU Darmstadt thus ranks among the leading technical universities in Germany, with high international visibility and reputation. The Chair of Science and Technology for Peace and Security (PEASEC), led by Prof. Dr. Dr. Christian Reuter in the Department of Computer Science with a secondary membership in the Department of History and Social Sciences, combines computer science with peace and security research. Methodologically, it combines empirical studies (qualitative and quantitative surveys of current developments, e.g., self-organized mutual aid during COVID-19) with technical research (design of innovative interaction concepts, security mechanisms, privacy-enhancing technologies, and machine learning algorithms) and concluding evaluations in the field of application (e.g., social media analytics for disaster management).

Universität Potsdam: Information Systems and Digital Transformation (digicat)

Modern, young, manageable in size: that is the University of Potsdam in a nutshell, and excellent in teaching as well. The University of Potsdam operates between tradition and modernity: dynamic, ambitious, grown and still growing at a science location that can compete with the most internationally renowned. The research group Information Systems and Digital Transformation (digicat), led by Prof. Dr. Stefan Stieglitz, investigates digital transformation and its effects on companies and other organizations as well as on society and individuals. Its research is interdisciplinary and based on advanced methods of data collection and analysis, contributing both to basic research and to application-oriented questions from industry and public administration. The group cooperates with selected companies as well as with outstanding international and national research institutions.

Universität Bamberg: Information Systems, in particular AI Engineering (DISCIETY)

The University of Bamberg is a medium-sized university with a clear profile in the humanities and cultural studies, the social and economic sciences, the human sciences, and in information systems and applied computer science. Interdisciplinary research activities and flexibly combinable degree programs contribute to its profile in the German research landscape. The Chair of Information Systems, in particular AI Engineering (DISCIETY) conducts research and teaching on digital transformation with a particular focus on the digital society. Considering socio-technical systems, which define the interaction between new technologies and humans, DISCIETY investigates the resulting effects on individuals, on society, and especially on companies.

Virtimo AG

Virtimo AG is a software vendor and IT consulting company based in Berlin. Founded in 2010, Virtimo today employs around 100 experts working in software development, consulting, digital transformation, system integration, process automation, cloud services and solutions, and the design and implementation of domain-specific IT solutions. Its industry focus is the energy sector; beyond that, Virtimo serves clients in the automotive, engineering, insurance, and retail sectors.

Associated Partners and Cooperations

Collaboration with further organizations and organizational units is covered by associated partnerships and cooperation agreements:

  • Hessen CyberCompetenceCenter, Meldestelle Hessen gegen Hetze (Hessen3C)
  • Hessisches Ministerium des Innern und für Sport, Stabsstelle Gemeinsam Sicher In Hessen (GSIH)
  • Landespolizeipräsidium Baden-Württemberg, Präventiv und offensiv gegen Hasskriminalität, Antisemitismus und Extremismus (LBW)
  • HateAid gGmbH, Beratungsstelle gegen Hass im Netz
  • Digitalstadt Darmstadt GmbH

Publications

2025

  • Markus Bayer (2025)
    Deep Learning in Textual Low-Data Regimes for Cybersecurity
    Wiesbaden, Germany: Springer Vieweg.
    [BibTeX]

    @book{bayer_deep_2025,
    address = {Wiesbaden, Germany},
    title = {Deep {Learning} in {Textual} {Low}-{Data} {Regimes} for {Cybersecurity}},
    publisher = {Springer Vieweg},
    author = {Bayer, Markus},
    year = {2025},
    keywords = {Security, Projekt-CYWARN, Projekt-ATHENE-CyAware, Projekt-CYLENCE, DissPublisher},
    }

  • Julian Bäumler, Thea Riebe, Marc-André Kaufhold, Christian Reuter (2025)
    Harnessing Inter-Organizational Collaboration and Automation to Combat Online Hate Speech: A Qualitative Study with German Reporting Centers
Proceedings of the ACM: Human Computer Interaction (PACM): Computer-Supported Cooperative Work and Social Computing.
    [BibTeX] [Abstract]

    In Germany and other countries, specialized non-profit reporting centers combat online hate speech by submitting criminal content to law enforcement agencies, forwarding deletion requests to social media platforms, and providing counseling to victims, thus contributing to the governance mechanism of content moderation as intermediaries between victims and various organizations. Whereas research in computer-supported cooperative work has extensively explored collaboration of and automation for content moderators, there are no works that focus on reporting centers. Based on expert interviews with their staff (N=15), this study finds that most German centers share a collaborative workflow, of which multiple tasks are heavily dependent on inter-organizational exchange. However, there are differences in their implementation of monitoring, content assessment, automation technology adoption, and external collaborators. As the centers are faced with diverse challenges, such as borderline case assessment, psychological burdens, limited visibility, conflicting goals with other actors, and manual repetitive work, our study contributes with nine implications for designing and researching supportive technologies. They provide suggestions for improving hate speech gathering and reporting, researching hate speech prioritization and assessment algorithms, and designing case processing systems. Beyond that, we outline directions for research on inter-organizational collaboration.

    @article{baumler_harnessing_2025,
    title = {Harnessing {Inter}-{Organizational} {Collaboration} and {Automation} to {Combat} {Online} {Hate} {Speech}: {A} {Qualitative} {Study} with {German} {Reporting} {Centers}},
    abstract = {In Germany and other countries, specialized non-profit reporting centers combat online hate speech by submitting criminal content to law enforcement agencies, forwarding deletion requests to social media platforms, and providing counseling to victims, thus contributing to the governance mechanism of content moderation as intermediaries between victims and various organizations. Whereas research in computer-supported cooperative work has extensively explored collaboration of and automation for content moderators, there are no works that focus on reporting centers. Based on expert interviews with their staff (N=15), this study finds that most German centers share a collaborative workflow, of which multiple tasks are heavily dependent on inter-organizational exchange. However, there are differences in their implementation of monitoring, content assessment, automation technology adoption, and external collaborators. As the centers are faced with diverse challenges, such as borderline case assessment, psychological burdens, limited visibility, conflicting goals with other actors, and manual repetitive work, our study contributes with nine implications for designing and researching supportive technologies. They provide suggestions for improving hate speech gathering and reporting, researching hate speech prioritization and assessment algorithms, and designing case processing systems. Beyond that, we outline directions for research on inter-organizational collaboration.},
    journal = {Proceedings of the ACM: Human Computer Interaction (PACM): Computer-Supported Cooperative Work and Social Computing},
    author = {Bäumler, Julian and Riebe, Thea and Kaufhold, Marc-André and Reuter, Christian},
    year = {2025},
    keywords = {Crisis, HCI, Projekt-CYWARN, Projekt-CYLENCE, A-Paper, AuswahlCrisis, Ranking-CORE-A},
    }

2024

  • Markus Bayer, Christian Reuter (2024)
    ActiveLLM: Large Language Model-based Active Learning for Textual Few-Shot Scenarios
arXiv. doi:10.48550/arXiv.2405.10808
    [BibTeX] [Abstract] [Download PDF]

Active learning is designed to minimize annotation efforts by prioritizing instances that most enhance learning. However, many active learning strategies struggle with a 'cold start' problem, needing substantial initial data to be effective. This limitation often reduces their utility for pre-trained models, which already perform well in few-shot scenarios. To address this, we introduce ActiveLLM, a novel active learning approach that leverages large language models such as GPT-4, Llama 3, and Mistral Large for selecting instances. We demonstrate that ActiveLLM significantly enhances the classification performance of BERT classifiers in few-shot scenarios, outperforming both traditional active learning methods and the few-shot learning method SetFit. Additionally, ActiveLLM can be extended to non-few-shot scenarios, allowing for iterative selections. In this way, ActiveLLM can even help other active learning strategies to overcome their cold start problem. Our results suggest that ActiveLLM offers a promising solution for improving model performance across various learning setups.

    @article{bayer_activellm_2024,
    title = {{ActiveLLM}: {Large} {Language} {Model}-based {Active} {Learning} for {Textual} {Few}-{Shot} {Scenarios}},
    url = {https://arxiv.org/pdf/2405.10808},
    doi = {10.48550/arXiv.2405.10808},
    abstract = {Active learning is designed to minimize annotation efforts by prioritizing instances that most enhance learning. However, many active learning strategies struggle with a 'cold start' problem, needing substantial initial data to be effective. This limitation often reduces their utility for pre-trained models, which already perform well in few-shot scenarios. To address this, we introduce ActiveLLM, a novel active learning approach that leverages large language models such as GPT-4, Llama 3, and Mistral Large for selecting instances. We demonstrate that ActiveLLM significantly enhances the classification performance of BERT classifiers in few-shot scenarios, outperforming both traditional active learning methods and the few-shot learning method SetFit. Additionally, ActiveLLM can be extended to non-few-shot scenarios, allowing for iterative selections. In this way, ActiveLLM can even help other active learning strategies to overcome their cold start problem. Our results suggest that ActiveLLM offers a promising solution for improving model performance across various learning setups.},
    journal = {arXiv},
    author = {Bayer, Markus and Reuter, Christian},
    year = {2024},
    keywords = {Projekt-ATHENE-CyAware, Projekt-CYLENCE, Security, UsableSec},
    }

  • Markus Bayer, Philipp Kuehn, Ramin Shanehsaz, Christian Reuter (2024)
    CySecBERT: A Domain-Adapted Language Model for the Cybersecurity Domain
ACM Transactions on Privacy and Security (TOPS), 27(2). doi:10.1145/3652594
    [BibTeX] [Abstract] [Download PDF]

    The field of cybersecurity is evolving fast. Security professionals are in need of intelligence on past, current and – ideally – on upcoming threats, because attacks are becoming more advanced and are increasingly targeting larger and more complex systems. Since the processing and analysis of such large amounts of information cannot be addressed manually, cybersecurity experts rely on machine learning techniques. In the textual domain, pre-trained language models like BERT have proven to be helpful as they provide a good baseline for further fine-tuning. However, due to the domain-knowledge and the many technical terms in cybersecurity, general language models might miss the gist of textual information. For this reason, we create a high-quality dataset and present a language model specifically tailored to the cybersecurity domain which can serve as a basic building block for cybersecurity systems. The model is compared on 15 tasks: Domain-dependent extrinsic tasks for measuring the performance on specific problems, intrinsic tasks for measuring the performance of the internal representations of the model as well as general tasks from the SuperGLUE benchmark. The results of the intrinsic tasks show that our model improves the internal representation space of domain words compared to the other models. The extrinsic, domain-dependent tasks, consisting of sequence tagging and classification, show that the model performs best in cybersecurity scenarios. In addition, we pay special attention to the choice of hyperparameters against catastrophic forgetting, as pre-trained models tend to forget the original knowledge during further training.

    @article{bayer_cysecbert_2024,
    title = {{CySecBERT}: {A} {Domain}-{Adapted} {Language} {Model} for the {Cybersecurity} {Domain}},
    volume = {27},
    issn = {2471-2566},
    url = {https://peasec.de/paper/2024/2024_BayerKuehnShanesazReuter_CySecBERT_TOPS.pdf},
    doi = {10.1145/3652594},
    abstract = {The field of cybersecurity is evolving fast. Security professionals are in need of intelligence on past, current and - ideally - on upcoming threats, because attacks are becoming more advanced and are increasingly targeting larger and more complex systems. Since the processing and analysis of such large amounts of information cannot be addressed manually, cybersecurity experts rely on machine learning techniques. In the textual domain, pre-trained language models like BERT have proven to be helpful as they provide a good baseline for further fine-tuning. However, due to the domain-knowledge and the many technical terms in cybersecurity, general language models might miss the gist of textual information. For this reason, we create a high-quality dataset and present a language model specifically tailored to the cybersecurity domain which can serve as a basic building block for cybersecurity systems. The model is compared on 15 tasks: Domain-dependent extrinsic tasks for measuring the performance on specific problems, intrinsic tasks for measuring the performance of the internal representations of the model as well as general tasks from the SuperGLUE benchmark. The results of the intrinsic tasks show that our model improves the internal representation space of domain words compared to the other models. The extrinsic, domain-dependent tasks, consisting of sequence tagging and classification, show that the model performs best in cybersecurity scenarios. In addition, we pay special attention to the choice of hyperparameters against catastrophic forgetting, as pre-trained models tend to forget the original knowledge during further training.},
    number = {2},
    journal = {ACM Transactions on Privacy and Security (TOPS)},
    author = {Bayer, Markus and Kuehn, Philipp and Shanehsaz, Ramin and Reuter, Christian},
    month = apr,
    year = {2024},
    note = {Place: New York, NY, USA
    Publisher: Association for Computing Machinery},
    keywords = {Student, Security, UsableSec, Projekt-CYWARN, Projekt-ATHENE-CyAware, Projekt-CYLENCE, A-Paper, Ranking-CORE-A, Ranking-ImpactFactor},
    }

  • Markus Bayer, Markus Neiczer, Maximilian Samsinger, Björn Buchhold, Christian Reuter (2024)
    XAI-Attack: Utilizing Explainable AI to Find Incorrectly Learned Patterns for Black-Box Adversarial Example Creation
    Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING) Torino, Italia.
    [BibTeX] [Abstract] [Download PDF]

    Adversarial examples, capable of misleading machine learning models into making erroneous predictions, pose significant risks in safety-critical domains such as crisis informatics, medicine, and autonomous driving. To counter this, we introduce a novel textual adversarial example method that identifies falsely learned word indicators by leveraging explainable AI methods as importance functions on incorrectly predicted instances, thus revealing and understanding the weaknesses of a model. Coupled with adversarial training, this approach guides models to adopt complex decision rules when necessary and simpler ones otherwise, enhancing their robustness. To evaluate the effectiveness of our approach, we conduct a human and a transfer evaluation and propose a novel adversarial training evaluation setting for better robustness assessment. While outperforming current adversarial example and training methods, the results also show our method’s potential in facilitating the development of more resilient transformer models by detecting and rectifying biases and patterns in training data, showing baseline improvements of up to 23 percentage points in accuracy on adversarial tasks. The code of our approach is freely available for further exploration and use.

    @inproceedings{bayer_xai-attack_2024,
    address = {Torino, Italia},
    title = {{XAI}-{Attack}: {Utilizing} {Explainable} {AI} to {Find} {Incorrectly} {Learned} {Patterns} for {Black}-{Box} {Adversarial} {Example} {Creation}},
    url = {https://aclanthology.org/2024.lrec-main.1542},
    abstract = {Adversarial examples, capable of misleading machine learning models into making erroneous predictions, pose significant risks in safety-critical domains such as crisis informatics, medicine, and autonomous driving. To counter this, we introduce a novel textual adversarial example method that identifies falsely learned word indicators by leveraging explainable AI methods as importance functions on incorrectly predicted instances, thus revealing and understanding the weaknesses of a model. Coupled with adversarial training, this approach guides models to adopt complex decision rules when necessary and simpler ones otherwise, enhancing their robustness. To evaluate the effectiveness of our approach, we conduct a human and a transfer evaluation and propose a novel adversarial training evaluation setting for better robustness assessment. While outperforming current adversarial example and training methods, the results also show our method's potential in facilitating the development of more resilient transformer models by detecting and rectifying biases and patterns in training data, showing baseline improvements of up to 23 percentage points in accuracy on adversarial tasks. The code of our approach is freely available for further exploration and use.},
    booktitle = {Proceedings of the 2024 {Joint} {International} {Conference} on {Computational} {Linguistics}, {Language} {Resources} and {Evaluation} ({LREC}-{COLING})},
    publisher = {ELRA and ICCL},
    author = {Bayer, Markus and Neiczer, Markus and Samsinger, Maximilian and Buchhold, Björn and Reuter, Christian},
    month = may,
    year = {2024},
    keywords = {Security, UsableSec, Projekt-ATHENE-CyAware, Projekt-CYLENCE, Ranking-CORE-A},
    pages = {17725--17738},
    }

  • Markus Bayer (2024)
    Deep Learning in Textual Low-Data Regimes for Cybersecurity
    Darmstadt, Germany: Dissertation (Dr. rer. nat.), Department of Computer Science, Technische Universität Darmstadt.
    [BibTeX]

    @book{bayer_deep_2024,
    address = {Darmstadt, Germany},
    title = {Deep {Learning} in {Textual} {Low}-{Data} {Regimes} for {Cybersecurity}},
    publisher = {Dissertation (Dr. rer. nat.), Department of Computer Science, Technische Universität Darmstadt},
    author = {Bayer, Markus},
    year = {2024},
    keywords = {Security, Projekt-CYWARN, Projekt-ATHENE-CyAware, Projekt-CYLENCE, Dissertation},
    }

  • Julian Bäumler, Marc-André Kaufhold, Georg Voronin, Christian Reuter (2024)
    Towards an Online Hate Speech Classification Scheme for German Law Enforcement and Reporting Centers: Insights from Research and Practice
    Mensch und Computer 2024 – Workshopband Karlsruhe, Germany. doi:10.18420/muc2024-mci-ws13-124
    [BibTeX] [Abstract] [Download PDF]

    In Germany, both law enforcement agencies and dedicated reporting centers engage in various activities to counter illegal online hate speech. Due to the high volume of such content and against the background of limited resources, their personnel can be confronted with the issue of information overload. To mitigate this issue, technologies for information filtering, classification, prioritization, and visualization offer great potential. However, domainspecific classification schemes that differentiate subtypes of online hate speech are a prerequisite for the development of such assistive tools. There is a gap in research with regard to an empirically substantiated classification scheme for subtypes of hate speech for the German law enforcement and reporting center domain. Based on a review of relevant computer science publications (N=24) and qualitative interviews with practitioners (N=18), this work investigates practice-relevant subtypes of hate speech and finds that it is primarily differentiated with regard to targeted group affiliations, the conveyance of an immediate security threat, and criminal relevance. It contributes to the state of research with an empirically grounded online hate speech classification scheme for German law enforcement agencies and reporting centers (C1) and five implications for the user-centered design of hate speech classification tools (C2).

    @inproceedings{baumler_towards_2024,
    address = {Karlsruhe, Germany},
    title = {Towards an {Online} {Hate} {Speech} {Classification} {Scheme} for {German} {Law} {Enforcement} and {Reporting} {Centers}: {Insights} from {Research} and {Practice}},
    url = {https://dl.gi.de/items/2fa0ec97-d562-41d2-bab9-0b0539432c87},
    doi = {10.18420/muc2024-mci-ws13-124},
    abstract = {In Germany, both law enforcement agencies and dedicated reporting centers engage in various activities to counter illegal online hate speech. Due to the high volume of such content and against the background of limited resources, their personnel can be confronted with the issue of information overload. To mitigate this issue, technologies for information filtering, classification, prioritization, and visualization offer great potential. However, domainspecific classification schemes that differentiate subtypes of online hate speech are a prerequisite for the development of such assistive tools. There is a gap in research with regard to an empirically substantiated classification scheme for subtypes of hate speech for the German law enforcement and reporting center domain. Based on a review of relevant computer science publications (N=24) and qualitative interviews with practitioners (N=18), this work investigates practice-relevant subtypes of hate speech and finds that it is primarily differentiated with regard to targeted group affiliations, the conveyance of an immediate security threat, and criminal relevance. It contributes to the state of research with an empirically grounded online hate speech classification scheme for German law enforcement agencies and reporting centers (C1) and five implications for the user-centered design of hate speech classification tools (C2).},
    language = {en},
    booktitle = {Mensch und {Computer} 2024 - {Workshopband}},
    publisher = {Gesellschaft für Informatik e.V.},
    author = {Bäumler, Julian and Kaufhold, Marc-André and Voronin, Georg and Reuter, Christian},
    year = {2024},
    keywords = {UsableSec, HCI, Projekt-ATHENE-CyAware, Projekt-CYLENCE},
    }

  • Katrin Hartwig, Ruslan Sandler, Christian Reuter (2024)
    Navigating Misinformation in Voice Messages: Identification of User-Centered Features for Digital Interventions
Risk, Hazards, & Crisis in Public Policy (RHCPP). doi:10.1002/rhc3.12296
    [BibTeX] [Abstract] [Download PDF]

    Misinformation presents a challenge to democracies, particularly in times of crisis. One way in which misinformation is spread is through voice messages sent via messenger groups, which enable members to share information on a larger scale. Gaining user perspectives on digital misinformation interventions as countermeasure after detection is crucial. In this paper, we extract potential features of misinformation in voice messages from literature, implement them within a program that automatically processes voice messages, and evaluate their perceived usefulness and comprehensibility as user-centered indicators.We propose 35 features extracted from audio files at the character, word, sentence, audio and creator levels to assist (1) private individuals in conducting credibility assessments, (2) government agencies faced with data overload during crises, and (3) researchers seeking to gather features for automatic detection approaches. We conducted a think-aloud study with laypersons (N = 20) to provide initial insight into how individuals autonomously assess the credibility of voice messages, as well as which automatically extracted features they find to be clear and convincing indicators of misinformation. Our study provides qualitative and quantitative insights into valuable indicators, particularly when they relate directly to the content or its creator, and uncovers challenges in user interface design.

    @article{hartwig_navigating_2024,
    title = {Navigating {Misinformation} in {Voice} {Messages}: {Identification} of {User}-{Centered} {Features} for {Digital} {Interventions}},
    issn = {1944-4079},
    url = {https://peasec.de/paper/2024/2024_HartwigSandlerReuter_NavigatingMisinfoVoiceMessages_RiskHazards.pdf},
    doi = {10.1002/rhc3.12296},
    abstract = {Misinformation presents a challenge to democracies, particularly in times of crisis. One way in which misinformation is spread is through voice messages sent via messenger groups, which enable members to share information on a larger scale. Gaining user perspectives on digital misinformation interventions as countermeasure after detection is crucial. In this paper, we extract potential features of misinformation in voice messages from literature, implement them within a program that automatically processes voice messages, and evaluate their perceived usefulness and comprehensibility as user-centered indicators.We propose 35 features extracted from audio files at the character, word, sentence, audio and creator levels to assist (1) private individuals in conducting credibility assessments, (2) government agencies faced with data overload during crises, and (3) researchers seeking to gather features for automatic detection approaches. We conducted a think-aloud study with laypersons (N = 20) to provide initial insight into how individuals autonomously assess the credibility of voice messages, as well as which automatically extracted features they find to be clear and convincing indicators of misinformation. Our study provides qualitative and quantitative insights into valuable indicators, particularly when they relate directly to the content or its creator, and uncovers challenges in user interface design.},
    journal = {Risk, Hazards, \& Crisis in Public Policy (RHCPP)},
    author = {Hartwig, Katrin and Sandler, Ruslan and Reuter, Christian},
    year = {2024},
    note = {Publisher: John Wiley \& Sons, Ltd},
    keywords = {Student, UsableSec, Crisis, HCI, Projekt-CYLENCE, A-Paper, Projekt-NEBULA, Projekt-ATHENE, Ranking-ImpactFactor, SocialMedia, Cyberwar},
    }

  • Marc-André Kaufhold, Thea Riebe, Markus Bayer, Christian Reuter (2024)
'We Do Not Have the Capacity to Monitor All Media': A Design Case Study on Cyber Situational Awareness in Computer Emergency Response Teams
    Proceedings of the Conference on Human Factors in Computing Systems (CHI) (Best Paper Award) New York, NY, USA. doi:10.1145/3613904.3642368
    [BibTeX] [Abstract] [Download PDF]

    Computer Emergency Response Teams (CERTs) have been established in the public sector globally to provide advisory, preventive and reactive cybersecurity services for government agencies, citizens, and businesses. Nevertheless, their responsibility of monitoring, analyzing, and communicating cyber threats and security vulnerabilities have become increasingly challenging due to the growing volume and varying quality of information disseminated through public and social channels. Based on a design case study conducted from 2021 to 2023, this paper combines three iterations of expert interviews (N=25), design workshops (N=4) and cognitive walkthroughs (N=25) to design an automated, cross-platform and real-time cybersecurity dashboard. By adopting the notion of cyber situational awareness, the study further extracts user requirements and design heuristics for enhanced threat intelligence and mission awareness in CERTs, discussing the aspects of source integration, data management, customizable visualization, relationship awareness, information assessment, software integration, (inter-)organizational collaboration, and communication of stakeholder warnings.

    @inproceedings{kaufhold_we_2024,
    address = {New York, NY, USA},
    series = {{CHI} '24},
    title = {'{We} {Do} {Not} {Have} the {Capacity} to {Monitor} {All} {Media}': {A} {Design} {Case} {Study} on {Cyber} {Situational} {Awareness} in {Computer} {Emergency} {Response} {Teams}},
    isbn = {9798400703300},
    url = {https://peasec.de/paper/2024/2024_KaufholdRiebeBayerReuter_CertDesignCaseStudy_CHI.pdf},
    doi = {10.1145/3613904.3642368},
    abstract = {Computer Emergency Response Teams (CERTs) have been established in the public sector globally to provide advisory, preventive and reactive cybersecurity services for government agencies, citizens, and businesses. Nevertheless, their responsibility of monitoring, analyzing, and communicating cyber threats and security vulnerabilities has become increasingly challenging due to the growing volume and varying quality of information disseminated through public and social channels. Based on a design case study conducted from 2021 to 2023, this paper combines three iterations of expert interviews (N=25), design workshops (N=4) and cognitive walkthroughs (N=25) to design an automated, cross-platform and real-time cybersecurity dashboard. By adopting the notion of cyber situational awareness, the study further extracts user requirements and design heuristics for enhanced threat intelligence and mission awareness in CERTs, discussing the aspects of source integration, data management, customizable visualization, relationship awareness, information assessment, software integration, (inter-)organizational collaboration, and communication of stakeholder warnings.},
    booktitle = {Proceedings of the {Conference} on {Human} {Factors} in {Computing} {Systems} ({CHI}) ({Best} {Paper} {Award})},
    publisher = {Association for Computing Machinery},
    author = {Kaufhold, Marc-André and Riebe, Thea and Bayer, Markus and Reuter, Christian},
    year = {2024},
    keywords = {Security, UsableSec, HCI, Projekt-CYWARN, Projekt-ATHENE-CyAware, Projekt-CYLENCE, A-Paper, AuswahlKaufhold, AuswahlUsableSec, Ranking-CORE-A*, Selected},
    }

  • Marc-André Kaufhold, Jasmin Haunschild, Christian Reuter (2024)
    Cultural Violence and Peace Interventions in Social Media
    In: Christian Reuter: Information Technology for Peace and Security – IT Applications and Infrastructures in Conflicts, Crises, War, and Peace. Wiesbaden, Germany: Springer Vieweg.
    [BibTeX] [Abstract] [Download PDF]

    Over the last decade, social media services had an enormous impact on modern culture. They are nowadays widely established in everyday life, but also during natural and man-made crises and conflicts. For instance, Facebook was part of the Arabic Spring, in which the tool facilitated the communication and interaction between participants of political protests. On the contrary, terrorists may recruit new members and disseminate ideologies, and social bots may influence social and political processes. Based on the notions of cultural violence and cultural peace as well as the phenomena of fake news, terrorism and social bots, this exploratory review firstly presents human cultural interventions in social media (e.g. dissemination of fake news and terroristic propaganda) and respective countermeasures (e.g. fake news detection and counter-narratives). Secondly, it discusses automatic cultural interventions realised via social bots (e.g. astroturfing, misdirection and smoke screening) and countermeasures (e.g. crowdsourcing and social bot detection). Finally, this chapter concludes with a range of cultural interventions and information and communication technology (ICT) in terms of actors and intentions to identify future research potential for supporting situational assessments during conflicts.

    @incollection{kaufhold_cultural_2024,
    address = {Wiesbaden, Germany},
    title = {Cultural {Violence} and {Peace} {Interventions} in {Social} {Media}},
    url = {https://link.springer.com/chapter/10.1007/978-3-658-44810-3_18},
    abstract = {Over the last decade, social media services had an enormous impact on modern culture. They are nowadays widely established in everyday life, but also during natural and man-made crises and conflicts. For instance, Facebook was part of the Arabic Spring, in which the tool facilitated the communication and interaction between participants of political protests. On the contrary, terrorists may recruit new members and disseminate ideologies, and social bots may influence social and political processes. Based on the notions of cultural violence and cultural peace as well as the phenomena of fake news, terrorism and social bots, this exploratory review firstly presents human cultural interventions in social media (e.g. dissemination of fake news and terroristic propaganda) and respective countermeasures (e.g. fake news detection and counter-narratives). Secondly, it discusses automatic cultural interventions realised via social bots (e.g. astroturfing, misdirection and smoke screening) and countermeasures (e.g. crowdsourcing and social bot detection). Finally, this chapter concludes with a range of cultural interventions and information and communication technology (ICT) in terms of actors and intentions to identify future research potential for supporting situational assessments during conflicts.},
    booktitle = {Information {Technology} for {Peace} and {Security} - {IT} {Applications} and {Infrastructures} in {Conflicts}, {Crises}, {War}, and {Peace}},
    publisher = {Springer Vieweg},
    author = {Kaufhold, Marc-André and Haunschild, Jasmin and Reuter, Christian},
    editor = {Reuter, Christian},
    year = {2024},
    note = {https://doi.org/10.1007/978-3-658-44810-3\_18},
    keywords = {Peace, Crisis, HCI, Projekt-CYLENCE, SocialMedia},
    }

  • Marc-André Kaufhold (2024)
    Exploring the evolving landscape of human-centred crisis informatics: current challenges and future trends
    i-com – Journal of Interactive Media; 23(2):155–163. doi:10.1515/icom-2024-0002
    [BibTeX] [Abstract] [Download PDF]

    Modern Information and Communication Technology (ICT) has been used in safety-critical situations for over twenty years. Rooted in Human-Computer Interaction (HCI) and related disciplines, the field of crisis informatics made considerable efforts to investigate social media use and role patterns in crises, facilitate the collection, processing and refinement of social media data, design and evaluate supportive ICT, and provide cumulative and longitudinal research. This narrative review examines contemporary challenges of human-centred crisis informatics and envisions trends for the following decade, including (I) a broadening scope of crisis informatics, (II) the professionalisation of cross-platform collaboration of citizen communities and emergency services, (III) expert interfaces for explainable and multimodal artificial intelligence for user-generated content assessment, (IV) internet of things and mobile apps for bidirectional communication and warnings in disruption-tolerant networks, as well as (V) digital twins and virtual reality for the effective training of multi-agency collaboration in hybrid hazards.

    @article{kaufhold_exploring_2024,
    title = {Exploring the evolving landscape of human-centred crisis informatics: current challenges and future trends},
    volume = {23},
    issn = {2196-6826},
    url = {https://doi.org/10.1515/icom-2024-0002},
    doi = {10.1515/icom-2024-0002},
    abstract = {Modern Information and Communication Technology (ICT) has been used in safety-critical situations for over twenty years. Rooted in Human-Computer Interaction (HCI) and related disciplines, the field of crisis informatics made considerable efforts to investigate social media use and role patterns in crises, facilitate the collection, processing and refinement of social media data, design and evaluate supportive ICT, and provide cumulative and longitudinal research. This narrative review examines contemporary challenges of human-centred crisis informatics and envisions trends for the following decade, including (I) a broadening scope of crisis informatics, (II) the professionalisation of cross-platform collaboration of citizen communities and emergency services, (III) expert interfaces for explainable and multimodal artificial intelligence for user-generated content assessment, (IV) internet of things and mobile apps for bidirectional communication and warnings in disruption-tolerant networks, as well as (V) digital twins and virtual reality for the effective training of multi-agency collaboration in hybrid hazards.},
    number = {2},
    journal = {i-com - Journal of Interactive Media},
    author = {Kaufhold, Marc-André},
    year = {2024},
    keywords = {Crisis, HCI, Projekt-emergenCITY, Projekt-ATHENE-CyAware, Projekt-CYLENCE, AuswahlCrisis},
    pages = {155--163},
    }

  • Marc-André Kaufhold, Julian Bäumler, Nicolai Koukal, Christian Reuter (2024)
    Towards a Security Advisory Content Retrieval and Extraction System for Computer Emergency Response Teams
    Mensch und Computer 2024 – Workshopband Karlsruhe, Germany. doi:10.18420/muc2024-mci-ws13-133
    [BibTeX] [Abstract] [Download PDF]

    Computer Emergency Response Teams provide advisory, preventive, and reactive cybersecurity services for authorities, citizens, and businesses. However, their responsibility of establishing cyber situational awareness by monitoring and analyzing security advisories and vulnerabilities has become challenging due to the growing volume of information disseminated through public channels. Thus, this paper presents the preliminary design of a system for automatically retrieving and extracting security advisory documents from Common Security Advisory Framework (CSAF), HTML, and RSS sources. The evaluation with various security advisory sources (N=53) shows that the developed system can retrieve 90% of the published advisory documents, which is a significant improvement over systems only relying on the retrieval from RSS feeds (30%).

    @inproceedings{kaufhold_towards_2024,
    address = {Karlsruhe, Germany},
    title = {Towards a {Security} {Advisory} {Content} {Retrieval} and {Extraction} {System} for {Computer} {Emergency} {Response} {Teams}},
    url = {https://dl.gi.de/items/6ee00080-4245-44c0-ae9c-1a9cdea7fa3a},
    doi = {10.18420/muc2024-mci-ws13-133},
    abstract = {Computer Emergency Response Teams provide advisory, preventive, and reactive cybersecurity services for authorities, citizens, and businesses. However, their responsibility of establishing cyber situational awareness by monitoring and analyzing security advisories and vulnerabilities has become challenging due to the growing volume of information disseminated through public channels. Thus, this paper presents the preliminary design of a system for automatically retrieving and extracting security advisory documents from Common Security Advisory Framework (CSAF), HTML, and RSS sources. The evaluation with various security advisory sources (N=53) shows that the developed system can retrieve 90\% of the published advisory documents, which is a significant improvement over systems only relying on the retrieval from RSS feeds (30\%).},
    language = {en},
    booktitle = {Mensch und {Computer} 2024 - {Workshopband}},
    publisher = {Gesellschaft für Informatik e.V.},
    author = {Kaufhold, Marc-André and Bäumler, Julian and Koukal, Nicolai and Reuter, Christian},
    year = {2024},
    keywords = {UsableSec, HCI, Projekt-ATHENE-CyAware, Projekt-CYLENCE},
    }

  • Marc-André Kaufhold, Tilo Mentler, Simon Nestler, Christian Reuter (2024)
    11. Workshop Mensch-Maschine-Interaktion in sicherheitskritischen Systemen
    Mensch und Computer 2024 – Workshopband Karlsruhe, Germany. doi:10.18420/muc2024-mci-ws13-101
    [BibTeX] [Abstract] [Download PDF]

    This workshop focuses on the interaction between humans and technology in safety-critical contexts. These include domains that have been the subject of research for decades (e.g. process control in control rooms) as well as current challenges (e.g. social media in disaster management). In these and many other domains, safe system states can only be guaranteed, or restored as quickly as possible, through a holistic consideration of humans, technology, and organisation. Against this background, the workshop is also dedicated to the usability and acceptance of security concepts as well as to a more conscious engagement of users with this topic.

    @inproceedings{kaufhold_11_2024,
    address = {Karlsruhe, Germany},
    title = {11. {Workshop} {Mensch}-{Maschine}-{Interaktion} in sicherheitskritischen {Systemen}},
    url = {https://dl.gi.de/items/6a526522-0cbf-4672-af8d-d7580cf97f92},
    doi = {10.18420/muc2024-mci-ws13-101},
    abstract = {Im Zentrum dieses Workshops steht die Interaktion von Mensch und Technik in sicherheitskritischen Kontexten. Hierzu zählen Bereiche, die bereits seit Jahrzehnten Gegenstand der Forschung sind (z.B. Prozessführung in Leitwarten), aber auch aktuelle Herausforderungen (z.B. Social Media im Katastrophenschutz). In diesen und vielen weiteren Bereichen gilt, dass sichere Systemzustände nur durch die ganzheitliche Betrachtung von Mensch, Technik und Organisation gewährleistet bzw. schnellstmöglich wieder erreicht werden können. In diesem Zusammenhang ist der Workshop auch der Nutzbarkeit und Akzeptanz von Sicherheitskonzepten sowie einer bewussteren Auseinandersetzung der Nutzenden mit diesem Thema gewidmet.},
    language = {de},
    booktitle = {Mensch und {Computer} 2024 - {Workshopband}},
    publisher = {Gesellschaft für Informatik e.V.},
    author = {Kaufhold, Marc-André and Mentler, Tilo and Nestler, Simon and Reuter, Christian},
    year = {2024},
    keywords = {Security, UsableSec, HCI, Projekt-CYLENCE},
    }

    2023

  • Marc-André Kaufhold, Markus Bayer, Julian Bäumler, Christian Reuter, Stefan Stieglitz, Ali Sercan Basyurt, Milad Mirbabaie, Christoph Fuchß, Kaan Eyilmez (2023)
    CYLENCE: Strategies and Tools for Cross-Media Reporting, Detection, and Treatment of Cyberbullying and Hatespeech in Law Enforcement Agencies
    Mensch und Computer 2023 – Workshopband Rapperswil, Switzerland. doi:10.18420/muc2023-mci-ws01-211
    [BibTeX] [Abstract] [Download PDF]

    Despite the merits of public and social media in private and professional spaces, citizens and professionals are increasingly exposed to cyberabuse, such as cyberbullying and hate speech. Thus, Law Enforcement Agencies (LEAs) are deployed in many countries and organisations to enhance the preventive and reactive capabilities against cyberabuse. However, their tasks are getting more complex due to the increasing amount and varying quality of information disseminated into public channels. Adopting the perspectives of Crisis Informatics and safety-critical Human-Computer Interaction (HCI) and based on both a narrative literature review and group discussions, this paper first outlines the research agenda of the CYLENCE project, which seeks to design strategies and tools for cross-media reporting, detection, and treatment of cyberbullying and hatespeech in investigative and law enforcement agencies. Second, it identifies and elaborates seven research challenges with regard to the monitoring, analysis and communication of cyberabuse in LEAs, which serve as a starting point for in-depth research within the project.

    @inproceedings{kaufhold_cylence_2023,
    address = {Rapperswil, Switzerland},
    title = {{CYLENCE}: {Strategies} and {Tools} for {Cross}-{Media} {Reporting}, {Detection}, and {Treatment} of {Cyberbullying} and {Hatespeech} in {Law} {Enforcement} {Agencies}},
    url = {https://dl.gi.de/items/0e0efe8f-64bf-400c-85f7-02b65f83189d},
    doi = {10.18420/muc2023-mci-ws01-211},
    abstract = {Despite the merits of public and social media in private and professional spaces, citizens and professionals are increasingly exposed to cyberabuse, such as cyberbullying and hate speech. Thus, Law Enforcement Agencies (LEA) are deployed in many countries and organisations to enhance the preventive and reactive capabilities against cyberabuse. However, their tasks are getting more complex by the increasing amount and varying quality of information disseminated into public channels. Adopting the perspectives of Crisis Informatics and safety-critical Human-Computer Interaction (HCI) and based on both a narrative literature review and group discussions, this paper first outlines the research agenda of the CYLENCE project, which seeks to design strategies and tools for cross-media reporting, detection, and treatment of cyberbullying and hatespeech in investigative and law enforcement agencies. Second, it identifies and elaborates seven research challenges with regard to the monitoring, analysis and communication of cyberabuse in LEAs, which serve as a starting point for in-depth research within the project.},
    language = {en},
    booktitle = {Mensch und {Computer} 2023 - {Workshopband}},
    publisher = {Gesellschaft für Informatik e.V.},
    author = {Kaufhold, Marc-André and Bayer, Markus and Bäumler, Julian and Reuter, Christian and Stieglitz, Stefan and Basyurt, Ali Sercan and Mirbabaie, Milad and Fuchß, Christoph and Eyilmez, Kaan},
    year = {2023},
    keywords = {UsableSec, HCI, Projekt-CYLENCE},
    }

  • Christian Reuter, Marc-André Kaufhold, Tom Biselli, Helene Pleil (2023)
    Increasing Adoption Despite Perceived Limitations of Social Media in Emergencies: Representative Insights on German Citizens’ Perception and Trends from 2017 to 2021
    International Journal of Disaster Risk Reduction (IJDRR); 96. doi:10.1016/j.ijdrr.2023.103880
    [BibTeX] [Abstract] [Download PDF]

    The value of social media in crises, disasters, and emergencies across different events, participants, and states is now well-examined in crisis informatics research. Previous research has contributed to the state of the art with empirical insights on the use of social media, approaches for the gathering and processing of big social data, the design and evaluation of information systems, and the analysis of cumulative and longitudinal data. While some studies examined social media use representatively for their target audience, these usually only comprise a single point of inquiry and do not allow for a trend analysis. This work provides results (1) of a representative survey with German citizens from 2021 on use patterns, perceptions, and expectations regarding social media during emergencies. Furthermore, it (2) compares these results to previous surveys and provides insights on temporal changes and trends from 2017, over 2019 to 2021. Our findings highlight that social media use in emergencies increased in 2021 and 2019 compared to 2017. Between 2019 and 2021, the amount of information shared on social media remained on a similar level, while the perceived disadvantages of social media in emergencies significantly increased. In light of demographic variables, the results of the 2021 survey confirm previous findings, according to which older individuals (45+ years) use social media in emergencies less often than younger individuals (18-24 years). Furthermore, while the quicker availability of information was one of the reasons for social media use, especially the potential information overload was a key factor for not using social media in emergencies. The results are discussed in light of the dynamic nature of attitudes regarding social media in emergencies and the need to account for heterogeneity in user expectations to build trustworthy information ecosystems in social media.

    @article{reuter_increasing_2023,
    title = {Increasing {Adoption} {Despite} {Perceived} {Limitations} of {Social} {Media} in {Emergencies}: {Representative} {Insights} on {German} {Citizens}’ {Perception} and {Trends} from 2017 to 2021},
    volume = {96},
    issn = {2212-4209},
    url = {https://peasec.de/paper/2023/2023_ReuterKaufholdBiselliPleil_SocialMediaEmergenciesSurvey_IJDRR.pdf},
    doi = {10.1016/j.ijdrr.2023.103880},
    abstract = {The value of social media in crises, disasters, and emergencies across different events, participants, and states is now well-examined in crisis informatics research. Previous research has contributed to the state of the art with empirical insights on the use of social media, approaches for the gathering and processing of big social data, the design and evaluation of information systems, and the analysis of cumulative and longitudinal data. While some studies examined social media use representatively for their target audience, these usually only comprise a single point of inquiry and do not allow for a trend analysis. This work provides results (1) of a representative survey with German citizens from 2021 on use patterns, perceptions, and expectations regarding social media during emergencies. Furthermore, it (2) compares these results to previous surveys and provides insights on temporal changes and trends from 2017, over 2019 to 2021. Our findings highlight that social media use in emergencies increased in 2021 and 2019 compared to 2017. Between 2019 and 2021, the amount of information shared on social media remained on a similar level, while the perceived disadvantages of social media in emergencies significantly increased. In light of demographic variables, the results of the 2021 survey confirm previous findings, according to which older individuals (45+ years) use social media in emergencies less often than younger individuals (18-24 years). Furthermore, while the quicker availability of information was one of the reasons for social media use, especially the potential information overload was a key factor for not using social media in emergencies. The results are discussed in light of the dynamic nature of attitudes regarding social media in emergencies and the need to account for heterogeneity in user expectations to build trustworthy information ecosystems in social media.},
    journal = {International Journal of Disaster Risk Reduction (IJDRR)},
    author = {Reuter, Christian and Kaufhold, Marc-André and Biselli, Tom and Pleil, Helene},
    year = {2023},
    keywords = {Student, Crisis, Projekt-emergenCITY, Projekt-CYLENCE, A-Paper, AuswahlCrisis, Projekt-NEBULA, Ranking-ImpactFactor, SocialMedia},
    }

    Funding

    Project name: Development of Strategies and Tools for the Cross-Media Reporting, Detection, and Treatment of Cyberbullying and Hate Speech in Investigative and Law Enforcement Agencies (CYLENCE)

    Keywords: cyberbullying, hate speech, reporting centre, situational awareness, investigative and law enforcement agencies, social media analytics, artificial intelligence, visual analytics

    Funding: programme „Zivile Sicherheit – Bedrohungen aus dem digitalen Raum“ (Civil Security – Threats from the Digital Space) of the German Federal Ministry of Education and Research (BMBF).

    Funding codes: 13N16636 to 13N16639

    Duration: 08/2023 – 07/2026

    Project management agency: VDI Technologiezentrum GmbH

    Contact

    CYLENCE is a research project funded by the German Federal Ministry of Education and Research (BMBF) and coordinated by the Technische Universität Darmstadt.

    Technische Universität Darmstadt

    Department of Computer Science
    Science and Technology for Peace and Security (PEASEC)
    Pankratiusstraße 2, 64289 Darmstadt
    www.peasec.de

    Consortium Coordinator

    Prof. Dr. Dr. Christian Reuter
    www.peasec.de/team/reuter

    Project Manager

    Dr. Marc-André Kaufhold
    www.peasec.de/team/kaufhold

    Office