Research Associate / Post-Doctoral Researcher (Wissenschaftlicher Mitarbeiter / Post-Doktorand)

Contact:

  • Email: bayer(at)peasec.tu-darmstadt.de
  • Address: Technical University of Darmstadt, Department of Computer Science, Science and Technology for Peace and Security (PEASEC), Pankratiusstraße 2, 64289 Darmstadt, Germany

EN

Dr. rer. nat. Markus Bayer is a research associate and post-doctoral researcher at the Chair of Science and Technology for Peace and Security (PEASEC) in the Department of Computer Science at the Technical University of Darmstadt. His doctoral thesis (Dr. rer. nat.), titled Deep Learning in Textual Low-Data Regimes for Cybersecurity, explores innovative approaches to applying machine learning models effectively in the cybersecurity domain, even in environments with limited annotated data.

Dr. Bayer specializes in developing cutting-edge techniques such as active learning, data augmentation, transfer learning, and adversarial training to enhance the robustness and precision of machine learning models in cybersecurity applications. Early in his research, he recognized the transformative potential of large language models like GPT-4 and successfully integrated them into his methodologies, which have had a notable impact on both the academic community and real-world applications.

During his studies (B.Sc. and M.Sc.) in computer science at the Technical University of Darmstadt, Dr. Bayer focused on machine learning and its applications in peace and security research. Currently, he contributes his expertise to high-impact projects such as CYLENCE, CYWARN, and ATHENE, where he leverages deep learning to extract critical cybersecurity insights and combat hate speech.

His overarching goal is to address pressing, practical challenges, such as training deep learning models with minimal data, by conducting theoretically sound and targeted research that bridges the gap between academia and industry needs.

DE

Dr. rer. nat. Markus Bayer ist wissenschaftlicher Mitarbeiter und Post-Doktorand am Lehrstuhl Wissenschaft und Technik für Frieden und Sicherheit (PEASEC) im Fachbereich Informatik der Technischen Universität Darmstadt. In seiner Dissertation (Dr. rer. nat.) mit dem Titel Deep Learning in Textual Low-Data Regimes for Cybersecurity untersucht er neuartige Ansätze, wie maschinelle Lernmodelle trotz begrenzter annotierter Daten im Bereich der Cybersicherheit effizient eingesetzt werden können.

Dr. Bayer hat sich auf die Entwicklung fortschrittlicher Techniken wie Active Learning, Data Augmentation, Transfer Learning und Adversarial Training spezialisiert, um die Robustheit und Genauigkeit maschineller Lernmodelle in der Cybersicherheit zu verbessern. Schon früh erkannte er das transformative Potenzial von Large Language Models wie GPT-4 und integrierte diese erfolgreich in seine Methoden, was sowohl in der akademischen Gemeinschaft als auch in der Praxis einen bemerkenswerten Einfluss hatte.

Während seines Informatikstudiums (B.Sc. und M.Sc.) an der Technischen Universität Darmstadt lag Dr. Bayers Fokus auf maschinellem Lernen und dessen Anwendung in der Friedens- und Sicherheitsforschung. Derzeit bringt er seine Expertise in hochkarätige Projekte wie CYLENCE, CYWARN und ATHENE ein, in denen er mithilfe von Deep Learning wichtige Informationen zur Cybersicherheit gewinnt und Hate Speech bekämpft.

Sein übergeordnetes Ziel ist es, drängende praktische Herausforderungen wie das Training von Modellen mit minimalen Daten durch theoretisch fundierte und zielgerichtete Forschung zu bewältigen und so die Lücke zwischen akademischen Erkenntnissen und den Anforderungen der Industrie zu schließen.

Publications

2025

  • Markus Bayer (2025)
    Deep Learning in Textual Low-Data Regimes for Cybersecurity
    Wiesbaden, Germany: Springer Vieweg.
    [BibTeX]

    @book{bayer_deep_2025,
    address = {Wiesbaden, Germany},
    title = {Deep {Learning} in {Textual} {Low}-{Data} {Regimes} for {Cybersecurity}},
    publisher = {Springer Vieweg},
    author = {Bayer, Markus},
    year = {2025},
    keywords = {DissPublisher, Projekt-ATHENE-CyAware, Projekt-CYLENCE, Projekt-CYWARN, Security},
    }

2024

  • Markus Bayer, Christian Reuter (2024)
    ActiveLLM: Large Language Model-based Active Learning for Textual Few-Shot Scenarios
    arXiv.
    [BibTeX] [Abstract] [Download PDF]

    Active learning is designed to minimize annotation efforts by prioritizing instances that most enhance learning. However, many active learning strategies struggle with a 'cold start' problem, needing substantial initial data to be effective. This limitation often reduces their utility for pre-trained models, which already perform well in few-shot scenarios. To address this, we introduce ActiveLLM, a novel active learning approach that leverages large language models such as GPT-4, Llama 3, and Mistral Large for selecting instances. We demonstrate that ActiveLLM significantly enhances the classification performance of BERT classifiers in few-shot scenarios, outperforming both traditional active learning methods and the few-shot learning method SetFit. Additionally, ActiveLLM can be extended to non-few-shot scenarios, allowing for iterative selections. In this way, ActiveLLM can even help other active learning strategies to overcome their cold start problem. Our results suggest that ActiveLLM offers a promising solution for improving model performance across various learning setups.

    @article{bayer_activellm_2024,
    title = {{ActiveLLM}: {Large} {Language} {Model}-based {Active} {Learning} for {Textual} {Few}-{Shot} {Scenarios}},
    url = {https://arxiv.org/pdf/2405.10808},
    abstract = {Active learning is designed to minimize annotation efforts by prioritizing instances that most enhance learning. However, many active learning strategies struggle with a 'cold start' problem, needing substantial initial data to be effective. This limitation often reduces their utility for pre-trained models, which already perform well in few-shot scenarios. To address this, we introduce ActiveLLM, a novel active learning approach that leverages large language models such as GPT-4, Llama 3, and Mistral Large for selecting instances. We demonstrate that ActiveLLM significantly enhances the classification performance of BERT classifiers in few-shot scenarios, outperforming both traditional active learning methods and the few-shot learning method SetFit. Additionally, ActiveLLM can be extended to non-few-shot scenarios, allowing for iterative selections. In this way, ActiveLLM can even help other active learning strategies to overcome their cold start problem. Our results suggest that ActiveLLM offers a promising solution for improving model performance across various learning setups.},
    journal = {arXiv},
    author = {Bayer, Markus and Reuter, Christian},
    year = {2024},
    keywords = {Security, UsableSec, Projekt-ATHENE-CyAware, Projekt-CYLENCE},
    }
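The selection step that ActiveLLM delegates to an LLM can be sketched in a few lines: build a prompt over the unlabeled pool, then defensively parse the model's index list. This is an illustrative reconstruction, not the paper's code; `build_selection_prompt` and `parse_selection` are hypothetical names, and the actual LLM call (GPT-4, Llama 3, or Mistral Large) is left out.

```python
def build_selection_prompt(pool, k):
    """Ask an LLM to pick the k unlabeled texts whose labels would most
    help a few-shot classifier, the core idea behind ActiveLLM."""
    lines = [f"{i}: {text}" for i, text in enumerate(pool)]
    return (
        f"From the {len(pool)} unlabeled texts below, select the {k} whose "
        "labels would most improve a text classifier trained on very few "
        "examples. Answer with the chosen indices only, comma-separated.\n"
        + "\n".join(lines)
    )

def parse_selection(reply, pool_size, k):
    """Defensively parse the LLM reply: keep only valid, unique indices."""
    picked = []
    for token in reply.replace(",", " ").split():
        if token.isdigit() and int(token) < pool_size and int(token) not in picked:
            picked.append(int(token))
    return picked[:k]
```

The returned indices would then go to human annotators, and the labeled pairs would train the downstream BERT classifier.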

  • Markus Bayer, Philipp Kuehn, Ramin Shanehsaz, Christian Reuter (2024)
    CySecBERT: A Domain-Adapted Language Model for the Cybersecurity Domain
    ACM Transactions on Privacy and Security (TOPS); 27(2). doi:10.1145/3652594
    [BibTeX] [Abstract] [Download PDF]

    The field of cybersecurity is evolving fast. Security professionals are in need of intelligence on past, current and – ideally – on upcoming threats, because attacks are becoming more advanced and are increasingly targeting larger and more complex systems. Since the processing and analysis of such large amounts of information cannot be addressed manually, cybersecurity experts rely on machine learning techniques. In the textual domain, pre-trained language models like BERT have proven to be helpful as they provide a good baseline for further fine-tuning. However, due to the domain-knowledge and the many technical terms in cybersecurity, general language models might miss the gist of textual information. For this reason, we create a high-quality dataset and present a language model specifically tailored to the cybersecurity domain which can serve as a basic building block for cybersecurity systems. The model is compared on 15 tasks: Domain-dependent extrinsic tasks for measuring the performance on specific problems, intrinsic tasks for measuring the performance of the internal representations of the model as well as general tasks from the SuperGLUE benchmark. The results of the intrinsic tasks show that our model improves the internal representation space of domain words compared to the other models. The extrinsic, domain-dependent tasks, consisting of sequence tagging and classification, show that the model performs best in cybersecurity scenarios. In addition, we pay special attention to the choice of hyperparameters against catastrophic forgetting, as pre-trained models tend to forget the original knowledge during further training.

    @article{bayer_cysecbert_2024,
    title = {{CySecBERT}: {A} {Domain}-{Adapted} {Language} {Model} for the {Cybersecurity} {Domain}},
    volume = {27},
    issn = {2471-2566},
    url = {https://doi.org/10.1145/3652594},
    doi = {10.1145/3652594},
    abstract = {The field of cybersecurity is evolving fast. Security professionals are in need of intelligence on past, current and - ideally - on upcoming threats, because attacks are becoming more advanced and are increasingly targeting larger and more complex systems. Since the processing and analysis of such large amounts of information cannot be addressed manually, cybersecurity experts rely on machine learning techniques. In the textual domain, pre-trained language models like BERT have proven to be helpful as they provide a good baseline for further fine-tuning. However, due to the domain-knowledge and the many technical terms in cybersecurity, general language models might miss the gist of textual information. For this reason, we create a high-quality dataset and present a language model specifically tailored to the cybersecurity domain which can serve as a basic building block for cybersecurity systems. The model is compared on 15 tasks: Domain-dependent extrinsic tasks for measuring the performance on specific problems, intrinsic tasks for measuring the performance of the internal representations of the model as well as general tasks from the SuperGLUE benchmark. The results of the intrinsic tasks show that our model improves the internal representation space of domain words compared to the other models. The extrinsic, domain-dependent tasks, consisting of sequence tagging and classification, show that the model performs best in cybersecurity scenarios. In addition, we pay special attention to the choice of hyperparameters against catastrophic forgetting, as pre-trained models tend to forget the original knowledge during further training.},
    number = {2},
    journal = {ACM Transactions on Privacy and Security (TOPS)},
    author = {Bayer, Markus and Kuehn, Philipp and Shanehsaz, Ramin and Reuter, Christian},
    month = apr,
    year = {2024},
    note = {Place: New York, NY, USA
    Publisher: Association for Computing Machinery},
    keywords = {Student, Security, UsableSec, Projekt-CYWARN, Projekt-ATHENE-CyAware, Projekt-CYLENCE, A-Paper, Ranking-CORE-A, Ranking-ImpactFactor},
    }
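CySecBERT is produced by continuing BERT's masked-language-model pretraining on a cybersecurity corpus. The fragment below sketches only the masking objective, as a deliberately simplified stand-in: real BERT pretraining operates on subword IDs, keeps 10% of selected tokens unchanged, and replaces another 10% with random tokens, none of which is modeled here.

```python
import random

def mask_for_mlm(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """BERT-style masked-language-modeling corruption: hide a fraction of
    tokens so the model must reconstruct them from (domain) context."""
    rng = random.Random(seed)
    masked, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(mask_token)   # model must predict this position
            targets.append(tok)
        else:
            masked.append(tok)          # no loss on unmasked positions
            targets.append(None)
    return masked, targets
```

Training on cybersecurity text with this objective is what shifts the internal representation space toward domain vocabulary, which the paper's intrinsic tasks measure.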

  • Markus Bayer, Markus Neiczer, Maximilian Samsinger, Björn Buchhold, Christian Reuter (2024)
    XAI-Attack: Utilizing Explainable AI to Find Incorrectly Learned Patterns for Black-Box Adversarial Example Creation
    Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING) Torino, Italia.
    [BibTeX] [Abstract] [Download PDF]

    Adversarial examples, capable of misleading machine learning models into making erroneous predictions, pose significant risks in safety-critical domains such as crisis informatics, medicine, and autonomous driving. To counter this, we introduce a novel textual adversarial example method that identifies falsely learned word indicators by leveraging explainable AI methods as importance functions on incorrectly predicted instances, thus revealing and understanding the weaknesses of a model. Coupled with adversarial training, this approach guides models to adopt complex decision rules when necessary and simpler ones otherwise, enhancing their robustness. To evaluate the effectiveness of our approach, we conduct a human and a transfer evaluation and propose a novel adversarial training evaluation setting for better robustness assessment. While outperforming current adversarial example and training methods, the results also show our method’s potential in facilitating the development of more resilient transformer models by detecting and rectifying biases and patterns in training data, showing baseline improvements of up to 23 percentage points in accuracy on adversarial tasks. The code of our approach is freely available for further exploration and use.

    @inproceedings{bayer_xai-attack_2024,
    address = {Torino, Italia},
    title = {{XAI}-{Attack}: {Utilizing} {Explainable} {AI} to {Find} {Incorrectly} {Learned} {Patterns} for {Black}-{Box} {Adversarial} {Example} {Creation}},
    url = {https://aclanthology.org/2024.lrec-main.1542},
    abstract = {Adversarial examples, capable of misleading machine learning models into making erroneous predictions, pose significant risks in safety-critical domains such as crisis informatics, medicine, and autonomous driving. To counter this, we introduce a novel textual adversarial example method that identifies falsely learned word indicators by leveraging explainable AI methods as importance functions on incorrectly predicted instances, thus revealing and understanding the weaknesses of a model. Coupled with adversarial training, this approach guides models to adopt complex decision rules when necessary and simpler ones otherwise, enhancing their robustness. To evaluate the effectiveness of our approach, we conduct a human and a transfer evaluation and propose a novel adversarial training evaluation setting for better robustness assessment. While outperforming current adversarial example and training methods, the results also show our method's potential in facilitating the development of more resilient transformer models by detecting and rectifying biases and patterns in training data, showing baseline improvements of up to 23 percentage points in accuracy on adversarial tasks. The code of our approach is freely available for further exploration and use.},
    booktitle = {Proceedings of the 2024 {Joint} {International} {Conference} on {Computational} {Linguistics}, {Language} {Resources} and {Evaluation} ({LREC}-{COLING})},
    publisher = {ELRA and ICCL},
    author = {Bayer, Markus and Neiczer, Markus and Samsinger, Maximilian and Buchhold, Björn and Reuter, Christian},
    month = may,
    year = {2024},
    keywords = {Security, UsableSec, Projekt-ATHENE-CyAware, Projekt-CYLENCE, Ranking-CORE-A},
    pages = {17725--17738},
    }
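XAI-Attack derives adversarial examples from words that explainability methods flag as drivers of incorrect predictions. The toy sketch below uses occlusion (drop a word, observe the score change) as a simplified stand-in for the paper's importance functions; both function names and the scoring scheme are my own illustration, not the released code.

```python
def top_indicator_words(text, predict, k=1):
    """Occlusion-based importance: a word matters if removing it lowers
    the classifier's score for the predicted (possibly wrong) class."""
    words = text.split()
    base = predict(text)
    scores = []
    for i, w in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        scores.append((base - predict(reduced), w))
    scores.sort(reverse=True)
    return [w for _, w in scores[:k]]

def make_adversarial(text, indicators):
    """Insert the falsely learned indicator words into a clean input; if
    the model flips its prediction, the indicators, not the content,
    drove the decision, exposing a spurious pattern."""
    return text + " " + " ".join(indicators)
```

In the paper's setting, such examples are fed back via adversarial training so the model stops relying on the spurious indicators.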

  • Markus Bayer (2024)
    Deep Learning in Textual Low-Data Regimes for Cybersecurity
    Darmstadt, Germany: Dissertation (Dr. rer. nat.), Department of Computer Science, Technische Universität Darmstadt.
    [BibTeX]

    @book{bayer_deep_2024,
    address = {Darmstadt, Germany},
    title = {Deep {Learning} in {Textual} {Low}-{Data} {Regimes} for {Cybersecurity}},
    publisher = {Dissertation (Dr. rer. nat.), Department of Computer Science, Technische Universität Darmstadt},
    author = {Bayer, Markus},
    year = {2024},
    keywords = {Security, Projekt-CYWARN, Projekt-ATHENE-CyAware, Projekt-CYLENCE, Dissertation},
    }

  • Marc-André Kaufhold, Thea Riebe, Markus Bayer, Christian Reuter (2024)
    'We Do Not Have the Capacity to Monitor All Media': A Design Case Study on Cyber Situational Awareness in Computer Emergency Response Teams
    Proceedings of the Conference on Human Factors in Computing Systems (CHI) (Best Paper Award) New York, NY, USA. doi:10.1145/3613904.3642368
    [BibTeX] [Abstract] [Download PDF]

    Computer Emergency Response Teams (CERTs) have been established in the public sector globally to provide advisory, preventive and reactive cybersecurity services for government agencies, citizens, and businesses. Nevertheless, their responsibility of monitoring, analyzing, and communicating cyber threats and security vulnerabilities have become increasingly challenging due to the growing volume and varying quality of information disseminated through public and social channels. Based on a design case study conducted from 2021 to 2023, this paper combines three iterations of expert interviews (N=25), design workshops (N=4) and cognitive walkthroughs (N=25) to design an automated, cross-platform and real-time cybersecurity dashboard. By adopting the notion of cyber situational awareness, the study further extracts user requirements and design heuristics for enhanced threat intelligence and mission awareness in CERTs, discussing the aspects of source integration, data management, customizable visualization, relationship awareness, information assessment, software integration, (inter-)organizational collaboration, and communication of stakeholder warnings.

    @inproceedings{kaufhold_we_2024,
    address = {New York, NY, USA},
    series = {{CHI} '24},
    title = {'{We} {Do} {Not} {Have} the {Capacity} to {Monitor} {All} {Media}': {A} {Design} {Case} {Study} on {Cyber} {Situational} {Awareness} in {Computer} {Emergency} {Response} {Teams}},
    isbn = {9798400703300},
    url = {https://peasec.de/paper/2024/2024_KaufholdRiebeBayerReuter_CertDesignCaseStudy_CHI.pdf},
    doi = {10.1145/3613904.3642368},
    abstract = {Computer Emergency Response Teams (CERTs) have been established in the public sector globally to provide advisory, preventive and reactive cybersecurity services for government agencies, citizens, and businesses. Nevertheless, their responsibility of monitoring, analyzing, and communicating cyber threats and security vulnerabilities have become increasingly challenging due to the growing volume and varying quality of information disseminated through public and social channels. Based on a design case study conducted from 2021 to 2023, this paper combines three iterations of expert interviews (N=25), design workshops (N=4) and cognitive walkthroughs (N=25) to design an automated, cross-platform and real-time cybersecurity dashboard. By adopting the notion of cyber situational awareness, the study further extracts user requirements and design heuristics for enhanced threat intelligence and mission awareness in CERTs, discussing the aspects of source integration, data management, customizable visualization, relationship awareness, information assessment, software integration, (inter-)organizational collaboration, and communication of stakeholder warnings.},
    booktitle = {Proceedings of the {Conference} on {Human} {Factors} in {Computing} {Systems} ({CHI}) ({Best} {Paper} {Award})},
    publisher = {Association for Computing Machinery},
    author = {Kaufhold, Marc-André and Riebe, Thea and Bayer, Markus and Reuter, Christian},
    year = {2024},
    keywords = {Security, UsableSec, HCI, Projekt-CYWARN, Projekt-ATHENE-CyAware, Projekt-CYLENCE, A-Paper, AuswahlKaufhold, AuswahlUsableSec, Ranking-CORE-A*, Selected},
    }

2023

  • Markus Bayer, Tobias Frey, Christian Reuter (2023)
    Multi-Level Fine-Tuning, Data Augmentation, and Few-Shot Learning for Specialized Cyber Threat Intelligence
    Computers & Security. doi:10.1016/j.cose.2023.103430
    [BibTeX] [Abstract] [Download PDF]

    A Design Science Artefact for Cyber Threat Detection and Actor Specific Communication

    @article{bayer_multi-level_2023,
    title = {Multi-{Level} {Fine}-{Tuning}, {Data} {Augmentation}, and {Few}-{Shot} {Learning} for {Specialized} {Cyber} {Threat} {Intelligence}},
    issn = {0167-4048},
    url = {https://peasec.de/paper/2023/2023_BayerFreyReuter_MultiLevelFineTuningForCyberThreatIntelligence_CS.pdf},
    doi = {10.1016/j.cose.2023.103430},
    abstract = {A Design Science Artefact for Cyber Threat Detection and Actor Specific Communication},
    journal = {Computers \& Security},
    author = {Bayer, Markus and Frey, Tobias and Reuter, Christian},
    year = {2023},
    keywords = {Student, Security, Projekt-CYWARN, Projekt-CROSSING, A-Paper, Projekt-ATHENE, Ranking-ImpactFactor},
    }
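The multi-level idea in the title, adapting a model through stages of increasing specificity so that scarce task data only has to teach the final step, reduces to a simple staged loop. This is a schematic outline under my own naming, not the paper's pipeline; the real stages would be a general language model, continued pretraining on cybersecurity text, and fine-tuning on the CTI task.

```python
def multi_level_finetune(model, stages, train):
    """Apply fine-tuning stages in order of increasing specificity,
    e.g. general corpus -> cybersecurity corpus -> target CTI task.
    Each stage starts from the model produced by the previous one."""
    for _name, data in stages:
        model = train(model, data)
    return model
```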

  • Markus Bayer, Marc-André Kaufhold, Christian Reuter (2023)
    A Survey on Data Augmentation for Text Classification
    ACM Computing Surveys (CSUR); 55(7):1–39. doi:10.1145/3544558
    [BibTeX] [Abstract] [Download PDF]

    Data augmentation, the artificial creation of training data for machine learning by transformations, is a widely studied research field across machine learning disciplines. While it is useful for increasing a model’s generalization capabilities, it can also address many other challenges and problems, from overcoming a limited amount of training data, to regularizing the objective, to limiting the amount data used to protect privacy. Based on a precise description of the goals and applications of data augmentation and a taxonomy for existing works, this survey is concerned with data augmentation methods for textual classification and aims to provide a concise and comprehensive overview for researchers and practitioners. Derived from the taxonomy, we divide more than 100 methods into 12 different groupings and give state-of-the-art references expounding which methods are highly promising by relating them to each other. Finally, research perspectives that may constitute a building block for future work are provided.

    @article{bayer_survey_2023,
    title = {A {Survey} on {Data} {Augmentation} for {Text} {Classification}},
    volume = {55},
    url = {https://dl.acm.org/doi/pdf/10.1145/3544558},
    doi = {10.1145/3544558},
    abstract = {Data augmentation, the artificial creation of training data for machine learning by transformations, is a widely studied research field across machine learning disciplines. While it is useful for increasing a model's generalization capabilities, it can also address many other challenges and problems, from overcoming a limited amount of training data, to regularizing the objective, to limiting the amount data used to protect privacy. Based on a precise description of the goals and applications of data augmentation and a taxonomy for existing works, this survey is concerned with data augmentation methods for textual classification and aims to provide a concise and comprehensive overview for researchers and practitioners. Derived from the taxonomy, we divide more than 100 methods into 12 different groupings and give state-of-the-art references expounding which methods are highly promising by relating them to each other. Finally, research perspectives that may constitute a building block for future work are provided.},
    number = {7},
    journal = {ACM Computing Surveys (CSUR)},
    author = {Bayer, Markus and Kaufhold, Marc-André and Reuter, Christian},
    year = {2023},
    keywords = {Crisis, Projekt-CYWARN, Projekt-emergenCITY, Projekt-ATHENE-SecUrban, A-Paper, AuswahlKaufhold, Ranking-CORE-A*, Selected, AuswahlCrisis, Ranking-ImpactFactor},
    pages = {1--39},
    }
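One of the simplest method groups the survey covers is word-level replacement: generating label-preserving variants of a training sentence from a synonym dictionary. A minimal sketch, with a hypothetical function name and a toy dictionary standing in for a real lexical resource such as WordNet:

```python
import random

def synonym_augment(text, synonyms, n_aug=2, seed=0):
    """Create n_aug label-preserving variants of a sentence by swapping
    each word for a dictionary synonym (words without an entry are kept)."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n_aug):
        words = [rng.choice(synonyms.get(w, [w])) for w in text.split()]
        variants.append(" ".join(words))
    return variants
```

The augmented variants are added to the training set alongside the originals, with the same labels.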

  • Marc-André Kaufhold, Markus Bayer, Julian Bäumler, Christian Reuter, Stefan Stieglitz, Ali Sercan Basyurt, Milad Mirabaie, Christoph Fuchß, Kaan Eyilmez (2023)
    CYLENCE: Strategies and Tools for Cross-Media Reporting, Detection, and Treatment of Cyberbullying and Hatespeech in Law Enforcement Agencies
    Mensch und Computer 2023 – Workshopband Rapperswil, Switzerland. doi:10.18420/muc2023-mci-ws01-211
    [BibTeX] [Abstract] [Download PDF]

    Despite the merits of public and social media in private and professional spaces, citizens and professionals are increasingly exposed to cyberabuse, such as cyberbullying and hate speech. Thus, Law Enforcement Agencies (LEA) are deployed in many countries and organisations to enhance the preventive and reactive capabilities against cyberabuse. However, their tasks are getting more complex by the increasing amount and varying quality of information disseminated into public channels. Adopting the perspectives of Crisis Informatics and safety-critical Human-Computer Interaction (HCI) and based on both a narrative literature review and group discussions, this paper first outlines the research agenda of the CYLENCE project, which seeks to design strategies and tools for cross-media reporting, detection, and treatment of cyberbullying and hatespeech in investigative and law enforcement agencies. Second, it identifies and elaborates seven research challenges with regard to the monitoring, analysis and communication of cyberabuse in LEAs, which serve as a starting point for in-depth research within the project.

    @inproceedings{kaufhold_cylence_2023,
    address = {Rapperswil, Switzerland},
    title = {{CYLENCE}: {Strategies} and {Tools} for {Cross}-{Media} {Reporting}, {Detection}, and {Treatment} of {Cyberbullying} and {Hatespeech} in {Law} {Enforcement} {Agencies}},
    url = {https://dl.gi.de/items/0e0efe8f-64bf-400c-85f7-02b65f83189d},
    doi = {10.18420/muc2023-mci-ws01-211},
    abstract = {Despite the merits of public and social media in private and professional spaces, citizens and professionals are increasingly exposed to cyberabuse, such as cyberbullying and hate speech. Thus, Law Enforcement Agencies (LEA) are deployed in many countries and organisations to enhance the preventive and reactive capabilities against cyberabuse. However, their tasks are getting more complex by the increasing amount and varying quality of information disseminated into public channels. Adopting the perspectives of Crisis Informatics and safety-critical Human-Computer Interaction (HCI) and based on both a narrative literature review and group discussions, this paper first outlines the research agenda of the CYLENCE project, which seeks to design strategies and tools for cross-media reporting, detection, and treatment of cyberbullying and hatespeech in investigative and law enforcement agencies. Second, it identifies and elaborates seven research challenges with regard to the monitoring, analysis and communication of cyberabuse in LEAs, which serve as a starting point for in-depth research within the project.},
    language = {de},
    booktitle = {Mensch und {Computer} 2023 - {Workshopband}},
    publisher = {Gesellschaft für Informatik e.V.},
    author = {Kaufhold, Marc-André and Bayer, Markus and Bäumler, Julian and Reuter, Christian and Stieglitz, Stefan and Basyurt, Ali Sercan and Mirabaie, Milad and Fuchß, Christoph and Eyilmez, Kaan},
    year = {2023},
    keywords = {UsableSec, HCI, Projekt-CYLENCE},
    }

  • Philipp Kuehn, Mike Schmidt, Markus Bayer, Christian Reuter (2023)
    ThreatCrawl: A BERT-based Focused Crawler for the Cybersecurity Domain
    arXiv preprint arXiv:2304.11960.
    [BibTeX] [Abstract] [Download PDF]

    Publicly available information contains valuable information for Cyber Threat Intelligence (CTI). This can be used to prevent attacks that have already taken place on other systems. Ideally, only the initial attack succeeds and all subsequent ones are detected and stopped. But while there are different standards to exchange this information, a lot of it is shared in articles or blog posts in non-standardized ways. Manually scanning through multiple online portals and news pages to discover new threats and extracting them is a time-consuming task. To automize parts of this scanning process, multiple papers propose extractors that use Natural Language Processing (NLP) to extract Indicators of Compromise (IOCs) from documents. However, while this already solves the problem of extracting the information out of documents, the search for these documents is rarely considered. In this paper, a new focused crawler is proposed called ThreatCrawl, which uses Bidirectional Encoder Representations from Transformers (BERT)-based models to classify documents and adapt its crawling path dynamically. While ThreatCrawl has difficulties to classify the specific type of Open Source Intelligence (OSINT) named in texts, e.g., IOC content, it can successfully find relevant documents and modify its path accordingly. It yields harvest rates of up to 52\%, which are, to the best of our knowledge, better than the current state of the art.

    @techreport{kuehn_threatcrawl_2023,
    title = {{ThreatCrawl}: {A} {BERT}-based {Focused} {Crawler} for the {Cybersecurity} {Domain}},
    shorttitle = {{ThreatCrawl}},
    url = {http://arxiv.org/abs/2304.11960},
    abstract = {Publicly available information contains valuable information for Cyber Threat Intelligence (CTI). This can be used to prevent attacks that have already taken place on other systems. Ideally, only the initial attack succeeds and all subsequent ones are detected and stopped. But while there are different standards to exchange this information, a lot of it is shared in articles or blog posts in non-standardized ways. Manually scanning through multiple online portals and news pages to discover new threats and extracting them is a time-consuming task. To automize parts of this scanning process, multiple papers propose extractors that use Natural Language Processing (NLP) to extract Indicators of Compromise (IOCs) from documents. However, while this already solves the problem of extracting the information out of documents, the search for these documents is rarely considered. In this paper, a new focused crawler is proposed called ThreatCrawl, which uses Bidirectional Encoder Representations from Transformers (BERT)-based models to classify documents and adapt its crawling path dynamically. While ThreatCrawl has difficulties to classify the specific type of Open Source Intelligence (OSINT) named in texts, e.g., IOC content, it can successfully find relevant documents and modify its path accordingly. It yields harvest rates of up to 52\%, which are, to the best of our knowledge, better than the current state of the art.},
    number = {arXiv:2304.11960},
    urldate = {2023-04-27},
    institution = {arXiv},
    author = {Kuehn, Philipp and Schmidt, Mike and Bayer, Markus and Reuter, Christian},
    month = apr,
    year = {2023},
    note = {arXiv:2304.11960 [cs]},
    keywords = {Student, Security, Projekt-CYWARN, Projekt-ATHENE-SecUrban},
    }
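The crawling policy the abstract describes, expanding links only from pages a classifier judges relevant, reduces to a short loop. A schematic stand-in: the real ThreatCrawl uses BERT-based relevance models and actual HTTP fetching, whereas `fetch` and `relevant` here are injected toy callables.

```python
def focused_crawl(seeds, fetch, relevant, max_pages=100):
    """Focused crawling: harvest and expand only pages the classifier
    marks as on-topic, steering the path toward relevant documents."""
    seen, frontier, harvest = set(seeds), list(seeds), []
    while frontier and len(seen) <= max_pages:
        url = frontier.pop(0)
        text, links = fetch(url)
        if relevant(text):           # classifier gate: off-topic pages
            harvest.append(url)      # are neither kept nor expanded
            for link in links:
                if link not in seen:
                    seen.add(link)
                    frontier.append(link)
    return harvest
```

The harvest rate reported in the paper corresponds to the fraction of fetched pages that end up in `harvest`.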

2022

  • Markus Bayer, Marc-André Kaufhold, Björn Buchhold, Marcel Keller, Jörg Dallmeyer, Christian Reuter (2022)
    Data Augmentation in Natural Language Processing: A Novel Text Generation Approach for Long and Short Text Classifiers
    International Journal of Machine Learning and Cybernetics (IJMLC). doi:10.1007/s13042-022-01553-3
    [BibTeX] [Abstract] [Download PDF]

    In many cases of machine learning, research suggests that the development of training data might have a higher relevance than the choice and modelling of classifiers themselves. Thus, data augmentation methods have been developed to improve classifiers by artificially created training data. In NLP, there is the challenge of establishing universal rules for text transformations which provide new linguistic patterns. In this paper, we present and evaluate a text generation method suitable to increase the performance of classifiers for long and short texts. We achieved promising improvements when evaluating short as well as long text tasks with the enhancement by our text generation method. Especially with regard to small data analytics, additive accuracy gains of up to 15.53\% and 3.56\% are achieved within a constructed low data regime, compared to the no augmentation baseline and another data augmentation technique. As the current track of these constructed regimes is not universally applicable, we also show major improvements in several real world low data tasks (up to +4.84 F1-score). Since we are evaluating the method from many perspectives (in total 11 datasets), we also observe situations where the method might not be suitable. We discuss implications and patterns for the successful application of our approach on different types of datasets.

    @article{bayer_data_2022,
    title = {Data {Augmentation} in {Natural} {Language} {Processing}: {A} {Novel} {Text} {Generation} {Approach} for {Long} and {Short} {Text} {Classifiers}},
    url = {https://link.springer.com/article/10.1007/s13042-022-01553-3},
    doi = {10.1007/s13042-022-01553-3},
    abstract = {In many cases of machine learning, research suggests that the development of training data might have a higher relevance than the choice and modelling of classifiers themselves. Thus, data augmentation methods have been developed to improve classifiers by artificially created training data. In NLP, there is the challenge of establishing universal rules for text transformations which provide new linguistic patterns. In this paper, we present and evaluate a text generation method suitable to increase the performance of classifiers for long and short texts. We achieved promising improvements when evaluating short as well as long text tasks with the enhancement by our text generation method. Especially with regard to small data analytics, additive accuracy gains of up to 15.53\% and 3.56\% are achieved within a constructed low data regime, compared to the no augmentation baseline and another data augmentation technique. As the current track of these constructed regimes is not universally applicable, we also show major improvements in several real world low data tasks (up to +4.84 F1-score). Since we are evaluating the method from many perspectives (in total 11 datasets), we also observe situations where the method might not be suitable. We discuss implications and patterns for the successful application of our approach on different types of datasets.},
    journal = {International Journal of Machine Learning and Cybernetics (IJMLC)},
    author = {Bayer, Markus and Kaufhold, Marc-André and Buchhold, Björn and Keller, Marcel and Dallmeyer, Jörg and Reuter, Christian},
    year = {2022},
    keywords = {Student, Security, Projekt-CYWARN, Projekt-emergenCITY, A-Paper, Ranking-ImpactFactor},
    }
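The generate-and-filter idea behind this augmentation approach can be sketched as follows. This is an illustrative toy, not the paper's implementation: the paper fine-tunes a language model for generation, whereas `generate_candidates` below merely permutes sentences as a stand-in generator, and the `keep` filter is a placeholder for any quality filtering.

```python
import random

def generate_candidates(text, n, rng):
    # Stand-in generator: the paper uses a fine-tuned language model;
    # here we simulate "new" texts by permuting the sentence order.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    candidates = []
    for _ in range(n):
        perm = sentences[:]
        rng.shuffle(perm)
        candidates.append(". ".join(perm) + ".")
    return candidates

def augment_dataset(dataset, per_example=2, keep=lambda c: True, seed=0):
    # Generate candidates per labelled example, keep those passing the
    # quality filter, and append them with the original label.
    rng = random.Random(seed)
    augmented = list(dataset)
    for text, label in dataset:
        for cand in generate_candidates(text, per_example, rng):
            if keep(cand):
                augmented.append((cand, label))
    return augmented

data = [("The server was attacked. Logs were deleted.", "incident")]
bigger = augment_dataset(data, per_example=2)
```

With one labelled example and two kept candidates, the augmented set grows from one to three labelled items, illustrating how artificial training data is added without new annotation effort.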

    2021

  • Markus Bayer, Marc-André Kaufhold, Christian Reuter (2021)
    Information Overload in Crisis Management: Bilingual Evaluation of Embedding Models for Clustering Social Media Posts in Emergencies
    Proceedings of the European Conference on Information Systems (ECIS).
    [BibTeX] [Abstract] [Download PDF]

    Past studies in the domains of information systems have analysed the potentials and barriers of social media in emergencies. While information disseminated in social media can lead to valuable insights, emergency services and researchers face the challenge of information overload as data quickly exceeds the manageable amount. We propose an embedding-based clustering approach and a method for the automated labelling of clusters. Given that the clustering quality is highly dependent on embeddings, we evaluate 19 embedding models with respect to time, internal cluster quality, and language invariance. The results show that it may be sensible to use embedding models that were already trained on other crisis datasets. However, one must ensure that the training data generalizes enough, so that the clustering can adapt to new situations. Confirming this, we found out that some embeddings were not able to perform as well on a German dataset as on an English dataset.

    @inproceedings{bayer_information_2021,
    title = {Information {Overload} in {Crisis} {Management}: {Bilingual} {Evaluation} of {Embedding} {Models} for {Clustering} {Social} {Media} {Posts} in {Emergencies}},
    url = {https://peasec.de/paper/2021/2021_BayerKaufholdReuter_InformationOverloadInCrisisManagementBilingualEvaluation_ECIS.pdf},
    abstract = {Past studies in the domains of information systems have analysed the potentials and barriers of social media in emergencies. While information disseminated in social media can lead to valuable insights, emergency services and researchers face the challenge of information overload as data quickly exceeds the manageable amount. We propose an embedding-based clustering approach and a method for the automated labelling of clusters. Given that the clustering quality is highly dependent on embeddings, we evaluate 19 embedding models with respect to time, internal cluster quality, and language invariance. The results show that it may be sensible to use embedding models that were already trained on other crisis datasets. However, one must ensure that the training data generalizes enough, so that the clustering can adapt to new situations. Confirming this, we found out that some embeddings were not able to perform as well on a German dataset as on an English dataset.},
    booktitle = {Proceedings of the {European} {Conference} on {Information} {Systems} ({ECIS})},
    author = {Bayer, Markus and Kaufhold, Marc-André and Reuter, Christian},
    year = {2021},
    keywords = {Crisis, Projekt-CYWARN, Projekt-ATHENE-SecUrban, A-Paper, Ranking-CORE-A, SocialMedia},
    pages = {1--18},
    }
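A minimal sketch of embedding-based clustering with automated cluster labelling, in the spirit of the approach above. The toy 2-D vectors, fixed centroids, and most-frequent-token labelling below are illustrative simplifications, not the 19 evaluated embedding models or the paper's labelling method.

```python
import math
from collections import Counter

def nearest(vec, centroids):
    # Index of the centroid closest to the embedding vector.
    return min(range(len(centroids)),
               key=lambda i: math.dist(vec, centroids[i]))

def cluster_and_label(posts, embeddings, centroids):
    # Assign each post to its nearest centroid; label each cluster with
    # its most frequent token as a crude automated cluster label.
    clusters = {i: [] for i in range(len(centroids))}
    for post, emb in zip(posts, embeddings):
        clusters[nearest(emb, centroids)].append(post)
    labels = {}
    for i, members in clusters.items():
        tokens = [t.lower() for p in members for t in p.split()]
        labels[i] = Counter(tokens).most_common(1)[0][0] if tokens else ""
    return clusters, labels

posts = ["flood in town", "flood warning issued", "fire spreading fast"]
embeddings = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
clusters, labels = cluster_and_label(posts, embeddings,
                                     [(0.0, 0.5), (5.0, 5.0)])
```

In a real pipeline the embeddings would come from a sentence-embedding model, and clustering quality hinges on that model, which is exactly what the paper evaluates.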

  • Marc-André Kaufhold, Jennifer Fromm, Thea Riebe, Milad Mirbabaie, Philipp Kuehn, Ali Sercan Basyurt, Markus Bayer, Marc Stöttinger, Kaan Eyilmez, Reinhard Möller, Christoph Fuchß, Stefan Stieglitz, Christian Reuter (2021)
    CYWARN: Strategy and Technology Development for Cross-Platform Cyber Situational Awareness and Actor-Specific Cyber Threat Communication
    Mensch und Computer 2021 – Workshopband Bonn. doi:10.18420/muc2021-mci-ws08-263
    [BibTeX] [Abstract] [Download PDF]

    Despite the merits of digitisation in private and professional spaces, critical infrastructures and societies are increasingly exposed to cyberattacks. Thus, Computer Emergency Response Teams (CERTs) are deployed in many countries and organisations to enhance the preventive and reactive capabilities against cyberattacks. However, their tasks are getting more complex by the increasing amount and varying quality of information disseminated into public channels. Adopting the perspectives of Crisis Informatics and safety-critical Human-Computer Interaction (HCI) and based on both a narrative literature review and group discussions, this paper first outlines the research agenda of the CYWARN project, which seeks to design strategies and technologies for cross-platform cyber situational awareness and actor-specific cyber threat communication. Second, it identifies and elaborates eight research challenges with regard to the monitoring, analysis and communication of cyber threats in CERTs, which serve as a starting point for in-depth research within the project.

    @inproceedings{kaufhold_cywarn_2021,
    address = {Bonn},
    series = {Mensch und {Computer} 2021 - {Workshopband}},
    title = {{CYWARN}: {Strategy} and {Technology} {Development} for {Cross}-{Platform} {Cyber} {Situational} {Awareness} and {Actor}-{Specific} {Cyber} {Threat} {Communication}},
    url = {https://dl.gi.de/server/api/core/bitstreams/8f470f6b-5050-4fb9-b923-d08cf84c17b7/content},
    doi = {10.18420/muc2021-mci-ws08-263},
    abstract = {Despite the merits of digitisation in private and professional spaces, critical infrastructures and societies are increasingly exposed to cyberattacks. Thus, Computer Emergency Response Teams (CERTs) are deployed in many countries and organisations to enhance the preventive and reactive capabilities against cyberattacks. However, their tasks are getting more complex by the increasing amount and varying quality of information disseminated into public channels. Adopting the perspectives of Crisis Informatics and safety-critical Human-Computer Interaction (HCI) and based on both a narrative literature review and group discussions, this paper first outlines the research agenda of the CYWARN project, which seeks to design strategies and technologies for cross-platform cyber situational awareness and actor-specific cyber threat communication. Second, it identifies and elaborates eight research challenges with regard to the monitoring, analysis and communication of cyber threats in CERTs, which serve as a starting point for in-depth research within the project.},
    booktitle = {Mensch und {Computer} 2021 - {Workshopband}},
    publisher = {Gesellschaft für Informatik},
    author = {Kaufhold, Marc-André and Fromm, Jennifer and Riebe, Thea and Mirbabaie, Milad and Kuehn, Philipp and Basyurt, Ali Sercan and Bayer, Markus and Stöttinger, Marc and Eyilmez, Kaan and Möller, Reinhard and Fuchß, Christoph and Stieglitz, Stefan and Reuter, Christian},
    year = {2021},
    keywords = {Security, Projekt-CYWARN},
    }

  • Marc-André Kaufhold, Markus Bayer, Daniel Hartung, Christian Reuter (2021)
    Design and Evaluation of Deep Learning Models for Real-Time Credibility Assessment in Twitter
    30th International Conference on Artificial Neural Networks (ICANN2021) Bratislava. doi:10.1007/978-3-030-86383-8_32
    [BibTeX] [Abstract] [Download PDF]

    Social media have an enormous impact on modern life but are prone to the dissemination of false information. In several domains, such as crisis management or political communication, it is of utmost importance to detect false and to promote credible information. Although educational measures might help individuals to detect false information, the sheer volume of social big data, which sometimes need to be analysed under time-critical constraints, calls for automated and (near) real-time assessment methods. Hence, this paper reviews existing approaches before designing and evaluating three deep learning models (MLP, RNN, BERT) for real-time credibility assessment using the example of Twitter posts. While our BERT implementation achieved best results with an accuracy of up to 87.07\% and an F1 score of 0.8764 when using metadata, text, and user features, MLP and RNN showed lower classification quality but better performance for real-time application. Furthermore, the paper contributes with a novel dataset for credibility assessment.

    @inproceedings{kaufhold_design_2021,
    address = {Bratislava},
    title = {Design and {Evaluation} of {Deep} {Learning} {Models} for {Real}-{Time} {Credibility} {Assessment} in {Twitter}},
    url = {https://peasec.de/paper/2021/2021_KaufholdBayerHartungReuter_DeepLearningCredibilityAssessmentTwitter_ICANN.pdf},
    doi = {10.1007/978-3-030-86383-8_32},
    abstract = {Social media have an enormous impact on modern life but are prone to the dissemination of false information. In several domains, such as crisis management or political communication, it is of utmost importance to detect false and to promote credible information. Although educational measures might help individuals to detect false information, the sheer volume of social big data, which sometimes need to be analysed under time-critical constraints, calls for automated and (near) real-time assessment methods. Hence, this paper reviews existing approaches before designing and evaluating three deep learning models (MLP, RNN, BERT) for real-time credibility assessment using the example of Twitter posts. While our BERT implementation achieved best results with an accuracy of up to 87.07\% and an F1 score of 0.8764 when using metadata, text, and user features, MLP and RNN showed lower classification quality but better performance for real-time application. Furthermore, the paper contributes with a novel dataset for credibility assessment.},
    booktitle = {30th {International} {Conference} on {Artificial} {Neural} {Networks} ({ICANN2021})},
    author = {Kaufhold, Marc-André and Bayer, Markus and Hartung, Daniel and Reuter, Christian},
    year = {2021},
    keywords = {Student, Security, Projekt-CYWARN, Projekt-ATHENE-SecUrban, Ranking-CORE-B},
    pages = {1--13},
    }
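The feature-combination idea (metadata, text, and user features feeding a classifier) can be sketched with a hand-weighted logistic score. The features and weights below are invented for the demo and bear no relation to the trained MLP/RNN/BERT models from the paper.

```python
import math

def features(tweet):
    # Metadata, user, and text cues; this feature set is made up for
    # illustration, not taken from the paper.
    text = tweet["text"]
    caps_ratio = sum(c.isupper() for c in text) / max(len(text), 1)
    return [
        1.0 if tweet["verified"] else 0.0,   # user feature
        math.log1p(tweet["followers"]),      # metadata feature
        caps_ratio,                          # text feature
        float(text.count("!")),              # text feature
    ]

def credibility_score(tweet, weights=(1.2, 0.3, -2.0, -0.5), bias=-1.0):
    # Logistic score in (0, 1); hand-set weights, not trained parameters.
    z = bias + sum(w * f for w, f in zip(weights, features(tweet)))
    return 1.0 / (1.0 + math.exp(-z))

credible = {"text": "Official update on the incident.",
            "verified": True, "followers": 10000}
dubious = {"text": "FAKE NEWS!!! EVERYONE PANIC!!!",
           "verified": False, "followers": 10}
```

A trained model would learn such weights from labelled data; the point of the sketch is only how heterogeneous feature types are combined into a single credibility estimate.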

  • Philipp Kuehn, Markus Bayer, Marc Wendelborn, Christian Reuter (2021)
    OVANA: An Approach to Analyze and Improve the Information Quality of Vulnerability Databases
    Proceedings of the 16th International Conference on Availability, Reliability and Security (ARES 2021). doi:10.1145/3465481.3465744
    [BibTeX] [Abstract] [Download PDF]

    Vulnerability databases are one of the main information sources for IT security experts. Hence, the quality of their information is of utmost importance for anyone working in this area. Previous work has shown that machine readable information is either missing, incorrect, or inconsistent with other data sources. In this paper, we introduce a system called Overt Vulnerability source ANAlysis (OVANA), utilizing state-of-the-art machine learning (ML) and natural-language processing (NLP) techniques, which analyzes the information quality (IQ) of vulnerability databases, searches the free-form description for relevant information missing from structured fields, and updates it accordingly. Our paper shows that OVANA is able to improve the IQ of the National Vulnerability Database by 51.23\% based on the indicators of accuracy, completeness, and uniqueness. Moreover, we present information which should be incorporated into the structured fields to increase the uniqueness of vulnerability entries and improve the discriminability of different vulnerability entries. The identified information from OVANA enables a more targeted vulnerability search and provides guidance for IT security experts in finding relevant information in vulnerability descriptions for severity assessment.

    @inproceedings{kuehn_ovana_2021,
    title = {{OVANA}: {An} {Approach} to {Analyze} and {Improve} the {Information} {Quality} of {Vulnerability} {Databases}},
    isbn = {978-1-4503-9051-4},
    url = {https://peasec.de/paper/2021/2021_KuehnBayerWendelbornReuter_OVANAQualityVulnerabilityDatabases_ARES.pdf},
    doi = {10.1145/3465481.3465744},
    abstract = {Vulnerability databases are one of the main information sources for IT security experts. Hence, the quality of their information is of utmost importance for anyone working in this area. Previous work has shown that machine readable information is either missing, incorrect, or inconsistent with other data sources. In this paper, we introduce a system called Overt Vulnerability source ANAlysis (OVANA), utilizing state-of-the-art machine learning (ML) and natural-language processing (NLP) techniques, which analyzes the information quality (IQ) of vulnerability databases, searches the free-form description for relevant information missing from structured fields, and updates it accordingly. Our paper shows that OVANA is able to improve the IQ of the National Vulnerability Database by 51.23\% based on the indicators of accuracy, completeness, and uniqueness. Moreover, we present information which should be incorporated into the structured fields to increase the uniqueness of vulnerability entries and improve the discriminability of different vulnerability entries. The identified information from OVANA enables a more targeted vulnerability search and provides guidance for IT security experts in finding relevant information in vulnerability descriptions for severity assessment.},
    booktitle = {Proceedings of the 16th {International} {Conference} on {Availability}, {Reliability} and {Security} ({ARES} 2021)},
    publisher = {ACM},
    author = {Kuehn, Philipp and Bayer, Markus and Wendelborn, Marc and Reuter, Christian},
    year = {2021},
    keywords = {Peace, Security, Projekt-CYWARN, Projekt-ATHENE-SecUrban, AuswahlPeace, Ranking-CORE-B},
    pages = {1--11},
    }
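The core step of recovering structured fields from free-form vulnerability descriptions can be illustrated with simple regular expressions. OVANA itself uses ML/NLP techniques rather than regexes, so the patterns and field names below are purely a hypothetical sketch.

```python
import re

# Illustrative patterns for fields often buried in free-form CVE text.
PATTERNS = {
    "cve_id": r"CVE-\d{4}-\d{4,7}",
    "version": r"(?:before|through|prior to)\s+([\d.]+)",
    "vector": r"\b(remote|local|network|physical)\b",
}

def extract_fields(description):
    # Pull candidate structured fields out of a vulnerability description;
    # in OVANA these would feed updates to the database's structured fields.
    fields = {}
    for name, pattern in PATTERNS.items():
        m = re.search(pattern, description, flags=re.IGNORECASE)
        if m:
            fields[name] = m.group(1) if m.groups() else m.group(0)
    return fields

desc = ("CVE-2021-0001: A remote attacker can execute code in "
        "ExampleApp before 2.4.1 via crafted packets.")
```

Here `extract_fields(desc)` recovers the CVE identifier, the affected-version bound, and the attack vector, the kind of information whose absence from structured fields hurts the uniqueness and completeness indicators discussed in the paper.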

  • Thea Riebe, Tristan Wirth, Markus Bayer, Philipp Kuehn, Marc-André Kaufhold, Volker Knauthe, Stefan Guthe, Christian Reuter (2021)
    CySecAlert: An Alert Generation System for Cyber Security Events Using Open Source Intelligence Data
    Information and Communications Security (ICICS). doi:10.1007/978-3-030-86890-1_24
    [BibTeX] [Abstract] [Download PDF]

    Receiving relevant information on possible cyber threats, attacks, and data breaches in a timely manner is crucial for early response. The social media platform Twitter hosts an active cyber security community. Their activities are often monitored manually by security experts, such as Computer Emergency Response Teams (CERTs). We thus propose a Twitter-based alert generation system that issues alerts to a system operator as soon as new relevant cyber security related topics emerge. Thereby, our system allows us to monitor user accounts with significantly less workload. Our system applies a supervised classifier, based on active learning, that detects tweets containing relevant information. The results indicate that uncertainty sampling can reduce the amount of manual relevance classification effort and enhance the classifier performance substantially compared to random sampling. Our approach reduces the number of accounts and tweets that are needed for the classifier training, thus making the tool easily and rapidly adaptable to the specific context while also supporting data minimization for Open Source Intelligence (OSINT). Relevant tweets are clustered by a greedy stream clustering algorithm in order to identify significant events. The proposed system is able to work near real-time within the required 15-minute time frame and detects up to 93.8\% of relevant events with a false alert rate of 14.81\%.

    @inproceedings{riebe_cysecalert_2021,
    title = {{CySecAlert}: {An} {Alert} {Generation} {System} for {Cyber} {Security} {Events} {Using} {Open} {Source} {Intelligence} {Data}},
    url = {https://peasec.de/paper/2021/2021_RiebeWirthBayerKuehnKaufholdKnautheGutheReuter_CySecAlertOpenSourceIntelligence_ICICS.pdf},
    doi = {10.1007/978-3-030-86890-1_24},
    abstract = {Receiving relevant information on possible cyber threats, attacks, and data breaches in a timely manner is crucial for early response. The social media platform Twitter hosts an active cyber security community. Their activities are often monitored manually by security experts, such as Computer Emergency Response Teams (CERTs). We thus propose a Twitter-based alert generation system that issues alerts to a system operator as soon as new relevant cyber security related topics emerge. Thereby, our system allows us to monitor user accounts with significantly less workload. Our system applies a supervised classifier, based on active learning, that detects tweets containing relevant information. The results indicate that uncertainty sampling can reduce the amount of manual relevance classification effort and enhance the classifier performance substantially compared to random sampling. Our approach reduces the number of accounts and tweets that are needed for the classifier training, thus making the tool easily and rapidly adaptable to the specific context while also supporting data minimization for Open Source Intelligence (OSINT). Relevant tweets are clustered by a greedy stream clustering algorithm in order to identify significant events. The proposed system is able to work near real-time within the required 15-minute time frame and detects up to 93.8\% of relevant events with a false alert rate of 14.81\%.},
    booktitle = {Information and {Communications} {Security} ({ICICS})},
    author = {Riebe, Thea and Wirth, Tristan and Bayer, Markus and Kuehn, Philipp and Kaufhold, Marc-André and Knauthe, Volker and Guthe, Stefan and Reuter, Christian},
    year = {2021},
    keywords = {Student, Security, UsableSec, Projekt-CYWARN, Projekt-ATHENE-SecUrban, Ranking-CORE-B},
    pages = {429--446},
    }
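The uncertainty-sampling step of the active-learning loop can be sketched as: rank unlabelled tweets by how close the classifier's relevance probability is to 0.5 and hand the most uncertain ones to the annotator. The `toy_proba` model below is a deliberately naive stand-in for a real trained classifier.

```python
def uncertainty_sampling(pool, predict_proba, batch_size=2):
    # Rank unlabelled items by |p - 0.5|; the smallest margins are the
    # items the classifier is least certain about, so labelling them
    # first is expected to help most.
    ranked = sorted(pool, key=lambda item: abs(predict_proba(item) - 0.5))
    return ranked[:batch_size]

def toy_proba(tweet):
    # Stand-in classifier: pretend longer tweets look more "relevant".
    return min(len(tweet) / 40.0, 1.0)

pool = [
    "patch now",
    "new CVE disclosed today",
    "lunch was great",
    "major ransomware campaign hitting hospitals across europe",
]
to_label = uncertainty_sampling(pool, toy_proba, batch_size=2)
```

The tweets the toy model scores near 0.5 are selected for labelling, while confidently scored tweets are skipped, which is how the paper reduces manual relevance classification effort compared to random sampling.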

    2020

  • Marc-André Kaufhold, Markus Bayer, Christian Reuter (2020)
    Rapid relevance classification of social media posts in disasters and emergencies: A system and evaluation featuring active, incremental and online learning
    Information Processing & Management (IPM);57(1):1–32.
    [BibTeX] [Abstract] [Download PDF]

    The research field of crisis informatics examines, amongst others, the potentials and barriers of social media use during disasters and emergencies. Social media allow emergency services to receive valuable information (e.g., eyewitness reports, pictures, or videos) from social media. However, the vast amount of data generated during large-scale incidents can lead to issue of information overload. Research indicates that supervised machine learning techniques are suitable for identifying relevant messages and filter out irrelevant messages, thus mitigating information overload. Still, they require a considerable amount of labeled data, clear criteria for relevance classification, a usable interface to facilitate the labeling process and a mechanism to rapidly deploy retrained classifiers. To overcome these issues, we present (1) a system for social media monitoring, analysis and relevance classification, (2) abstract and precise criteria for relevance classification in social media during disasters and emergencies, (3) the evaluation of a well-performing Random Forest algorithm for relevance classification incorporating metadata from social media into a batch learning approach (e.g., 91.28\%/89.19\% accuracy, 98.3\%/89.6\% precision and 80.4\%/87.5\% recall with a fast training time with feature subset selection on the European floods/BASF SE incident datasets), as well as (4) an approach and preliminary evaluation for relevance classification including active, incremental and online learning to reduce the amount of required labeled data and to correct misclassifications of the algorithm by feedback classification. Using the latter approach, we achieved a well-performing classifier based on the European floods dataset by only requiring a quarter of labeled data compared to the traditional batch learning approach. Despite a lesser effect on the BASF SE incident dataset, still a substantial improvement could be determined.

    @article{kaufhold_rapid_2020,
    title = {Rapid relevance classification of social media posts in disasters and emergencies: {A} system and evaluation featuring active, incremental and online learning},
    volume = {57},
    url = {https://peasec.de/paper/2020/2020_KaufholdBayerReuter_RapidRelevanceClassification_IPM.pdf},
    abstract = {The research field of crisis informatics examines, amongst others, the potentials and barriers of social media use during disasters and emergencies. Social media allow emergency services to receive valuable information (e.g., eyewitness reports, pictures, or videos) from social media. However, the vast amount of data generated during large-scale incidents can lead to issue of information overload. Research indicates that supervised machine learning techniques are suitable for identifying relevant messages and filter out irrelevant messages, thus mitigating information overload. Still, they require a considerable amount of labeled data, clear criteria for relevance classification, a usable interface to facilitate the labeling process and a mechanism to rapidly deploy retrained classifiers. To overcome these issues, we present (1) a system for social media monitoring, analysis and relevance classification, (2) abstract and precise criteria for relevance classification in social media during disasters and emergencies, (3) the evaluation of a well-performing Random Forest algorithm for relevance classification incorporating metadata from social media into a batch learning approach (e.g., 91.28\%/89.19\% accuracy, 98.3\%/89.6\% precision and 80.4\%/87.5\% recall with a fast training time with feature subset selection on the European floods/BASF SE incident datasets), as well as (4) an approach and preliminary evaluation for relevance classification including active, incremental and online learning to reduce the amount of required labeled data and to correct misclassifications of the algorithm by feedback classification. Using the latter approach, we achieved a well-performing classifier based on the European floods dataset by only requiring a quarter of labeled data compared to the traditional batch learning approach. Despite a lesser effect on the BASF SE incident dataset, still a substantial improvement could be determined.},
    number = {1},
    journal = {Information Processing \& Management (IPM)},
    author = {Kaufhold, Marc-André and Bayer, Markus and Reuter, Christian},
    year = {2020},
    keywords = {Crisis, Projekt-emergenCITY, Projekt-ATHENE-SecUrban, A-Paper, AuswahlKaufhold, Ranking-CORE-A, Ranking-ImpactFactor, SocialMedia, Ranking-WKWI-B},
    pages = {1--32},
    }
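The online-learning component (correcting misclassifications through user feedback) can be sketched as a perceptron-style update that only adjusts weights when the feedback label disagrees with the current prediction. This is a generic textbook update, not the system's actual algorithm.

```python
def predict(weights, features):
    # Linear threshold classifier: relevant (1) vs. irrelevant (0).
    return 1 if sum(w * f for w, f in zip(weights, features)) > 0 else 0

def online_update(weights, features, label, lr=0.1):
    # Perceptron-style online step: adjust weights only when the current
    # prediction disagrees with the feedback label, so the classifier is
    # corrected incrementally without full retraining.
    if predict(weights, features) != label:
        sign = 1 if label == 1 else -1
        weights = [w + sign * lr * f for w, f in zip(weights, features)]
    return weights

w = [0.0, 0.0]
# Stream of (feature vector, feedback label) pairs arriving over time.
stream = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 0.0], 1)]
for feats, lab in stream:
    w = online_update(w, feats, lab)
```

After the first misclassified item the weights shift toward the "relevant" class, and later identical items are classified correctly without touching earlier training data, the essence of incremental feedback learning.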

    2018

  • Christian Reuter, Wolfgang Schneider, Daniel Eberz, Markus Bayer, Daniel Hartung, Cemal Kaygusuz (2018)
    Resiliente Digitalisierung der kritischen Infrastruktur Landwirtschaft – mobil, dezentral, ausfallsicher
    Mensch und Computer 2018 – Workshopband Dresden, Germany.
    [BibTeX] [Abstract] [Download PDF]

    Diese Arbeit befasst sich mit der zunehmenden Digitalisierung der kritischen Infrastruktur Ernährungswirtschaft und setzt den Fokus insbesondere auf die dadurch resultierenden informationstechnologischen Folgen bezüglich der Angriffs- und Ausfallsicherheit in der Landwirtschaft und von ihr abhängigen Sektoren. In diesem Kontext wird die Modernisierungen der Landmaschinen und deren Vernetzung sowie das Cloud-Computing in der Landwirtschaft analysiert und zu treffende Maßnahmen bezüglich einer resilienten Struktur erläutert. In vielen Bereichen wird dabei aufgezeigt, dass das Ausfallrisiko der Produktion zugunsten von Vorteilen wie Ertrags- und Qualitätssteigerung vernachlässigt wird.

    @inproceedings{reuter_resiliente_2018,
    address = {Dresden, Germany},
    title = {Resiliente {Digitalisierung} der kritischen {Infrastruktur} {Landwirtschaft} - mobil, dezentral, ausfallsicher},
    url = {https://dl.gi.de/bitstream/handle/20.500.12116/16930/Beitrag_330_final__a.pdf},
    abstract = {Diese Arbeit befasst sich mit der zunehmenden Digitalisierung der kritischen Infrastruktur Ernährungswirtschaft und setzt den Fokus insbesondere auf die dadurch resultierenden informationstechnologischen Folgen bezüglich der Angriffs- und Ausfallsicherheit in der Landwirtschaft und von ihr abhängigen Sektoren. In diesem Kontext wird die Modernisierungen der Landmaschinen und deren Vernetzung sowie das Cloud-Computing in der Landwirtschaft analysiert und zu treffende Maßnahmen bezüglich einer resilienten Struktur erläutert. In vielen Bereichen wird dabei aufgezeigt, dass das Ausfallrisiko der Produktion zugunsten von Vorteilen wie Ertrags- und Qualitätssteigerung vernachlässigt wird.},
    booktitle = {Mensch und {Computer} 2018 - {Workshopband}},
    publisher = {Gesellschaft für Informatik e.V.},
    author = {Reuter, Christian and Schneider, Wolfgang and Eberz, Daniel and Bayer, Markus and Hartung, Daniel and Kaygusuz, Cemal},
    editor = {Dachselt, Raimund and Weber, Gerhard},
    year = {2018},
    keywords = {Student, Projekt-GeoBox, RSF, Crisis, Projekt-KontiKat, Infrastructure, Projekt-MAKI, Projekt-HyServ},
    pages = {623--632},
    }