Research Associate / Doctoral Candidate
Contact: +49 (0) 6151 / 1620943 | biselli(at)peasec.tu-darmstadt.de
Technische Universität Darmstadt, Department of Computer Science,
Science and Technology for Peace and Security (PEASEC)
Pankratiusstraße 2, 64289 Darmstadt
Online profiles: ORCID | Google Scholar
Tom Biselli, M.Sc., is a research associate and doctoral candidate at the Chair of Science and Technology for Peace and Security (PEASEC) in the Department of Computer Science at Technische Universität Darmstadt.
He studied Psychology (B.Sc.) at the Leopold-Franzens-Universität Innsbruck and Psychology with a specialisation in Cognitive-Affective Neuroscience (M.Sc.) at Technische Universität Dresden. Driven by his interest in interdisciplinary topics at the intersection of psychology and computer science, he wrote his master's thesis on brain-computer interfaces in clinical use with stroke patients. In this context, he worked intensively on the analysis of complex data and the possibilities opened up by recent technological developments. The broader societal impact of such modern technologies is now the subject of his research at PEASEC, where he works in the field of human-computer interaction on privacy and disinformation, investigating how technological support for strengthening information sovereignty can be designed in a user-centred way.
Publications
2024
[BibTeX] [Abstract] [Download PDF]
Browser cookies, especially those from third parties, pose a threat to individual privacy. While it is possible in principle to control the number of cookies accepted, this choice is often neither usable nor truly informed. To address this issue, this study used semi-structured interviews (N=19) to identify attitudes and user requirements to develop an alternative personalised cookie banner, which was evaluated in an online experiment (N=157). The cookie banner explanations were tailored to the privacy knowledge of three groups of users: low, medium and high. The online experiment measured cookie choices and perceived usability of the cookie banner across three groups: an experimental group that viewed the novel cookie banner with personalisation (personalised privacy assistant), a control group that viewed the novel cookie banner without personalisation (privacy assistant) and a control group that viewed the standard cookie banner provided by the website. The results indicate that the novel cookie banner (with or without personalisation) generally resulted in significantly fewer accepted cookies and increased usability compared to the standard cookie window. In addition, the personalised cookie banner resulted in significantly fewer accepted cookies and higher usability than the non-personalised cookie banner. These results suggest that tailoring cookie banners to users' privacy knowledge can be an effective approach to empowering users to make informed choices and better protect their privacy.
@article{biselli_supporting_2024,
title = {Supporting {Informed} {Choices} about {Browser} {Cookies}: {The} {Impact} of {Personalised} {Cookie} {Banners}},
url = {https://petsymposium.org/popets/2024/popets-2024-0011.pdf},
doi = {10.56553/popets-2024-0011},
abstract = {Browser cookies, especially those from third parties, pose a threat to individual privacy. While it is possible in principle to control the number of cookies accepted, this choice is often neither usable nor truly informed. To address this issue, this study used semi-structured interviews (N=19) to identify attitudes and user requirements to develop an alternative personalised cookie banner, which was evaluated in an online experiment (N=157). The cookie banner explanations were tailored to the privacy knowledge of three groups of users: low, medium and high. The online experiment measured cookie choices and perceived usability of the cookie banner across three groups: an experimental group that viewed the novel cookie banner with personalisation (personalised privacy assistant), a control group that viewed the novel cookie banner without personalisation (privacy assistant) and a control group that viewed the standard cookie banner provided by the website. The results indicate that the novel cookie banner (with or without personalisation) generally resulted in significantly fewer accepted cookies and increased usability compared to the standard cookie window. In addition, the personalised cookie banner resulted in significantly fewer accepted cookies and higher usability than the non-personalised cookie banner. These results suggest that tailoring cookie banners to users' privacy knowledge can be an effective approach to empowering users to make informed choices and better protect their privacy.},
number = {1},
journal = {Proceedings on Privacy Enhancing Technologies (PoPETs)},
author = {Biselli, Tom and Utz, Laura and Reuter, Christian},
year = {2024},
keywords = {Student, Security, UsableSec, HCI, Projekt-CROSSING, A-Paper, AuswahlUsableSec, Selected, Ranking-CORE-A},
pages = {171--191},
}
[BibTeX] [Abstract] [Download PDF]
In many applications and websites people use in their everyday life, their privacy and data is threatened, e.g., by script tracking during browsing. Although researchers and companies have developed privacy-enhancing technologies (PETs), they are often difficult to use for lay users. In this paper, we conducted a literature review to classify users into different support personas based on their privacy competence and privacy concern. With developers of PETs in mind, support personas were envisioned to facilitate the customization of software according to the support needs of different users. In order to demonstrate the usefulness of support personas and based on workshop sessions with 15 participants, we designed a browser extension which supports users with the issue of script tracking by providing different user interfaces for different support personas. The following qualitative evaluation with 31 participants showed that the developed UI elements worked as intended for the different support personas. Therefore, we conclude the concept of support personas is useful in the development process of usable applications that enhance the privacy of the users while also educating them and thus potentially increasing their privacy literacy.
@article{demuth_support_2024,
title = {Support {Personas}: {A} {Concept} for {Tailored} {Support} of {Users} of {Privacy}-{Enhancing} {Technologies}},
url = {https://petsymposium.org/popets/2024/popets-2024-0142.pdf},
abstract = {In many applications and websites people use in their everyday life, their privacy and data is threatened, e.g., by script tracking during browsing. Although researchers and companies have developed privacy-enhancing technologies (PETs), they are often difficult to use for lay users. In this paper, we conducted a literature review to classify users into different support personas based on their privacy competence and privacy concern. With developers of PETs in mind, support personas were envisioned to facilitate the customization of software according to the support needs of different users. In order to demonstrate the usefulness of support personas and based on workshop sessions with 15 participants, we designed a browser extension which supports users with the issue of script tracking by providing different user interfaces for different support personas. The following qualitative evaluation with 31 participants showed that the developed UI elements worked as intended for the different support personas. Therefore, we conclude the concept of support personas is useful in the development process of usable applications that enhance the privacy of the users while also educating them and thus potentially increasing their privacy literacy.},
number = {4},
journal = {Proceedings on Privacy Enhancing Technologies (PoPETs)},
author = {Demuth, Kilian and Linsner, Sebastian and Biselli, Tom and Kaufhold, Marc-André and Reuter, Christian},
year = {2024},
keywords = {Security, UsableSec, HCI, Projekt-CROSSING, A-Paper, Projekt-ATHENE-PriVis, Ranking-CORE-A},
}
[BibTeX] [Abstract] [Download PDF]
Misinformation poses a recurrent challenge for video-sharing platforms (VSPs) like TikTok. Obtaining user perspectives on digital interventions addressing the need for transparency (e.g., through indicators) is essential. This article offers a thorough examination of the comprehensibility, usefulness, and limitations of an indicator-based intervention from an adolescents’ perspective. This study (N = 39; aged 13-16 years) comprised two qualitative steps: (1) focus group discussions and (2) think-aloud sessions, where participants engaged with a smartphone-app for TikTok. The results offer new insights into how video-based indicators can assist adolescents’ assessments. The intervention received positive feedback, especially for its transparency, and could be applicable to new content. This paper sheds light on how adolescents are expected to be experts while also being prone to video-based misinformation, with limited understanding of an intervention’s limitations. By adopting teenagers’ perspectives, we contribute to HCI research and provide new insights into the chances and limitations of interventions for VSPs.
@inproceedings{hartwig_adolescents_2024,
address = {New York, NY, USA},
series = {{CHI} '24},
title = {From {Adolescents}' {Eyes}: {Assessing} an {Indicator}-{Based} {Intervention} to {Combat} {Misinformation} on {TikTok}},
isbn = {9798400703300},
url = {https://doi.org/10.1145/3613904.3642264},
doi = {10.1145/3613904.3642264},
abstract = {Misinformation poses a recurrent challenge for video-sharing platforms (VSPs) like TikTok. Obtaining user perspectives on digital interventions addressing the need for transparency (e.g., through indicators) is essential. This article offers a thorough examination of the comprehensibility, usefulness, and limitations of an indicator-based intervention from an adolescents’ perspective. This study (N = 39; aged 13-16 years) comprised two qualitative steps: (1) focus group discussions and (2) think-aloud sessions, where participants engaged with a smartphone-app for TikTok. The results offer new insights into how video-based indicators can assist adolescents’ assessments. The intervention received positive feedback, especially for its transparency, and could be applicable to new content. This paper sheds light on how adolescents are expected to be experts while also being prone to video-based misinformation, with limited understanding of an intervention’s limitations. By adopting teenagers’ perspectives, we contribute to HCI research and provide new insights into the chances and limitations of interventions for VSPs.},
booktitle = {Proceedings of the {Conference} on {Human} {Factors} in {Computing} {Systems} ({CHI})},
publisher = {Association for Computing Machinery},
author = {Hartwig, Katrin and Biselli, Tom and Schneider, Franziska and Reuter, Christian},
year = {2024},
keywords = {Security, UsableSec, HCI, A-Paper, Ranking-CORE-A*, Selected, AuswahlCrisis, Projekt-ATHENE-PriVis, Projekt-NEBULA},
}
[BibTeX] [Abstract] [Download PDF]
The spread of false and misleading information, particularly via social media platforms such as TikTok, Twitter and Facebook, is becoming increasingly relevant in security-critical situations. Especially in the context of Russia's war of aggression against Ukraine, such platforms play a particular role: fake videos or content presented in a false temporal context can go viral within a very short time, carrying the potential for uncertainty and opinion manipulation. Not only intentionally misleading information is problematic here, but unintentionally misleading information as well. The goal of the interdisciplinary BMBF project NEBULA (duration: 1 July 2022 to 30 June 2025) is the transparent, AI-based detection of false and misleading information in security-critical situations as well as the target-group-oriented presentation of the detection results to foster media literacy. The user-centred approaches address both authorities and organisations with security responsibilities (BOS), supporting accurate situational assessment and crisis communication, and vulnerable groups, through the participatory development of technical support tools. Within the project, demonstrators in the form of smartphone apps, browser plugins and web applications are being developed to enable individuals and authorities to critically assess false and misleading information on their own and to acquire strategies for contextualising information.
@inproceedings{hartwig_nebula_2024,
address = {München},
title = {{NEBULA}: {Nutzerzentrierte} {KI}-basierte {Erkennung} von {Fake} {News} und {Fehlinformationen}},
url = {https://peasec.de/paper/2024/2024_HartwigBiselliSchneiderReuter_NEBULA_BfSTagungsband.pdf},
abstract = {Die Verbreitung falscher und irreführender Informationen – insbesondere über soziale Medien wie TikTok, Twitter, Facebook und Co. – nehmen eine immer größer werdende Relevanz in sicherheitsrelevanten Situationen ein. Gerade im Kontext des russischen Angriffskrieges gegen die Ukraine spielen derartige Plattformen eine besondere Rolle, indem gefälschte Videos oder Inhalte mit falscher zeitlicher Einordnung in kürzester Zeit viral gehen und somit das Potential für Verunsicherung und Meinungsmanipulation bergen. Problematisch sind dabei nicht nur absichtliche, sondern auch unabsichtlich irreführende Informationen. Ziel des interdisziplinären BMBF-Projekts NEBULA (Laufzeit: 1.7.2022-30.6.2025) ist die transparente, KI-basierte Erkennung von Falsch- und Fehlinformationen in sicherheitsrelevanten Situationen sowie die zielgruppengerechte Darstellung der Detektionsergebnisse zur Förderung der Medienkompetenz. Die nutzerzentrierten Ansätze adressieren dabei sowohl Behörden und Organisationen mit Sicherheitsaufgaben (BOS) in der akkuraten Lagebilderstellung und Krisenkommunikation, als auch vulnerable Personengruppen durch partizipative Entwicklung von technischen Unterstützungswerkzeugen. Innerhalb des Projekts entstehen Demonstratoren in Form von Smartphone-Apps, Browser-Plugins und Webanwendungen, um Einzelpersonen und Behörden dazu zu befähigen, Falsch- und Fehlinformationen eigenständig kritisch zu reflektieren und Umgangsstrategien zur Informationseinordnung anzueignen.},
booktitle = {Aktuelle {Themen} und {Herausforderungen} behördlicher {Risikokommunikation} - {Tagungsband}},
publisher = {Bundesamt für Strahlenschutz},
author = {Hartwig, Katrin and Biselli, Tom and Schneider, Franziska and Reuter, Christian},
year = {2024},
}
[BibTeX] [Abstract] [Download PDF]
Recent crises like the COVID-19 pandemic provoked an increasing appearance of misleading information, emphasizing the need for effective user-centered countermeasures as an important field in HCI research. This work investigates how content-specific user-centered indicators can contribute to an informed approach to misleading information. In a threefold study, we conducted an in-depth content analysis of 2,382 German tweets on Twitter (now X) to identify topical (e.g., 5G), formal (e.g., links), and rhetorical (e.g., sarcasm) characteristics through manual coding, followed by a qualitative online survey to evaluate which indicators users already use autonomously to assess a tweet’s credibility. Subsequently, in a think-aloud study participants qualitatively evaluated the identified indicators in terms of perceived comprehensibility and usefulness. While a number of indicators were found to be particularly comprehensible and useful (e.g., claim for absolute truth and rhetorical questions), our findings reveal limitations of indicator-based interventions, particularly for people with entrenched conspiracy theory views. We derive four implications for digitally supporting users in dealing with misleading information, especially during crises.
@article{hartwig_misleading_2024,
title = {Misleading {Information} in {Crises}: {Exploring} {Content}-specific {Indicators} on {Twitter} from a {User} {Perspective}},
issn = {0144-929X},
url = {https://doi.org/10.1080/0144929X.2024.2373166},
doi = {10.1080/0144929X.2024.2373166},
abstract = {Recent crises like the COVID-19 pandemic provoked an increasing appearance of misleading information, emphasizing the need for effective user-centered countermeasures as an important field in HCI research. This work investigates how content-specific user-centered indicators can contribute to an informed approach to misleading information. In a threefold study, we conducted an in-depth content analysis of 2,382 German tweets on Twitter (now X) to identify topical (e.g., 5G), formal (e.g., links), and rhetorical (e.g., sarcasm) characteristics through manual coding, followed by a qualitative online survey to evaluate which indicators users already use autonomously to assess a tweet’s credibility. Subsequently, in a think-aloud study participants qualitatively evaluated the identified indicators in terms of perceived comprehensibility and usefulness. While a number of indicators were found to be particularly comprehensible and useful (e.g., claim for absolute truth and rhetorical questions), our findings reveal limitations of indicator-based interventions, particularly for people with entrenched conspiracy theory views. We derive four implications for digitally supporting users in dealing with misleading information, especially during crises.},
journal = {Behaviour \& Information Technology (BIT)},
author = {Hartwig, Katrin and Schmid, Stefka and Biselli, Tom and Pleil, Helene and Reuter, Christian},
year = {2024},
keywords = {Crisis, HCI, A-Paper, Projekt-ATHENE-PriVis, Projekt-NEBULA, Ranking-CORE-A, Ranking-ImpactFactor},
pages = {1--34},
}
[BibTeX] [Download PDF]
@techreport{reuter_informatik_2024,
institution = {FIfF-Kommunikation},
title = {Informatik für den {Frieden}: {Perspektive} von {PEASEC} zu 40 {Jahren} {FIfF}},
url = {https://peasec.de/paper/2024/2024_Reuteretal_InformatikFuerFrieden_fiff.pdf},
author = {Reuter, Christian and Franken, Jonas and Reinhold, Thomas and Kuehn, Philipp and Kaufhold, Marc-André and Riebe, Thea and Hartwig, Katrin and Biselli, Tom and Schmid, Stefka and Guntrum, Laura and Haesler, Steffen},
year = {2024},
keywords = {Peace, Security},
}
2023
[BibTeX] [Abstract] [Download PDF]
The value of social media in crises, disasters, and emergencies across different events, participants, and states is now well-examined in crisis informatics research. Previous research has contributed to the state of the art with empirical insights on the use of social media, approaches for the gathering and processing of big social data, the design and evaluation of information systems, and the analysis of cumulative and longitudinal data. While some studies examined social media use representatively for their target audience, these usually only comprise a single point of inquiry and do not allow for a trend analysis. This work provides results (1) of a representative survey with German citizens from 2021 on use patterns, perceptions, and expectations regarding social media during emergencies. Furthermore, it (2) compares these results to previous surveys and provides insights on temporal changes and trends from 2017, over 2019 to 2021. Our findings highlight that social media use in emergencies increased in 2021 and 2019 compared to 2017. Between 2019 and 2021, the amount of information shared on social media remained on a similar level, while the perceived disadvantages of social media in emergencies significantly increased. In light of demographic variables, the results of the 2021 survey confirm previous findings, according to which older individuals (45+ years) use social media in emergencies less often than younger individuals (18-24 years). Furthermore, while the quicker availability of information was one of the reasons for social media use, especially the potential information overload was a key factor for not using social media in emergencies. The results are discussed in light of the dynamic nature of attitudes regarding social media in emergencies and the need to account for heterogeneity in user expectations to build trustworthy information ecosystems in social media.
@article{reuter_increasing_2023,
title = {Increasing {Adoption} {Despite} {Perceived} {Limitations} of {Social} {Media} in {Emergencies}: {Representative} {Insights} on {German} {Citizens}’ {Perception} and {Trends} from 2017 to 2021},
volume = {96},
issn = {2212-4209},
url = {https://peasec.de/paper/2023/2023_ReuterKaufholdBiselliPleil_SocialMediaEmergenciesSurvey_IJDRR.pdf},
doi = {10.1016/j.ijdrr.2023.103880},
abstract = {The value of social media in crises, disasters, and emergencies across different events, participants, and states is now well-examined in crisis informatics research. Previous research has contributed to the state of the art with empirical insights on the use of social media, approaches for the gathering and processing of big social data, the design and evaluation of information systems, and the analysis of cumulative and longitudinal data. While some studies examined social media use representatively for their target audience, these usually only comprise a single point of inquiry and do not allow for a trend analysis. This work provides results (1) of a representative survey with German citizens from 2021 on use patterns, perceptions, and expectations regarding social media during emergencies. Furthermore, it (2) compares these results to previous surveys and provides insights on temporal changes and trends from 2017, over 2019 to 2021. Our findings highlight that social media use in emergencies increased in 2021 and 2019 compared to 2017. Between 2019 and 2021, the amount of information shared on social media remained on a similar level, while the perceived disadvantages of social media in emergencies significantly increased. In light of demographic variables, the results of the 2021 survey confirm previous findings, according to which older individuals (45+ years) use social media in emergencies less often than younger individuals (18-24 years). Furthermore, while the quicker availability of information was one of the reasons for social media use, especially the potential information overload was a key factor for not using social media in emergencies. The results are discussed in light of the dynamic nature of attitudes regarding social media in emergencies and the need to account for heterogeneity in user expectations to build trustworthy information ecosystems in social media.},
journal = {International Journal of Disaster Risk Reduction (IJDRR)},
author = {Reuter, Christian and Kaufhold, Marc-André and Biselli, Tom and Pleil, Helene},
year = {2023},
keywords = {Student, Crisis, Projekt-emergenCITY, Projekt-CYLENCE, A-Paper, AuswahlCrisis, Projekt-NEBULA, Ranking-ImpactFactor, SocialMedia},
}
[BibTeX] [Abstract] [Download PDF]
The use of Open Source Intelligence (OSINT) to monitor and detect cybersecurity threats is gaining popularity among Cybersecurity Emergency or Incident Response Teams (CERTs/CSIRTs). They increasingly use semi-automated OSINT approaches when monitoring cyber threats for public infrastructure services and incident response. Most of the systems use publicly available data, often focusing on social media due to timely data for situational assessment. As indirect and affected stakeholders, the acceptance of OSINT systems by users, as well as the conditions which influence the acceptance, are relevant for the development of OSINT systems for cybersecurity. Therefore, as part of the ethical and social technology assessment, we conducted a survey (N=1,093), in which we asked participants about their acceptance of OSINT systems, their perceived need for open source surveillance, as well as their privacy behavior and concerns. Further, we tested if the awareness of OSINT is an interactive factor that affects other factors. Our results indicate that cyber threat perception and the perceived need for OSINT are positively related to acceptance, while privacy concerns are negatively related. The awareness of OSINT, however, has only shown effects on people with higher privacy concerns. Here, particularly high OSINT awareness and limited privacy concerns were associated with higher OSINT acceptance. Lastly, we provide implications for further research and the use of OSINT systems for cybersecurity by authorities. As OSINT is a framework rather than a single technology, approaches can be selected and combined to adhere to data minimization and anonymization as well as to leverage improvements in privacy-preserving computation and machine learning innovations. Regarding the use of OSINT, the results suggest to favor approaches that provide transparency to users regarding the use of the systems and the data they gather.
@article{riebe_privacy_2023,
title = {Privacy {Concerns} and {Acceptance} {Factors} of {OSINT} for {Cybersecurity}: {A} {Representative} {Survey}},
url = {https://petsymposium.org/popets/2023/popets-2023-0028.pdf},
doi = {10.56553/popets-2023-0028},
abstract = {The use of Open Source Intelligence (OSINT) to monitor and detect cybersecurity threats is gaining popularity among Cybersecurity Emergency or Incident Response Teams (CERTs/CSIRTs). They increasingly use semi-automated OSINT approaches when monitoring cyber threats for public infrastructure services and incident response. Most of the systems use publicly available data, often focusing on social media due to timely data for situational assessment. As indirect and affected stakeholders, the acceptance of OSINT systems by users, as well as the conditions which influence the acceptance, are relevant for the development of OSINT systems for cybersecurity. Therefore, as part of the ethical and social technology assessment, we conducted a survey (N=1,093), in which we asked participants about their acceptance of OSINT systems, their perceived need for open source surveillance, as well as their privacy behavior and concerns. Further, we tested if the awareness of OSINT is an interactive factor that affects other factors. Our results indicate that cyber threat perception and the perceived need for OSINT are positively related to acceptance, while privacy concerns are negatively related. The awareness of OSINT, however, has only shown effects on people with higher privacy concerns. Here, particularly high OSINT awareness and limited privacy concerns were associated with higher OSINT acceptance. Lastly, we provide implications for further research and the use of OSINT systems for cybersecurity by authorities. As OSINT is a framework rather than a single technology, approaches can be selected and combined to adhere to data minimization and anonymization as well as to leverage improvements in privacy-preserving computation and machine learning innovations. Regarding the use of OSINT, the results suggest to favor approaches that provide transparency to users regarding the use of the systems and the data they gather.},
number = {1},
journal = {Proceedings on Privacy Enhancing Technologies (PoPETs)},
author = {Riebe, Thea and Biselli, Tom and Kaufhold, Marc-André and Reuter, Christian},
year = {2023},
keywords = {Security, UsableSec, HCI, Projekt-ATHENE-FANCY, Projekt-CYWARN, A-Paper, AuswahlUsableSec, Ranking-CORE-A},
pages = {477--493},
}
[BibTeX] [Abstract] [Download PDF]
When considering privacy, context, and environmental circumstances can have a strong influence on individual decisions and user behavior. Especially in crises or threatening situations, privacy may conflict with other values, such as personal safety and health. In other cases, personal or public safety can also be dependent on privacy: the context of flight shows how, for those affected, the value of data protection can increase as a result of an increased threat situation. Thus, when individual sovereignty—the autonomous development of one’s own will—or safety is highly dependent on information flows, people tend to be more protective of their privacy in order to maintain their information sovereignty. But also, the context of agriculture, as part of the critical infrastructure, shows how privacy concerns can affect the adoption of digital tools. With these two examples, flight and migration as well as agriculture, this chapter presents some exemplary results that illustrate the importance of the influence of situational factors on perceived information sovereignty and the evaluation of privacy.
@incollection{steinbrink_privacy_2023,
address = {Cham},
title = {Privacy {Perception} and {Behaviour} in {Safety}-{Critical} {Environments}},
isbn = {978-3-031-28643-8},
url = {https://doi.org/10.1007/978-3-031-28643-8_12},
abstract = {When considering privacy, context, and environmental circumstances can have a strong influence on individual decisions and user behavior. Especially in crises or threatening situations, privacy may conflict with other values, such as personal safety and health. In other cases, personal or public safety can also be dependent on privacy: the context of flight shows how, for those affected, the value of data protection can increase as a result of an increased threat situation. Thus, when individual sovereignty—the autonomous development of one’s own will—or safety is highly dependent on information flows, people tend to be more protective of their privacy in order to maintain their information sovereignty. But also, the context of agriculture, as part of the critical infrastructure, shows how privacy concerns can affect the adoption of digital tools. With these two examples, flight and migration as well as agriculture, this chapter presents some exemplary results that illustrate the importance of the influence of situational factors on perceived information sovereignty and the evaluation of privacy.},
booktitle = {Human {Factors} in {Privacy} {Research}},
publisher = {Springer International Publishing},
author = {Steinbrink, Enno and Biselli, Tom and Linsner, Sebastian and Herbert, Franziska and Reuter, Christian},
editor = {Gerber, Nina and Stöver, Alina and Marky, Karola},
year = {2023},
keywords = {Security, UsableSec, HCI, Projekt-ATHENE-FANCY, Projekt-CROSSING, Projekt-GRKPrivacy},
pages = {237--251},
}
2022
[BibTeX] [Abstract] [Download PDF]
Concise instruments to determine privacy personas – typical privacy-related user groups – are not available at present. Consequently, we aimed to identify them on a privacy knowledge–privacy behavior ratio based on a self-developed instrument. To achieve this, we conducted an item analysis (N = 820) and a confirmatory factor analysis (CFA) (N = 656) of data based on an online study with German participants. Starting with 81 items, we reduced those to an eleven-item questionnaire with the two scales privacy knowledge and privacy behavior. A subsequent cluster analysis (N = 656) revealed three distinct user groups: (1) Fundamentalists scoring high in privacy knowledge and behavior, (2) Pragmatists scoring average in privacy knowledge and behavior and (3) Unconcerned scoring low in privacy knowledge and behavior. In a closer inspection of the questionnaire, the CFAs supported the model with a close global fit based on RMSEA in a training and to a lesser extent in a cross-validation sample. Deficient local fit as well as validity and reliability coefficients well below generally accepted thresholds, however, revealed that the questionnaire in its current form cannot be considered a suitable measurement instrument for determining privacy personas. The results are discussed in terms of related persona conceptualizations, the importance of a methodologically sound investigation of corresponding privacy dimensions and our lessons learned.
@article{biselli_challenges_2022,
title = {On the {Challenges} of {Developing} a {Concise} {Questionnaire} to {Identify} {Privacy} {Personas}},
url = {https://petsymposium.org/2022/files/papers/issue4/popets-2022-0126.pdf},
doi = {10.56553/popets-2022-0126},
abstract = {Concise instruments to determine privacy personas – typical privacy-related user groups – are not available at present. Consequently, we aimed to identify them on a privacy knowledge–privacy behavior ratio based on a self-developed instrument. To achieve this, we conducted an item analysis (N = 820) and a confirmatory factor analysis (CFA) (N = 656) of data based on an online study with German participants. Starting with 81 items, we reduced those to an eleven-item questionnaire with the two scales privacy knowledge and privacy behavior. A subsequent cluster analysis (N = 656) revealed three distinct user groups: (1) Fundamentalists scoring high in privacy knowledge and behavior, (2) Pragmatists scoring average in privacy knowledge and behavior and (3) Unconcerned scoring low in privacy knowledge and behavior. In a closer inspection of the questionnaire, the CFAs supported the model with a close global fit based on RMSEA in a training and to a lesser extent in a cross-validation sample. Deficient local fit as well as validity and reliability coefficients well below generally accepted thresholds, however, revealed that the questionnaire in its current form cannot be considered a suitable measurement instrument for determining privacy personas. The results are discussed in terms of related persona conceptualizations, the importance of a methodologically sound investigation of corresponding privacy dimensions and our lessons learned.},
number = {4},
journal = {Proceedings on Privacy Enhancing Technologies (PoPETs)},
author = {Biselli, Tom and Steinbrink, Enno and Herbert, Franziska and Schmidbauer-Wolf, Gina Maria and Reuter, Christian},
year = {2022},
keywords = {Security, UsableSec, HCI, Projekt-ATHENE-FANCY, Projekt-CROSSING, A-Paper, AuswahlUsableSec, Ranking-CORE-A, Projekt-GRKPrivacy},
pages = {645--669},
}
2021
[BibTeX] [Abstract] [Download PDF]
The relevance of adequate privacy and security behavior in the digital space is higher than ever. However, the exact relationship between privacy and security behavior is rarely discussed in the literature. This study investigates this relationship and the role of socio-demographic factors (gender, age, education, political opinions) in such behavior. Exploratory results of a survey of German private users (N=1,219) show that privacy and security behavior are only weakly correlated and not similarly influenced by socio-demographic factors. While security behavior significantly differs between age and education groups (younger and less educated show less security behavior), no such differences exist for privacy behavior. Additionally, political orientation and opinion have no influence on privacy and security behavior. Thus, this study sheds light on the concepts of privacy, security and corresponding behavior and emphasizes the need for a fine-grained differentiation if either privacy or security behavior is to be improved.
@inproceedings{biselli_relationship_2021,
address = {Potsdam, Germany},
title = {On the {Relationship} between {IT} {Privacy} and {Security} {Behavior}: {A} {Survey} among {German} {Private} {Users}},
url = {https://peasec.de/paper/2021/2021_BiselliReuter_RelationshipITPrivacyandSecurityBehavior_WI.pdf},
abstract = {The relevance of adequate privacy and security behavior in the digital space is higher than ever. However, the exact relationship between privacy and security behavior is rarely discussed in the literature. This study investigates this relationship and the role of socio-demographic factors (gender, age, education, political opinions) in such behavior. Exploratory results of a survey of German private users (N=1,219) show that privacy and security behavior are only weakly correlated and not similarly influenced by socio-demographic factors. While security behavior significantly differs between age and education groups (younger and less educated show less security behavior), no such differences exist for privacy behavior. Additionally, political orientation and opinion have no influence on privacy and security behavior. Thus, this study sheds light on the concepts of privacy, security and corresponding behavior and emphasizes the need for a fine-grained differentiation if either privacy or security behavior is to be improved.},
booktitle = {Proceedings of the {International} {Conference} on {Wirtschaftsinformatik} ({WI})},
publisher = {AIS},
author = {Biselli, Tom and Reuter, Christian},
year = {2021},
keywords = {Security, UsableSec, HCI, Projekt-ATHENE-FANCY, Ranking-CORE-C, Ranking-WKWI-A},
pages = {1--17},
}