The UMBC Cyber Defense Lab presents
Asymmetric Responsibility Framing to Deepen Adolescents' Adversarial Reasoning about Phishing
Professor Sanorita Dey
UMBC CSEE Department
12–1 pm Friday, March 6, 2026 via Webex
Adolescents regularly navigate digital environments where persuasive tactics, social engineering, and phishing attempts are embedded in everyday communication. While many can recognize obvious scams, they often struggle to explain why a message is manipulative, how tactics unfold over time, or what protective actions should follow. This gap reflects a limitation not only in knowledge but in adversarial reasoning: the ability to infer intent, anticipate harm, and respond strategically under uncertainty. This project investigates whether asymmetric responsibility framing can deepen adolescents' adversarial reasoning in phishing contexts. We test whether positioning participants as accountable for guiding a vulnerable peer, rather than having them reason independently, reshapes how they analyze and respond to emerging threats. Grounded in theories of accountability and cognitive engagement, we examine how responsibility structures influence the depth and structure of reasoning.
We developed a staged, dual-conversation simulation modeling gradual phishing escalation. Participants were assigned either to a solo condition, in which they independently assessed a suspicious interaction, or to a responsibility condition, in which they advised a "buddy" engaged in an unfolding exchange. This design isolates the effect of responsibility framing from mere content exposure. We measured explanation depth, exploit-identification accuracy, detection timing, and the quality of protective recommendations while accounting for cognitive demand. Findings show that responsibility framing significantly improves explanation quality and protective guidance. These effects persist after controlling for effort and are strongest during gradual escalation, suggesting that accountability reshapes reasoning processes rather than simply increasing engagement. The talk will cover the theoretical framing, experimental design, and implications for AI-mediated cybersecurity education, along with open questions about scaffolding and generalizability to other digital risk domains.
Sanorita Dey is an assistant professor of computer science and electrical engineering at UMBC. Her research sits at the intersection of human-centered AI, STEM education, and ethical computing, with a focus on designing AI systems that meaningfully augment human learning. She develops AI-assisted learning environments that support critical thinking, adversarial reasoning, digital risk awareness, and reflective practice in STEM contexts. Her work emphasizes human-centered design principles, integrating empirical methods, experimental evaluation, and sociotechnical analysis to ensure that AI tools are pedagogically grounded, ethically responsible, and developmentally appropriate. Across projects spanning cybersecurity education, AI-mediated mentorship, and responsible computing, Dr. Dey investigates how interaction design, accountability structures, and scaffolded dialogue can deepen learning outcomes while preserving learner agency. Her scholarship advances equitable and reflective AI integration in K–12 and higher-education STEM environments.
Host: Dr. Alan T. Sherman, sherman@umbc.edu. Support for this event was provided in part by the NSF under SFS grants DGE-1753681 and 2438185.