What is it about?

Through interviews and design sessions with 13 parent-child pairs, we explored how families envision using AI agents for household safety. Families preferred multiple AI agents in caregiving roles (a household manager, a private tutor, and a family therapist) over a single, standalone parental-control AI. Each agent would embed safety features naturally into its role: the household manager screens for scams while handling email, the tutor teaches digital safety skills, and the therapist addresses sensitive issues such as cyberbullying. Crucially, families wanted each agent to maintain its own privacy boundaries, even though all agents operate within one system. We designed a multi-agent system around four privacy-preserving principles: memory segregation (keeping each agent's memories separate), conversational consent (asking permission before sharing data), selective data sharing (sharing only safety-related information), and progressive memory management (retaining only recent data so older details fade over time). Together, these principles balance safety protection with teen autonomy and family privacy.
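
As a minimal illustration of the four principles (this is our sketch, not the system described in the paper; the AgentMemory class, its method names, and the 30-day retention window are all hypothetical), a per-agent memory layer might look like this in Python:

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta
    from typing import Callable

    @dataclass
    class MemoryEntry:
        text: str
        safety_related: bool  # tagged at write time
        created_at: datetime = field(default_factory=datetime.now)

    class AgentMemory:
        """Per-agent store: one instance per agent enforces memory segregation."""

        def __init__(self, agent_name: str,
                     retention: timedelta = timedelta(days=30)):
            self.agent_name = agent_name
            self.retention = retention  # hypothetical retention window
            self._entries: list[MemoryEntry] = []

        def remember(self, text: str, safety_related: bool = False) -> None:
            self._entries.append(MemoryEntry(text, safety_related))

        def prune(self, now: datetime | None = None) -> None:
            """Progressive memory management: keep only recent entries."""
            now = now or datetime.now()
            self._entries = [e for e in self._entries
                             if now - e.created_at <= self.retention]

        def share_with(self, other: "AgentMemory",
                       ask_consent: Callable[[str], bool]) -> int:
            """Consent-gated sharing between agents.

            Only safety-related entries are candidates (selective data
            sharing), and each one requires explicit permission
            (conversational consent). Returns the number shared.
            """
            shared = 0
            for entry in self._entries:
                if not entry.safety_related:
                    continue  # non-safety data never leaves this agent
                prompt = (f"May the {self.agent_name} share this with the "
                          f"{other.agent_name}? \"{entry.text}\"")
                if ask_consent(prompt):
                    other.remember(entry.text, safety_related=True)
                    shared += 1
            return shared

    # Example: the therapist holds both private and safety-related memories.
    therapist = AgentMemory("family therapist")
    tutor = AgentMemory("private tutor")
    therapist.remember("Teen feels anxious about grades")  # stays private
    therapist.remember("Teen reported cyberbullying on a game chat",
                       safety_related=True)

    # Consent is simulated here with a callback that always agrees.
    therapist.share_with(tutor, ask_consent=lambda prompt: True)

In this sketch, the anxiety note can never leave the therapist agent, while the cyberbullying report reaches the tutor only after explicit consent; a real deployment would ask the teen conversationally rather than through a callback.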

Why is it important?

This work makes three contributions. Theoretically, we extend Family Systems Theory and Communication Privacy Management theory to multi-agent AI contexts, revealing that families treat AI agents as distinct social entities requiring role-specific privacy boundaries. Empirically, we challenge surveillance-based parental-control paradigms by showing that families prefer safety embedded within caregiving roles. Practically, we provide a concrete multi-agent system design built on four privacy-preserving principles. As tech companies deploy AI assistants in homes, this research is timely: it offers both theoretical framing and actionable design guidelines for balancing family privacy, teen autonomy, and safety protection in AI-mediated households.

Perspectives

This research speaks to a critical gap in current AI design: the memory features of GenAI apps don't account for families sharing a single account. When teenagers and parents use the same AI assistant, there are no privacy boundaries; everything shared is potentially accessible to every family member. Through our study, we were struck by how naturally families articulated the need for agent-specific privacy boundaries, even while understanding that the agents operate within one system. They clearly felt that what a teenager shares with an AI therapist shouldn't automatically be known by an AI tutor or accessible to parents. Observing this disconnect between family expectations and current design possibilities motivated our theoretical work extending Communication Privacy Management theory to multi-agent contexts. Developing the four privacy-preserving principles felt like translating families' intuitive wisdom into concrete design guidance. As more families adopt AI assistants with memory capabilities, we hope our work helps developers think carefully about privacy architectures that respect the complex relational dynamics within households.

Zikai Alex Wen
University of Washington

Read the Original

This page is a summary of: Families' Vision of Generative AI Agents for Household Safety Against Digital and Physical Threats, Proceedings of the ACM on Human-Computer Interaction, October 2025, ACM (Association for Computing Machinery). DOI: 10.1145/3757598.
