What is it about?
Generative AI is taking the design world by storm, but can it truly enhance creativity and help designers combat design tunnel vision? This question is particularly relevant in the design of human-robot interaction (HRI), where preconceived notions of what a robot should do, what it should look like, and how it should behave can hinder innovative thinking. In this project, we explored the potential of generative text-to-image models, such as Stable Diffusion and DALL-E 2, to overcome design fixation and enhance creative processes in HRI design. We conducted a four-week design exploration in which we used these models to ideate and visualise robotic artifacts and robot sociotechnical imaginaries. We found that generative text-to-image models can help overcome design fixation and inform divergent and desirable visions of robotic futures, while also surfacing existing assumptions and biases.
Featured Image
Photo by Gerard Siderius on Unsplash
Read the Original
This page is a summary of: Creative AI for HRI Design Explorations, March 2023, ACM (Association for Computing Machinery), DOI: 10.1145/3568294.3580035.
You can read the full text via the DOI above.