What is it about?

As cities grow and change, it's important to design neighborhoods that are safe, comfortable, and enjoyable for walking. This quality is known as "walkability." Urban planners often rely on human feedback and field observations to understand how walkable a place feels, but this process can take a lot of time and resources.

In this study, we explored whether large language models (LLMs), specifically GPT-4o, could evaluate the walkability of urban spaces from images, and how their assessments compare to real people's perceptions. We showed pairs of street images to both human participants and GPT-4o. Participants were asked to decide which area in each pair looked more walkable based on factors like safety, accessibility, comfort, and liveliness. We then asked GPT-4o the same questions using visual input and custom prompts. Our goal was to find out whether an LLM can interpret visual scenes in ways that align with human judgment.

We found that GPT-4o often made similar choices to human respondents, especially when evaluating clear visual features like greenery, lighting, and open walking space. However, it sometimes struggled with the more subtle emotional or social cues that people use to judge walkability. We argue that models like GPT-4o have the potential to support urban planning by providing quick, scalable assessments of walkability. While they may not fully replace human insight, they can be a valuable tool for prioritizing areas for improvement or simulating how design changes might affect people's walking experience. Our findings open new doors for using LLMs to build more people-friendly cities.
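
To make the pairwise setup concrete, below is a minimal sketch of how such a comparison can be posed to GPT-4o through the OpenAI API. This is an illustration only, not the study's actual code: the prompt wording, file names, and helper functions are assumptions.

    import base64
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def encode_image(path):
        # Base64-encode a local image so it can be sent as a data URL.
        with open(path, "rb") as f:
            return base64.b64encode(f.read()).decode("utf-8")

    def compare_walkability(image_a, image_b):
        # Hypothetical prompt; the study used its own custom prompts.
        prompt = ("You will see two street scenes, A then B. Considering safety, "
                  "accessibility, comfort, and liveliness, which scene looks more "
                  "walkable? Answer with 'A' or 'B' and a one-sentence reason.")
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url",
                     "image_url": {"url": "data:image/jpeg;base64," + encode_image(image_a)}},
                    {"type": "image_url",
                     "image_url": {"url": "data:image/jpeg;base64," + encode_image(image_b)}},
                ],
            }],
        )
        return response.choices[0].message.content

    print(compare_walkability("street_a.jpg", "street_b.jpg"))

Repeating this call over many image pairs yields a set of model choices that can be compared directly against the human responses for the same pairs.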

Why is it important?

This study is one of the first to compare LLM assessments of walkability with human perceptions using street images. As cities look for faster, more scalable ways to evaluate urban environments, our work shows that AI models like GPT-4o can help identify walkable areas based on how people actually experience them. This approach could support more inclusive, data-driven city planning and improve the design of public spaces.

Perspectives

As someone deeply invested in the intersection of technology and human-centered design, I found this study a meaningful step in exploring how AI can better reflect real human experiences in urban spaces. I've always believed that cities should be designed not just with data but with empathy, and this research allowed me to test whether powerful language models like GPT-4o could begin to capture that nuance. It was both exciting and surprising to see how closely the model aligned with people's perceptions in many cases. I hope this work sparks further dialogue about using AI not to replace human insight, but to amplify and support it in more inclusive, efficient ways.

Fatemeh Saeidi-Rizi
Michigan State University

Read the Original

This page is a summary of: Urban walkability through different lenses: A comparative study of GPT-4o and human perceptions, PLOS ONE, April 2025, PLOS. DOI: 10.1371/journal.pone.0322078.
You can read the full text, which is open access, via the DOI above.

Contributors

The following have contributed to this page: Fatemeh Saeidi-Rizi, Michigan State University.