What is it about?
Artificial intelligence has rapidly become embedded in everyday life, from online shopping recommendations and health chatbots to digital learning assistants. Since the launch of generative AI tools such as ChatGPT in late 2022, governments and industries have expressed strong optimism, often framing AI as a driver of efficiency and job creation. However, public acceptance is proving more complex than these narratives suggest. Based on a cross-cultural study of 2,327 young AI users across 11 countries in Asia and Africa, the authors identify three main user groups: robophiles (about 42%), who are enthusiastic and open to AI; robophobes (around 13%), who feel anxious or resistant due to concerns such as data security, machine decision-making, and loss of human touch; and a large ambivalent group (about 45%), who appreciate AI’s convenience in some contexts but remain cautious in others, particularly in sensitive areas like health and finance.

The study highlights the growing phenomenon of AI fatigue, driven by cognitive and emotional overload as AI systems proliferate and increasingly mediate daily interactions. While overall attitudes toward AI remain broadly positive, repeated exposure to AI’s limitations, combined with reduced human warmth and empathy in digital services, risks eroding trust over time.

Perceptions of AI also vary by country. Respondents in Malaysia and Ghana tend to anthropomorphize AI more positively, which increases comfort and acceptance. In contrast, users in Indonesia and Turkey are more ambivalent, balancing interest in AI’s capabilities with discomfort and a strong preference for human interaction. These differences are shaped not only by the technology itself, but by social values, trust, and psychological traits such as novelty-seeking and social needs. Although robophobes are a minority, their critical voices can influence broader public opinion, especially through social media.
Concerns about AI reducing critical thinking, producing inaccurate outputs (for example in health diagnoses), and being overused in education are already visible. In Indonesia, while AI use among workers and students is high, overall adoption across society remains relatively low, indicating persistent hesitation and fear among significant segments of the population. The authors conclude that AI should be positioned as a supporting tool rather than a replacement for human roles. To prevent AI fatigue and resistance, AI deployment must be socially sensitive, ethically grounded, and complemented by human-centered approaches that preserve empathy, trust, and cultural values.
Why is it important?
It’s important because AI adoption is not just a technical issue; it’s a human one.

First, people don’t automatically trust useful technology. Even when AI makes life easier, many users feel uneasy about data privacy, errors, and the loss of human judgment. If this hesitation is ignored, AI tools may be underused, misused, or quietly rejected, especially in sensitive areas like health, education, and finance.

Second, AI fatigue can slow or reverse adoption. When people feel overwhelmed by too many AI tools, constant automation, or impersonal interactions, they disengage. This means investments in AI can fail to deliver real value, not because the technology is weak, but because users are tired or emotionally resistant.

Third, small groups of critics can shape wider public opinion. Even though strong AI opponents are a minority, their concerns spread easily through social media and can influence the large group of undecided users. This can quickly turn caution into resistance.

Fourth, overreliance on AI carries real risks. Evidence already points to downsides such as reduced critical thinking, overtrust in automated decisions, and errors in areas like medical advice. Understanding public discomfort helps prevent harmful or irresponsible use.

Fifth, culture and social values matter. Acceptance of AI differs across countries and communities. Ignoring these differences can lead to poorly designed systems that feel alien, awkward, or untrustworthy to users.

In short, this research matters because AI will only succeed if people feel comfortable using it. Treating AI as a tool that supports human judgment, rather than one that replaces it, is key to building trust, preventing fatigue, and ensuring AI delivers long-term social and economic benefits.
Read the Original
This page is a summary of: Masyarakat mulai lelah terhadap AI: Berpeluang makin masif di masa depan (Indonesian: "People are starting to tire of AI: It may become even more widespread in the future"), November 2025, The Conversation, DOI: 10.64628/aan.wc7dta3k3.