What is it about?

The article reviews experimental evidence supporting different scenarios in which artificial intelligence-based chatbots (e.g., ChatGPT, Bard) could hinder the proper use of cognitive functions that are vital for successful adaptation to the environment. Using ChatGPT could replace various stages of cognitive processes such as problem-solving, thereby preventing individuals from developing these skills themselves.


Why is it important?

Anticipating these potential cognitive consequences could alert the educational, political, and social communities that, in the long term, such technologies might have a negative impact on the population or widen the cognitive gap within it. These communities should design and develop programs to mitigate such effects in the medium and long term.

Perspectives

This is the first time that a technology can address problems of many kinds and give users verbal instructions on the steps to follow to execute a plan. If this type of technology becomes ubiquitous and pervasive, it may affect executive functions, particularly problem-solving, as well as other areas such as decision-making. A society with a diminished ability to plan and solve problems could be at risk: it may make poor political decisions and become vulnerable to simplistic narratives that fail to engage with the complexity of current issues. In other words, a depletion of a society's executive functions makes it more vulnerable.

Umberto Leon-Dominguez
Universidad de Monterrey

Read the Original

This page is a summary of: Potential cognitive risks of generative transformer-based AI chatbots on higher order executive functions. Neuropsychology, February 2024, American Psychological Association (APA). DOI: 10.1037/neu0000948.
