Experts Warn of Cognitive Risks Amid New AI Literacy Efforts

New AI literacy initiatives seek to counteract the cognitive risks posed by generative AI reliance and algorithmic biases, emphasizing informed engagement and critical thinking.

Key details

  • Oregon State University launched an AI Literacy Center to educate users on generative AI's data sources, biases, and limitations.
  • Librarians observed AI 'hallucinations' in which AI suggested non-existent resources, highlighting reliability issues.
  • Zhigang Feng warns of a 'Middle-Intelligence Trap' in which overreliance on AI erodes critical thinking and creativity.
  • Strategies to escape this trap include building cognitive reserves, introducing strategic friction, and redefining success around cognitive growth.

As generative AI tools like ChatGPT reshape how information is accessed and consumed, a critical examination of their cognitive and educational impacts has emerged. At Oregon State University (OSU), the establishment of the AI Literacy Center aims to address pressing questions about AI's reliability, data sources, and embedded biases. Directed by humanities librarian Laurie Bridges, the center responds to challenges librarians have observed firsthand, such as patrons requesting nonexistent materials suggested by AI, a phenomenon known as AI hallucination. Bridges emphasizes that AI algorithms are not neutral and can reinforce societal biases, and she advocates for informed usage and critical engagement through the center's educational initiatives and free training sessions (source 95644).

Simultaneously, Zhigang Feng, Ph.D., an associate professor at the University of Nebraska at Omaha, raises concerns about cognitive stagnation driven by AI reliance. Feng introduces the concept of the "Middle-Intelligence Trap," an analogy to the economic middle-income trap: excessive dependence on AI for thinking and problem-solving erodes human critical thinking and creativity. He cites studies, including one from MIT, showing weakened neural connectivity and diminished originality in students who rely on large language models. Feng warns that AI-generated feedback loops reinforce existing biases and stifle innovative thought. To escape this intellectual plateau, he proposes strategies such as building cognitive reserves through active mental practice, introducing strategic friction into interactions with technology, and redefining success to value cognitive growth over efficiency. He stresses that the trap results from habitual choices rather than inevitable technological progression (source 95645).

Together, these developments underscore an urgent need for AI literacy that equips users to critically navigate AI’s strengths and limitations. By fostering informed decisions and deliberate cognitive effort, such initiatives aim to preserve human intellectual agency amidst AI’s expanding role in information and learning contexts.