The Cognitive Hazards of Widespread Chatbot Use

My latest smartphone update came with an easy-access chatbot button that was hard to disable; I doubt most users will bother, let alone figure out how.

There is, I think, a risk of creating cognitive disabilities by providing too-easy access to chatbots, much as one can develop a physical disability through persistent restriction of motion. To some extent, all cognition-enhancing technologies do this: people who write don't develop oral and memory skills; people who use calculators don't learn paper-and-pencil computation; people who grow up with photography don't learn to draw. But unlike writing, calculators, and so on, chatbots are not aids to cognition but replacements for it, and replacements controlled by someone else. If a child writes, the words and ideas have at least passed through the child's mind; a chatbot bypasses this entirely, inserting ideas from an external source, so routine chatbot use interferes with thought itself. Nor is there any privacy for someone who routinely interacts with a chatbot: the owner of the bot can see every expressed thought.

This is a dangerous technology even for adults; for children I think it creates a risk of induced developmental deficiency. LLM chatbots are apparently being pushed in the hope of making a large number of people dependent on the technology. If this truly improved thinking, it might be a worthwhile tradeoff, but so far there is no evidence that it does. Instead, it produces sloppy writing, sloppy graphics, and less thinking. On top of that, there are the issues of privacy and freedom of thought.

This is a dangerous technology. Butlerian jihad now!
