Posts

"AI" and Productivity

[I keep bringing these up on Bluesky, so I think it’s time to gather them up and make a post out of them.] This is a collection of articles on the problems of “AI”: really, various sorts of generative machine learning models, including generative large language models (gLLMs) and generative stable diffusion models (gSDMs), which so far do not live up to the promises of their marketers. A computer you can talk to is one of the great dreams of computing, and the initial releases of transformer-model-based chatbots seemed to live up to it. There have been long-standing qualms about the idea, most notably Dijkstra’s argument that the imprecision of natural language was an impediment to correct thinking about computation and to accurate computing.[1] Unfortunately, so far it appears that Dijkstra was correct: gLLMs and gSDMs are notorious for errors, and they are not currently designed to indicate uncertainty to their users, so people confidently rely on their erroneous output. There ar...

Honest Web Site Ratings

(A step away from my usual political posting.) There should be website ratings like:

"You'll never find what you want using the tools the site provides; try a search engine instead."

"They can usually ship you what you want, but they'll try to sell you everything in the store on your way to finding it, and they treat their people and suppliers like dirt."

"They've got local stores with decent stock, but if you go to the website it'll throw so many pop-ups at you that you'll just wish you'd gone to the store instead; people steal things out of their pickup bins; and if they screw up an order you will not be able to reach their customer service."

The Cognitive Hazards of Widespread Chatbot Use

My latest smartphone update came with an easy-access chatbot button that was hard to disable; I doubt most users will bother, let alone figure out how. There is, I think, a risk of creating cognitive disabilities by providing too-easy access to chatbots, the way one can develop a physical disability through persistent restriction of motion. To some extent, all cognition-enhancing technologies do this: people who write don't develop oral and memory skills; people who use calculators don't learn paper-and-pencil computational skills; people who grow up with photography don't learn to draw. But unlike writing, calculators, and so on, chatbots are not aids to cognition but replacements for it, and replacements controlled by someone else. If a child writes, the words and ideas have at least passed through the child's mind; chatbots bypass this entirely, inserting ideas from an external source; routine chatbot use interferes with thought. There i...

"AI" and Intellectual Property

If, say, I broadcast a short story on the radio, I have to license the original work. If I print a book, I have to have a license to do so. If I publish a thinly-veiled rewrite of a book without a license, that is copyright infringement. And so on and on. It ought to be copyright infringement to do the same with a large language model (LLM) or, for visual art, a stable diffusion model (SDM). LLMs and SDMs do not exist, do not operate at all, without a body of work to build the models from; without that training data there is no model. Therefore the developer of an LLM, an SDM, or any other future generative machine learning technology ought to be required to license any work used to develop that model.

Nuclear Fusion, "AI," and Big Science

Sam Altman, who runs OpenAI, is a major investor in a firm called Helion (unlocked Bloomberg article), which claims it will be producing electricity from nuclear fusion by 2028. This is the second version of this article; physicist Stefan Urbat wrote to inform me that after 60 years there has been progress in dealing with second-order instabilities.

Why ChatGPT?

Why ChatGPT? Why Stable Diffusion? They are destructive in so many ways, expensive to operate, and probably going to fail in the end, polluting the information environment for years. They don’t even make good money. So, why?

The General Intelligence of Robots

By the phrase “general intelligence,” “artificial intelligence” researchers mean two things. One is simply a machine learning system that can, like a human, use its knowledge for many different tasks. Nothing like this has yet been built; language models produce language, stable diffusion models produce images, and neither seems to have any concept of the underlying realities those words and images reflect. The second is the “general intelligence,” g, derived from intelligence tests, which some believe to be a unifying reality behind intelligence test scores. A paper on GPT-4 from Microsoft cites a Wall Street Journal editorial by Linda Gottfredson, “Mainstream Science on Intelligence,” as providing a definition of general intelligence. It’s an appalling piece, repeating debunked claims about racial differences in intelligence and presenting as scientific consensus hypotheses that are at best debatable and at worst outright false.