Why ChatGPT?

Why ChatGPT? Why Stable Diffusion? They are destructive in so many ways, expensive to operate, and probably going to fail in the end, leaving the information environment polluted for years. They don’t even make good money. So why?

The answer seems to be that they are effective marketing: they persuade executives to fund further development. The machine learning technology on which they are based is of genuine use; from biomedicine to climate modeling, it is valuable. But machine learning is also an expensive technology, both power- and hardware-intensive. The firms which develop machine learning models need funding, and rather than take the hard route of persuading investors, the leaders in the field, especially Sam Altman, have decided to use the illusion of intelligence and knowledge that generative language models project to persuade the executive staff of major corporations to fund the development of their technology. And it works: huge amounts of funding are pouring in. Generative ML systems are being built into all kinds of systems, even where they add little or negative value.

But the harm is enormous. Cryptocurrency has so far largely enabled crime, despite the liberatory hopes of the cypherpunks. Generative language and image models are so far used as part of confidence games, filling up the public web with huge amounts of sometimes dangerous blither.

This underscores the gullibility of our business executives: a persuasive line of patter, even machine-generated patter, even when they know the machine knows nothing beyond unvalidated text from the public internet, gets them to lay down huge amounts of money. Perhaps some researchers are also persuaded by their own creations. But it is all an illusion.

If executives are so easily taken in, what then are we to conclude about their decision-making before Sam Altman launched his great con? How competent are the executives who shape so much of our lives?

Many people believe that confident language use signifies intelligence, but that is wrong, and we are learning in the hardest way how easy it is for people to be taken in. We are not helped in this by popular fictional images of rational artificial intelligence. For the kinds of programs we were used to, belief in machine rationality was not unreasonable: they were deterministic, if sometimes in ways that were difficult to understand. And people understand that a search engine is a search engine; it may find false information, or nothing useful at all. But generative ML systems are stochastic. Not only are they not rational in the human sense, they are not even predictable. But, oh, their style is so confident. As Karawynn Long observes, “OpenAI, Microsoft, Google, and other companies are deliberately guiding these algorithms to emulate a knowledgeable, intelligent, and friendly human, even though the software is exactly zero of those four things.”

I’ve been aware for decades that many people take voluble confidence as a sign of knowledge (it is how so many people can be persuaded to take ideas from fiction as models for their own behavior), but it is frightening to see how generative ML models, and the organizations that operate them, can take advantage of this. If, as I suspect, the entire point of making these systems public is a very large confidence game, just as cryptocurrency is a very large Ponzi scheme, what then? Will there be an awakening?
