Posts

Why ChatGPT?

Why ChatGPT? Why Stable Diffusion? They are destructive in so many ways, expensive to operate, and probably ultimately going to fail, polluting the information environment for years. They don’t even make good money. So, why?

The General Intelligence of Robots

By the phrase “general intelligence,” “artificial intelligence” researchers mean two things. One is simply a machine learning system that can, like humans, apply its knowledge to many different tasks. No such system has yet been built; language models produce language, diffusion models produce images, and neither seems to have any concept of the underlying realities those words and images reflect. The second is the “general intelligence,” g, derived from intelligence tests, which some believe to be a unifying reality behind intelligence test scores. A Microsoft paper on GPT-4 cites a Wall Street Journal editorial by Linda Gottfredson, “Mainstream Science on Intelligence,” as providing a definition of general intelligence. It’s an appalling piece, repeating debunked claims about racial differences in intelligence and presenting as scientific consensus hypotheses that are at best debatable and at worst outright false.

Comments In Response to the US Copyright Office's Artificial Intelligence Study

Unfortunately, I do not know the details of the law that bears on this subject. Instead, I offer some general remarks on the technology which I hope will be useful contributions. When a painter creates a work, the brush does not hold the copyright; the painter holds the copyright. It is no different with “artificial intelligence.” If a painter copies someone else’s work, that is a copyright violation, no matter what tool the painter uses. Likewise, if someone uses an AI model to copy an artist’s writing or drawing style and publishes the result, and this does not fall under fair use, that is a copyright violation, regardless of the tool. Because “artificial intelligence” is such an efficient violator of copyright, because there is no way to identify an AI model’s sources, and because it is impossible to delete a work from an AI model, it is both appropriate to insist on an opt-in model for works used to train an AI model…

The Technological Singularity: a Few Links

(I wrote, and then discarded, a reply to Claire Berlinski's articles on AI; she entirely believes the TESCREAL arguments. Along the way, I gathered a few links and figured I'd record them here.)   AI researcher and science fiction author Vernor Vinge's 1993 essay, Technological Singularity, where the term was first used. For one of his fictional treatments of the subject, see his novel A Deepness in the Sky. “The Singularity: a Panel with Science Fiction Writers Vernor Vinge, Charlie Stross, Alastair Reynolds, and Karl Schroeder,” 2013. Link (video).  “I believe that the creation of greater-than-human intelligence will occur during the next thirty years. I'll be surprised if this event occurs before 2005 or after 2030.” – Vernor Vinge  Seven years to go.

AI-AI-AI

During a Mastodon conversation between Signal Foundation President Meredith Whittaker and data scientist and activist Emily Gorcenski, it was pointed out that most of the techniques that are called “AI” date to the 1970s and 1980s. Some predate modern computing. Ms. Whittaker: Deep learning techniques date from the 1980s, & “AI” had been hot/cold for decades, not slow until 2012. There was no new “single idea” in 2012. What WAS new, & propelled the AI boom, was concentrated resources (data/compute) controlled by tech cos. – link Ms. Gorcenski: Backpropagation dates back to the 60s. Deep learning neural nets used to be called “group methods of data handling” and date back to 1978 or so, when the first 8-layer (polynomial) neural network was developed. Fuzzy approximation networks and radial basis function networks hail from a similar era. Wiener explored the polynomial chaos in the 40s, the Karhunen-Loeve transform predates that iirc. – link Everything that could be done…

Brief Reflections On "Artificial Intelligence"

Because of my limited knowledge, I have chosen to present these as disconnected notes rather than a more organized essay. However, I have not often seen these thoughts expressed elsewhere, so I hope this short note adds something new to the ongoing discourse. It’s not, really. Not intelligence, anyway. It doesn’t know truth from falsehood, or right from wrong. The technologies that are called artificial intelligence (diffusion models and large language models) are basically very large grammars. They seem to replicate part of the brain’s visual and speech centers, but no other neurological functions. In brain-damaged people, there is a thing called confabulation. Confabulation is what happens when a damaged brain reaches for a memory and finds it’s not there. It just fills in the gaps. And I think that’s pretty much what an LLM does; it seems to be a replication of part of a brain, but it has no real memory or logical capacity or ethics. A human author knows to fact-check and not to plagiarize…

Replacing Twitter: Uses of Microblogging

There is a tension between microblogging as a social activity and reading a microblog site for news. One wants to know the official announcements of, say, the state of Texas or the Tory party, yet if these things came from your friends or even casual contacts, you would ignore or block them. This is the balance one negotiates on Twitter. To effectively replace Twitter, a site or service has to support this balance.