AI-AI-AI

During a Mastodon conversation between Signal Foundation President Meredith Whittaker and data scientist and activist Emily Gorcenski, it was pointed out that most of the techniques now called “AI” date to the 1970s and 1980s. Some predate modern computing.

Ms. Whittaker:

Deep learning techniques date from the 1980s, & “AI” had been hot/cold for decades, not slow until 2012. There was no new “single idea” in 2012. What WAS new, & propelled the AI boom, was concentrated resources (data/compute) controlled by tech cos. – link

Ms. Gorcenski:

Backpropagation dates back to the 60s. Deep learning neural nets used to be called “group methods of data handling” and date back to 1978 or so, when the first 8-layer (polynomial) neural network was developed. Fuzzy approximation networks and radial basis function networks hail from a similar era. Wiener explored the polynomial chaos in the 40s, the Karhunen-Loeve transform predates that iirc. – link
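None of this is exotic machinery. The core of backpropagation, the 1960s-era technique named above, fits in a few dozen lines of plain Python. The sketch below trains a tiny two-layer network on XOR using nothing but the chain rule and gradient descent; the layer sizes, learning rate, seed, and dataset are illustrative choices of mine, not anything from the posts quoted here.

```python
# A minimal sketch of backpropagation on a two-layer network learning XOR.
# All hyperparameters (layer sizes, learning rate, step count) are
# illustrative assumptions, not taken from the quoted posts.
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic function a single-layer perceptron cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights for a 2 -> 4 -> 1 network, small random initial values.
W1 = rng.normal(scale=0.5, size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros((1, 1))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: the chain rule applied layer by layer
    # (squared-error loss, sigmoid derivative s * (1 - s)).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# Outputs approach [[0], [1], [1], [0]] after training.
print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(3))
```

The point of the sketch is the one the quoted posts make: the mathematics involved is old and compact. What changed after 2012 was the scale of the data and compute thrown at it, not the technique.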

Everything that could be done with computers was possible in embryonic form from the very beginning. Thinkers like Ada Lovelace and, later, Alan Turing anticipated most of modern computing, and philosophers had been speculating about such issues for millennia; the emergence of large language models and diffusion image generators has provided some answers to age-old philosophical questions while raising many more. But there is nothing new here. It was a hope of the early AI researchers that intelligence would emerge from sufficiently large models using these techniques, but this does not appear to be the case.

Economist Brad DeLong trained a chatbot, which he named SubTuringBradBot, on his book Slouching Towards Utopia and was unable to get answers from it that showed understanding. He writes:

Again: this is not the answer I would give. Again: it is just giving the questioner a chopped salad of mixed-up reasons, rather than an explanation grounded in a theoretical framework.

But how to fix this when the ChatBot is not grounded in a theoretical framework on which it builds explanations?

In other words, the chatbot failed the Turing test. It interacts superficially like a person, but it fails when deeper questions are raised. Apparently intelligence does not emerge from sufficiently large language models. What the marketeers are calling artificial intelligence is in fact artificial glibness.
