There’s No AI In A Markov Chain, But They’re Fun To Play With

Amid all the hype about AI, it sometimes seems as though the world has lost sight of the fact that software such as ChatGPT contains no intelligence. Instead, it’s an extremely sophisticated system for extracting plausible machine-generated content from the corpus on which it was trained. There’s a long history behind machine-generated text, and perhaps the simplest example comes in the form of a Markov chain. [Ben Hoyt] takes us through how these work, and provides some Python code so that you can roll your own.
If you’re uncertain what a Markov chain is, consider the predictive text on your phone. It works by offering the statistically most likely next word in your sentence, and should you accept all of its choices, it will deliver sentences which are superficially readable but otherwise complete nonsense. Using very simple, short source texts, he demonstrates how a probability map is generated for two-word phrases, and how a likely next word can then be drawn from it, as in the sketch below. It’s not AI, but it can be a lot of fun to play with, and it opens the door to the entire field of computational linguistics. We haven’t set one loose on Hackaday’s archive yet, but we suspect it would talk a lot about the Arduino.
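Here’s a minimal sketch of that idea in Python, in the spirit of [Ben Hoyt]’s write-up rather than his exact code. Because every occurrence of a follower word is kept in the list, a uniform random pick is automatically weighted by frequency; the corpus.txt filename is just a placeholder for whatever source text you feed it.

```python
import random
from collections import defaultdict

def build_chain(text, state_size=2):
    """Map each two-word prefix to the list of words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - state_size):
        prefix = tuple(words[i:i + state_size])
        chain[prefix].append(words[i + state_size])
    return chain

def generate(chain, length=30):
    """Walk the chain: start at a random prefix, repeatedly pick a likely next word."""
    prefix = random.choice(list(chain.keys()))
    output = list(prefix)
    for _ in range(length):
        followers = chain.get(prefix)
        if not followers:
            break  # dead end: this prefix only appears at the very end of the source
        output.append(random.choice(followers))
        prefix = tuple(output[-len(prefix):])
    return " ".join(output)

if __name__ == "__main__":
    source = open("corpus.txt").read()  # placeholder filename
    print(generate(build_chain(source)))
```

Lengthen the prefix and the output hews ever closer to the source text; shorten it to one word and the nonsense gets considerably more creative.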
We’re talking about Markov chains here with respect to language, but it’s also worth remembering that they work for music too.
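As a rough illustration of the same trick applied to music, here’s the bigram-prefix chain run over a toy melody of MIDI note numbers. The melody itself is made up for the example; nothing here is specific to any particular music library.

```python
import random
from collections import defaultdict

# A made-up melody as MIDI note numbers (60 = middle C)
melody = [60, 62, 64, 62, 60, 64, 65, 67, 65, 64, 62, 60, 64, 67, 69, 67]

# Same idea as with words: map each pair of notes to the notes that follow it
chain = defaultdict(list)
for a, b, c in zip(melody, melody[1:], melody[2:]):
    chain[(a, b)].append(c)

# Walk the chain from the opening pair to compose a new tune
state = (melody[0], melody[1])
tune = list(state)
for _ in range(16):
    followers = chain.get(state)
    if not followers:
        break  # this pair never recurs in the source melody
    tune.append(random.choice(followers))
    state = (tune[-2], tune[-1])

print(tune)
```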
Header image: a bad AI image, generated by DALL-E from the prompt “Ten thousand monkeys with typewriters”.