Dr Pearson on Refrigeration:

The Dark Side of AI

Warning against overreliance on opaque algorithms in safety-critical engineering decisions.


Last month’s column gave a brief overview of the possible applications of artificial intelligence to the refrigeration industry and was almost entirely a verbatim response from an AI bot to the question “write a 600-word column for an industry journal on the possible advantages that AI will bring to the refrigeration industry.” The only bit not created by the chatbot was the final sentence of 21 words, which posed the question “how much of the column was written by AI?”

So 95.7% of the words published last month came unaltered from a single pass through a software portal and I didn’t disagree with anything that the bot suggested. Some of it was a little clunky and didn’t quite fit my style—for example I abhor the use of the word “impact” as a synonym for “effect”—but overall I have to say it was very impressive.

However, I’m aware that this is partly because we were deliberately playing to the bot’s strengths. It is a “large language model,” which doesn’t mean that it uses words like “sesquipedalian,” but rather that the algorithm behind the portal draws on billions of connections between words to build a coherent, meaningful response based on all of the previous connections it has encountered. It is what is known as a “Generative Pre-trained Transformer” (GPT), and the economy of scale that this creates is difficult to imagine. It is what makes AI so impressive, but it has some drawbacks.

I’m not talking here about the existential threats posed by AI in science-fiction classics like “Terminator” or “Blade Runner,” but rather about the limits imposed by reference to a finite data set, no matter how big or sophisticated it is. For example, although publicly available AI tools have advanced enormously since the beginning of this year, and continue to do so, they are still not very good at number-related activities (the “600-word” response last month was only 472 words).
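As an aside, the 95.7% figure quoted earlier can be verified with a quick calculation. The sketch below assumes, as the column states, that the published piece consisted of the bot’s 472 unaltered words plus the author’s 21-word closing sentence:

```python
# Word counts quoted in the column: 472 words came unaltered from the
# chatbot, plus a 21-word closing sentence written by the author.
bot_words = 472
human_words = 21
total = bot_words + human_words  # 493 words published in total

share = bot_words / total
print(f"{share:.1%}")  # prints 95.7%
```

The ratio 472/493 rounds to 95.7%, matching the figure given in the column.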

The algorithms have also shown a tendency to “hallucinate,” creating responses that are untrue and presenting them as fact because they fit the pool of data on which the tool was trained, a pool which was itself fallible. The problem when AI’s hallucinations enter the public domain is that they reinforce the false as well as the true. This disadvantage is a bit like the fallacy of patent novelty: patent applications are thoroughly assessed by skilled examiners, but generally only by reference to previous patent applications and publications. This can prove that an idea is novel with regard to that data set, but it takes only one magazine article, academic paper or news clipping to demonstrate a lack of novelty that destroys the whole edifice.

This means that any text presented by AI needs to be checked carefully for relevance and completeness as well as accuracy. It is a fantastic tool in assisting with the effective communication of complex ideas, but like all the other tools that we use daily, such as search engines, grammar checkers or voice-activated digital assistants, it needs an occasional reality check. Discernment, as I mentioned in the column in December last year, is perhaps the most valuable skill that we can develop from an interest in mathematics. The rapid advance of AI capability has just increased that value.
