Anyone who’s talked to me over the last couple of years about the technological flea circus that the impressionable SillyCon Valley business press have been calling "AI"1 will have noticed that I pointedly refuse to call it “AI”. Instead I refer to it as “MOLE” and talk about “MOLE training”. I thought it was about time I wrote about why, in some detail.
The tech that’s been hyped as “AI” relies on using truly massive amounts of data and computation (much of it arguably misappropriated) to "train" clusters of backpropagation algorithms. The result is novel forms of artificial stupidity that can be nudged into performing some very impressive parlour tricks. As you might have guessed from that description, I think the whole thing’s a shell game.
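For anyone who hasn’t looked behind the curtain, here’s roughly what that "training" amounts to, in a toy sketch of my own (a made-up two-number "model" and fabricated data, nothing like the billion-parameter networks the real backprop clusters chew through): nudge some numbers, over and over, until a curve fits the data you fed in.

```python
# Toy sketch only: a two-number "model" fitted to fabricated data by gradient
# descent, the nudge-the-numbers recipe that backprop scales up to billions of
# parameters. Nothing here resembles a production system.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, 100)   # stand-in for the "truly massive amounts of data"
y = 3 * x + 0.5               # the pattern hiding in that data

w, b = 0.0, 0.0               # the "model": two numbers waiting to be nudged
for _ in range(500):          # the "training": repeat until the curve fits
    error = (w * x + b) - y
    w -= 0.1 * np.mean(error * x)   # nudge w against the error
    b -= 0.1 * np.mean(error)       # nudge b against the error

print(w, b)  # ends up near 3 and 0.5; no understanding anywhere in sight
```

Scale that up enormously, on data scraped from everywhere, and you get the parlour tricks.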
But don't take my word for it. I’m just some guy with a tech blog. Ask David Chapman, a Zen Buddhist philosophy writer with a PhD in AI research, who describes this form of “AI” technology as:
"... exceptionally error prone, deceptive, poorly understood, and dangerous. They are widely used without adequate safeguards in situations where they cause increasing harms. They are not inevitable, and we should replace them with better alternatives."
This is quoted from the intro to Gradient Dissent, a booklet he wrote as a technical supplement to his book on the subject for general audiences. Which he called, fairly unambiguously, Better Without AI (where "AI" refers to these backprop algorithms).
It’s to avoid blowing into the "AI" hype bubble, and perhaps help to deflate it a bit before it does more damage than it’s already doing, that I've taken to referring to this wrongheaded approach to AI as "MOLE training". Where MOLE stands for Machine Operated Learning Emulator.
When I first came up with this backronym, the E stood for "Engine". But as Chapman explains in Better Without AI, "machine learning" is also a reputation laundering term, endowing backprop algorithms with capabilities they don't have. They don’t learn, or even simulate learning; they just create a vague appearance of it (which is why I put "train" in quotes). So Learning Emulator seemed more fitting.
It all began - as do so many things - with a social media hot take. Where my frustration with "AI" hype boiled over into referring to a "Large Language Model" as a "mole"2. This tickled my fancy so much that I came up with a full backronym to fit the metaphor. But the more I explain it, the more it seems to fit.
A mole is a burrowing animal that's mostly blind, fairly stupid, and single-minded in its pursuit of food (and I guess occasionally sex, or there wouldn't be any moles). No sensible person would try to train one. However, according to their Wikipedia article:
"The star-nosed mole can detect, catch and eat food faster than the human eye can follow."
So in theory, if you're willing to spend an absurd amount of time and money on mole training, you may be able to get it to do something useful, at least some of the time. Similarly, you could probably rent out your trained moles, as long as you could convince people that the mole's erratic behaviour was the result of inadequate direction from the handler. Not the fact that, well, it's a mole.
Yes, this would require you to promote the future capabilities of trained moles with levels of hyperbole and public manipulation that would make P. T. Barnum blush. But if you’re willing to spend an absurd amount of time and money training moles, you might as well go all in.
Returning to the digital MOLEs, the hyperbole and public manipulation by companies wanting to rent out trained MOLEs have been so effective that lots of people now think they're intelligent. To the point that they seriously consider using them in place of experienced human workers. Some already are. As science fiction author Charles Stross put it:
"Unfortunately, human beings assume that LLMs are sentient and understand the questions they're asked, rather than being unthinking statistical models that cough up the highest probability answer-shaped object generated in response to any prompt, regardless of whether it's a truthful answer or not."
Some people even fear that we're in imminent danger of MOLEs taking over the world. To anyone with even a back-of-an-envelope understanding of MOLE training technology (like me), this seems... unlikely. So I have to wonder, why do the supposed tech genius CEOs of so many DataFarming companies seem to be leaning into it?
Perhaps this warning from the Post Gutenberg blog sheds some light on this?
"Treat as singing with forked tongues the chorus of alarmism about AI dangers from too many sci-tech stars. Note that their warnings are focused on the future — with no mention of damage being done by AI in the present, in which fortunes are being made by some of those alarmists as they themselves exploit the all-invading technology..."
Which segues nicely into a quote from a somewhat contentious blog piece by Simon Willison, which argues that trying to use more accurate terms in place of "AI" is a distraction. Perhaps, but I hope he'd agree that calling it MOLE training makes it fun, if nothing else. Anyway, he's bang on when he says that:
"Where this gets actively harmful is when people start to deploy systems under the assumption that these tools really are trustworthy, intelligent systems—capable of making decisions that have a real impact on people’s lives."
Whether this is really the reason institutional decision-makers are renting trained MOLEs is questionable, though. A more cynical explanation was summed up nicely in a 2024 comment by Jeremy Rose on the 1/200 podcast:
"I think the AI really just functions as a kind of justifying machine, rather than actually dictating what happens."
People in positions of power and responsibility can be spared a lot of trouble and hassle when they’re held to account for their decisions by activist campaigns, watchdog institutions or the news media, if they can just shrug their shoulders and say “computer says no”.
Indeed, as David Chapman points out, again in Better Without AI:
“In fact, artificial intelligence is something of a red herring. It is not intelligence that is dangerous; it is power. AI is risky only inasmuch as it creates new pools of power. We should aim for ways to ameliorate that risk instead."
Maybe it’s not the MOLEs we need to worry about taking over the world, but the MOLE trainers? Because the corporations furiously engaged in promoting the million household uses of their trained MOLEs are already doing damage. Not only in the ways MOLEs are being deployed - which is bad enough - but in the effect they’re having on the future visioning and funding of digital tech research.
A July article on The Register featured an interview with Pierre-Yves Gibello, CEO of a Free Code project incubator called OW2. Who is concerned (as am I) that the EU’s Next Generation Internet project, which has funded hundreds of pioneering Free Code projects, is facing an almost total loss of funding. Probably because:
"Our French [Horizon national contact point] was told - as an unofficial answer - that because lots of [EU technology research] budget are allocated to AI, there is not much left for Internet infrastructure."
I find all this especially shocking because the technology they’re proposing to shift most of their software research and development funding to is the ultimate proprietary software, the antithesis of Free Code. A trained MOLE can't be opened or studied to understand why it got something wrong, or right. So no amount of research or funding can improve it.
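If that sounds like rhetorical overreach, consider what the artefact you get out of MOLE training actually looks like. This toy sketch of mine (invented layer names, random numbers, a tiny fraction of real scale) is faithful in one respect: there's no source code to read, just arrays of floats.

```python
# Toy sketch (random numbers, invented layer names): the "trained MOLE" artefact
# is nothing but arrays of floating-point weights. No function bodies, no rules,
# no comments; "studying" it means staring at numbers like these.
import numpy as np

rng = np.random.default_rng(0)
trained_mole = {
    "layer_0/weights": rng.standard_normal((1024, 1024)),
    "layer_0/bias": rng.standard_normal(1024),
    # real systems stack hundreds of layers like this, billions of parameters in all
}

for name, weights in trained_mole.items():
    print(name, weights.shape, weights.ravel()[:3])  # numbers all the way down
```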
As Tara Tarakiyee, FOSS Technologist at Sovereign Tech Fund, said a month ago:
"A system that can only be built on proprietary data can only be proprietary. It doesn’t get simpler than this self-evident axiom."
This is bad enough, but it gets even worse. MOLE trainers are now on the brink of achieving what the “cloud computing” investors promoting Source Available software couldn’t: weakening the Open Source Definition so they can get its stewards, the Open Source Initiative (OSI), to endorse their anti-freedom software as “Open Source”.
A MOLE training system that fully respects our software freedoms - so we can protect our other freedoms - is probably not even possible. The OSI ought to be defending the principled but unpopular view that "AI" isn't Open Source, unless every aspect of it respects all 4 software freedoms (to run, study, modify and distribute, all without restrictions). The same standard they've held to for other kinds of software, despite the protestations of the Ethical Source / Source Available camps.
Instead, Stefano Maffulli, the OSI's current executive director, has led the charge on a new Open Source AI Definition, whose whole purpose is to water down the definition of Open Source, and keep watering it down, until some MOLE training fits into it.
This is bad. Very bad. But it gets worse.
Even as the EU is about to splurge huge volumes of its digital tech research funding on training the better MOLE, and the OSI is wilfully weakening a key bastion of software freedom to help the MOLE trainers with their reputation laundering, copyright violation lawsuits may be about to flip the table. Exposing the inconvenient fact that there’s no pea under any of those shells, and the whole game is rigged.
If even one jurisdiction rules that a trained MOLE is a derivative of any copyrighted work it's trained on (and it seems logical that it is), the whole charade is over. Because as the MOLE trainers have already admitted in hearings on the subject, paying copyright royalties to everyone whose data is used in MOLE training defeats the entire purpose. Just like paying all the money made from exhibiting the Mechanical Turk to the guy who sits inside it making it seem intelligent.
If you find all this as shocking as I do, please consider republishing and signing the open letter calling on the EU to continue funding for Next Generation Internet projects, as OW2 have done on their website (and I’ve done here). We cannot let the MOLEs inherit the earth. They’d just eat it (or maybe have fumbly mole sex with it).
Images:
"Mole" by cowboytoast, licensed CC BY-SA 2.0.
"star-nosed-mole-3" by gordonramsaysubmissions, licensed CC BY 2.0.
"Computer says no" by sndrv, licensed CC BY 2.0.
"teaching mosfet 'shell game'" by adafruit, licensed CC BY-NC-SA 2.0.
Or more likely had me talk at them about it…
Here’s the earliest example I can find in my fediverse account, from June last year, but I’m sure the original coinage was from months before that.