The technological flea circus that's currently hyped as "AI" relies on truly massive amounts of data and computation (much of it arguably misappropriated), which is used to "train" clusters of backpropagation algorithms. The result is some novel forms of artificial stupidity that can be nudged into performing some very impressive parlour tricks. As might be clear from that description, I think the whole thing’s a shell game.
But don't take my word for it. I’m just some guy with a tech blog. Ask David Chapman, a Zen Buddhist philosophy writer with a PhD in AI research, who describes this form of “AI” technology as:
"... exceptionally error prone, deceptive, poorly understood, and dangerous. They are widely used without adequate safeguards in situations where they cause increasing harms. They are not inevitable, and we should replace them with better alternatives."
This is quoted from the intro to Gradient Dissent, a booklet written as a technical supplement to his book on the subject for general audiences. Which he called, fairly unambiguously, Better Without AI (where "AI" refers to these backprop algorithms).
To avoid blowing into the "AI" hype bubble, and perhaps deflate it a bit before it does more damage than it's already doing, I've taken to referring to this wrongheaded approach to AI as "MOLE training". Where MOLE = Machine Operated Learning Emulator.
When I first came up with this backronym, the E stood for "Engine". But as Chapman explains in Better Without AI, even the marketing term "machine learning" endows the backprop tech with capabilities it doesn't have. It's not learning, or even simulating it, just creating a vague appearance of it (which is why I put "train" in quotes). So Learning Emulator seemed more fitting.
This approach to terminology began - as do so many things - with a social media hot take. Where my frustration with "AI" hype boiled over, and I referred to a "Large Language Model" as a "mole" (here’s the earliest example I can find in my fediverse account). Then I came up with the full backronym to fit the metaphor. But the more I explain it, the more it seems to fit.
A mole is a burrowing animal that's mostly blind, fairly stupid, and single-minded in its pursuit of food (and I guess occasionally sex, or there wouldn't be any moles). No sensible person would try to train one. However, according to their Wikipedia article:
"The star-nosed mole can detect, catch and eat food faster than the human eye can follow."
So in theory, if you're willing to spend an absurd amount of time and money on mole training, you may be able to get it to do something useful, at least some of the time. Similarly, you could probably rent out your trained moles, as long as you could convince people that the mole's erratic behaviour was the result of inadequate direction from the handler. Not the fact that, well, it's a mole.
Yes, this would require you to promote the future capabilities of trained moles with levels of hyperbole and manipulation of public perception that would make P. T. Barnum blush. But if you’re willing to spend an absurd amount of time and money training moles, you might as well go all in.
Returning to the digital MOLEs, the hyperbole and manipulation of public perception by companies wanting to rent out trained MOLEs has been so effective that lots of people now think they're intelligent. To the point that they seriously consider using them in place of experienced human workers. Some already are. As science fiction author Charles Stross put it:
"Unfortunately, human beings assume that LLMs are sentient and understand the questions they're asked, rather than being unthinking statistical models that cough up the highest probability answer-shaped object generated in response to any prompt, regardless of whether it's a truthful answer or not."
Some people even fear that we're in imminent danger of MOLEs taking over the world. To anyone with even a back-of-an-envelope understanding of MOLE training technology (like me), this seems... unlikely. So I have to wonder, why do the supposed tech genius CEOs of so many DataFarming companies seem to be leaning into it?
Perhaps this quote from the Post Gutenberg blog sheds some light on this?
"Treat as singing with forked tongues the chorus of alarmism about AI dangers from too many sci-tech stars. Note that their warnings are focused on the future — with no mention of damage being done by AI in the present, in which fortunes are being made by some of those alarmists as they themselves exploit the all-invading technology..."
Which segues nicely into a quote from a somewhat contentious blog piece by Simon Willison, which argues that trying to use more accurate terms in place of "AI" is a distraction. Perhaps, but I hope he'd agree that calling it MOLE training makes it fun, if nothing else. Anyway, he's bang on when he says that:
"Where this gets actively harmful is when people start to deploy systems under the assumption that these tools really are trustworthy, intelligent systems—capable of making decisions that have a real impact on people’s lives."
Although whether this is really the reason institutional decision-makers rent trained MOLEs is questionable. A more cynical explanation was summed up nicely in a 2024 comment on the 1/200 podcast by Jeremy Rose:
"I think the AI really just functions as a kind of justifying machine, rather than actually dictating what happens."
Indeed, as David Chapman points out, again in Better Without AI:
“In fact, artificial intelligence is something of a red herring. It is not intelligence that is dangerous; it is power. AI is risky only inasmuch as it creates new pools of power. We should aim for ways to ameliorate that risk instead."
Maybe it’s not really the MOLEs we need to worry about taking over the world, but the MOLE trainers? Because the corporations furiously engaged in promoting the million household uses of their trained MOLEs are already doing damage. Not only in the ways MOLEs are being deployed - which is bad enough - but in the effect they’re having on the future visioning and funding of digital tech research.
A few days ago on the fediverse, I linked to a July article from the Register, quoting Pierre-Yves Gibello, CEO of a Free Code project incubator called OW2, who said that the EU’s Next Generation Internet project, which has funded hundreds of pioneering Free Code projects, was facing an almost total loss of funding because:
"Our French [Horizon national contact point] was told - as an unofficial answer - that because lots of [EU technology research] budget are allocated to AI, there is not much left for Internet infrastructure."
I find all this especially shocking because a trained MOLE is the antithesis of Free Code, the ultimate proprietary software. It can't be opened or studied to understand why it got something wrong, or right. So no amount of research or funding can improve it.
As Tara Tarakiyee, FOSS Technologist at Sovereign Tech Fund, said a month ago:
"A system that can only be built on proprietary data can only be proprietary. It doesn’t get simpler than this self-evident axiom."
A MOLE training system that fully respects our software freedoms - so we can protect our other freedoms - is probably not possible. This is bad enough, but it gets even worse. MOLE trainers are now on the brink of achieving what the Vulture Capitalists promoting Source Available software couldn’t do: weakening the Open Source Definition, so they can get its stewards, the Open Source Initiative (OSI), to endorse their anti-freedom software as “Open Source”.
The OSI ought to be defending the principled but unpopular view that "AI" isn't Open Source unless every aspect of it respects all 4 software freedoms (to run, study, modify and distribute, all without restrictions) - the same standard they've held to for other kinds of software, despite the protestations of the Ethical Source/Source Available camps. Instead, Stefano Maffulli, the OSI's current executive director, has led the charge on a new Open Source AI Definition, which waters down the definition of Open Source, and keeps watering it down, until some MOLE training fits into it.
This is bad. Very bad. But it gets worse. Even as the EU is about to splurge huge volumes of its digital tech research funding on training a better MOLE, and the OSI is weakening a key bastion of software freedom, copyright violation lawsuits are exposing the fact that there’s no pea under any of the shells. That the whole game is rigged.
If even one jurisdiction rules that a trained MOLE is a derivative of any copyrighted work it's trained on (which it is), the whole charade is over. Because as the MOLE trainers have already admitted in hearings on the subject, paying copyright royalties to everyone whose data is used in MOLE training defeats the entire purpose. Just like paying all the money made from exhibiting the Mechanical Turk to the guy who sits inside it, making it seem intelligent.
If you find all this as shocking as I do, please consider republishing and signing the open letter calling on the EU to continue funding Next Generation Internet projects, as OW2 have done on their website. We cannot let the MOLEs inherit the earth. They’d just eat it (or maybe have fumbly mole sex with it).
Images:
"Mole" by cowboytoast, licensed CC BY-SA 2.0.
"star-nosed-mole-3" by gordonramsaysubmissions, licensed CC BY 2.0.