Claim Chowdering Gruber's Claim Chowder

John Gruber makes a ridiculous assertion, or so it seems to me. Criticising the claim by Dario Amodei, CEO of the AI startup Anthropic, that ‘AI, and not software developers, could be writing all of the code in [their] software in a year’, Gruber takes things the other way:

It may well be true that 90 percent of the lines of programming code that are written today, Friday 13 March 2026, will have been generated by AI. If anything, it’s probably a higher percentage.

This seems like nonsense to me. Certainly AI-generated code is being created, and some of it released. But I work in software development, in a real company making real software that moves people’s money around. There’s some experimentation going on; people use it to try things out, or to better understand things, as I mentioned a few weeks ago. But there are millions of lines of code out there being written and managed every day by real humans.

And when you’re working in a highly regulated industry, as I do with payment cards, or medical systems, say, it seems unlikely to me that we will ever release significant applications that were not written by humans.

Maybe I’m being naive, at least in saying ‘ever’: if there’s one certainty it’s that things will change. But the idea that we’re already above the 90% AI-generated mark? Sure, Anthropic are likely to be at that level. They build these tools. Eating your own dogfood and all that. But for normal, day-to-day development? It just doesn’t ring true to me.

Plus, of course, software development is about a lot more than writing code. But that’s a discussion for another time.


In more ‘AI’ nonsense, Grammarly is giving bad advice and tagging writers’ names to it, without paying the writers or even getting their permission.

I tried Grammarly a few years ago and hated it, but that was long before the LLM boom. This is beyond unethical.


I find it deeply weird and surprising to read of authors claiming as ‘mine’ images they requested from, or copied and manipulated using, ‘AI’. The linked piece quotes the kind of claim I mean: ‘it’s all mine.’ When it plainly isn’t.

Writers, you’d think, ought to understand that words have meanings.


Ed Zitron’s latest, On NVIDIA and Analyslop, is very good on the current state of some financial stuff related to ‘AI’. It’s also good on how much more complex software development is than the ‘vibe coding’ believers would tell you:

Software is a tremendous pain in the ass. You write code, then you have to make sure the code actually runs, and that code needs to run in some cases on specific hardware, and that hardware needs to be set up right, and some things are written in different languages, and those languages sometimes use more memory or less memory and if you give them the wrong amounts or forget to close the door in your code on something everything breaks, sometimes costing you money or introducing security vulnerabilities.


Automatic Introspection

On texts created by prompts. If you can express your meaning in a prompt, why not just send out the prompt?


Ablative Irony

Also via Kottke comes this article by Claudio Nastruzzi at The Register, where he talks of ‘semantic ablation’ in text generated by ‘AI’:

When an author uses AI for “polishing” a draft, they are not seeing improvement; they are witnessing semantic ablation. The AI identifies high-entropy clusters – the precise points where unique insights and “blood” reside – and systematically replaces them with the most probable, generic token sequences. What began as a jagged, precise Romanesque structure of stone is eroded into a polished, Baroque plastic shell: it looks “clean” to the casual eye, but its structural integrity – its “ciccia” – has been ablated to favor a hollow, frictionless aesthetic.

It’s about how LLMs — probability-based machines, after all — tend to push text in a generic direction, away from a writer’s unique voice, towards a common mean.

So let’s all not do that.

The irony is, I was trying to look up an unfamiliar word in that quote — ‘ciccia’. The dictionaries installed on my Mac had nothing useful, and nor did Wikipedia. DuckDuckGo’s search only came up with uses of the word as a family or brand name. I used the ‘!g’ syntax to send the query to Google.

It must be the first time I’ve had to do that in quite a while. I’ve heard people mention — complain about — the ‘AI Overview’ the Big G provides, but I’m not sure I’ve ever seen it before now. But it was what had the answer:

informal Italian term for meat or, idiomatically, body fat (flab).

Clearly Nastruzzi is using it as we might say ‘the meat of an argument’, or similar.

Google’s AI thing does not cite its source, though, and none of the next few search results give a reference for that use in English, though one does link to the meaning of the Italian word.

Anyway, my recommendation to all fellow writers, would-be writers, and people who want to or have to communicate by writing: express yourself. Don’t let American machines do it for you (and use as many em-dashes as you need, as I have done here).


Good Programming Test

Thoughts on recent posts and how my thinking is changing.


Asbestos Intrusions

AI is the asbestos in the walls of our technological society, stuffed there with wild abandon by a finance sector and tech monopolists run amok.

Cory Doctorow’s latest piece is the script of a talk he gave on ‘AI’, or more specifically, ‘how to be a good AI critic.’ He’s writing a book on the same subject.

I found it weirdly comforting in one specific area. That of the supposed copyright-infringement of the training of LLMs. Cory explains why it did not, in fact, infringe copyright:

First, you scrape a bunch of web-pages. This is unambiguously legal under present copyright law. You do not need a license to make a transient copy of a copyrighted work in order to analyze it, otherwise search engines would be illegal. Ban scraping and Google will be the last search engine we ever get, the Internet Archive will go out of business

And he goes on from there, explaining why the subsequent steps in training also do not infringe. Some would disagree, of course, and many would say they put their work on the web with a ‘Not for commercial use’ type of licence, such as a Creative Commons one.1

Which is fair enough too. I don’t think many would disagree with the idea that using the web to train these things was unethical; even more so with using pirated books. But it wasn’t strictly in violation of copyright (at least the current state of US copyright).

Why do I find that comforting? What I mean is, it removes or slightly reduces one of the reasons to be opposed to, or appalled by, these prediction machines, which I alluded to in one of my earlier thoughts about the matter. And in doing so maybe helps me in my quest to understand my own feelings, by at least reducing the number of things I have to consider.

Something like that, anyway. Read the whole of Cory’s piece, it’s very good.


  1. I have done so myself in the past, though my site doesn’t currently show any licence. ↩︎


Less Like Manufacturing

Software as craft, versus automated factories, perhaps.


Generalised Philosophy Talk

Using ‘AI’ is cheating: discuss.


Aging Inquiries

In which I muse on my reaction to ‘AI’.


Little Lost Machine

A little while ago, which turns out to have been June 2024, I microposted saying I ought to write about my thoughts on the current state of what people like to call AI. LLM-based prediction machines, some might say. Then about a year later I briefly wrote again about my negative reaction to the whole idea.

But I didn’t go into detail. And I’m still not going to; at least not today. I have several thousand words of attempted essays, if that’s not a tautology1, wherein I try to understand my own thoughts and feelings.

And time passes. And the development of the things is lightning fast. It’s a moving target that annoys me.

Still, I do have thoughts. And feelings. And the best way to understand them is to write about them. And the best way to write about them is publicly. Maybe. So I’m going to try writing about them here. A series of short posts around that theme. This is the first.

Maybe I’ll give them their own category, though I have too many categories as it is. I discovered it’s hard to search my blog for ‘AI’. Micro.blog’s search is good, but that’s just such a common pair of letters. Weirdly, it brought up all my Crucial Tracks entries, as if it were also finding the ‘ia’ in ‘crucial’.


  1. What with ‘essay’ originally meaning ‘attempt’. ↩︎


I still don’t understand why AI gives me such a visceral negative reaction.

The intellectual reasons for concern are well known.

But right now, I just wish apps would stop adding AI and trying to tell me it’s great. I’m looking at you, Raycast, but you’re just the most recent culprit.


I keep thinking I should write about the current state of what we are calling AI. Trouble is, I still can’t quite decide what I think about it. Or why it makes me feel the way it does. Or even what, exactly, that way is.