Automatic Introspection
One of my concerns about LLM-based ‘AI’ systems and their ability to generate code has always been the idea of people creating software they don’t understand. It doesn’t matter too much for an individual’s personal projects, for little utilities, and so on. But in a professional context, such as my work, it borders on the terrifying.
It can be hard enough, even as an experienced programmer, to understand the intent of an unfamiliar codebase — hell, sometimes it’s hard to understand code you wrote yourself a few months or years ago. How much more will that be the case when the code was generated by an automated process that was prompted in turn by someone who didn’t really know what they were doing?
That concern remains, even as I read stories of people getting Claude to write code, and then prompting it to fix the code when it doesn’t work at first. In the long term — maybe the medium term — even bug fixing might be done by prompting.
But a couple of recent experiences have shifted one of my concerns to something perhaps more familiar to non-programmers: people crafting English text that the reader doesn’t really understand.
And maybe that they, the ostensible writers, don’t fully understand either. We’ve lived through decades of business jargon filling our textual brain space, and read endless corporate bulletins where we wondered whether the writers understood what they were trying to say. And now the power of automated obfuscation is available to everyone.