I think there’s an analogy to banner blindness in how I think about what LLMs are useful for.
For as long as I’ve been aware of computers, there have been technologies promising to do magical things through automation, most of which never worked well enough to actually be useful:
- Spell checking - actually pretty useful, but it got confused by jargon and domain-specific words, and while the red squiggles were nice, nobody really trusted it to auto-correct. People still routinely produce writing with typos and errors.
- Grammar checking - usually spat out a bunch of nonsense
- Auto summarize - produced word salad that rarely represented the meaning of the original
- No-code tools and high-level languages - perpetually promised to let anyone build software, and never delivered
- Voice recognition - finally good enough to be useful every day, but even now Google Home and Siri, flagship products from market leaders, get things wrong all the time. You’d never trust them to do anything important or risky.
All of this made it pointless to imagine what could happen if you actually relied on those tools. Despite the hype, they didn’t really change anything.
LLMs aren’t magic, but they’re actually good enough at some of these things to be useful. The trouble is that my brain has been trained by decades of bullshit to ignore claims like that.