Cognitive biases and LLM/AI

I've been a moderator on ServerFault (a Q&A site like StackOverflow) for over a decade at this point. I am deeply, intimately familiar with what people in a hurry to solve a problem look for. They want:

  • A how-to explaining exactly how to get from where they are to where they need to be.
  • No faffing about with judgment calls regarding the wisdom of doing so.

This is good for Q&A sites, because not everyone is good at generalizing a solution from 30 other answers that are similar to, but not quite like, their case. They want a step-by-step how-to that will get them out of the hole they're in. The most trusted ServerFault answerers (I was among them once) were good at providing exactly that. We didn't throw RTFM at them, or cite the section and sub-section of the manual they should have read. We explained why it went bad, how to get out of the hole, and perhaps some next steps. The better ones weren't judgmental in describing the foot-guns in play, but described the negative consequences you might not want to experience.

ChatGPT and related technologies act like higher-quality answerers. You ask it, "give me a step-by-step guide to get to where I need to go," and that is exactly what it produces: a step-by-step guide, with no judgment about any foot-guns you're playing with. So nice. You don't have to put up with the judgmental best-practices chorus, and you don't have to hope to land a nice answerer the way you do on ServerFault/StackOverflow.

Whether that solution is actually fit for purpose is not guaranteed, but it sure looks authoritative. This is where the cognitive biases come into play.

Humans looking for help want one thing: someone who knows how to do the thing to help them do the thing. We assess "knows how to do the thing" in many ways, but a time-tested strategy is buried in the definition of the word mansplaining. To save time, these are the qualities we look for as a species:

  • Is reassuring that this will actually work.
  • Is confident in their knowledge.
  • Is willing to go deeper if asked.

These three are extremely easy to fake, and humans have been doing so since humans were invented. The check and balance that Q&A sites like ServerFault and StackOverflow offer is voting on the answers, so wrong answers accumulate a visible signal of their actual validity. I have seen people be confidently wrong time and time again on ServerFault, and get cranky when their lack of actual knowledge gets called out.

Technology like ChatGPT automates producing answers that hit all three points, but lacks both the voting mechanism and any guarantee of actual expertise. In short, ChatGPT and related technologies automate the exploitation of human "sounds like an expert" heuristics. This is an exploit of the human cognitive system, and this is why you're going to start seeing it everywhere. I have complete faith that some startup somewhere is going to come up with the idea of:

  "What if we have generative LLM/AI create answers to questions that are then voted on by 'experts,' and we sell the answers?"

It's coming. May already be here.