5 Comments

Super interesting! Just a small thing - what happened with the alignment of the 3 tables? Pretty much impossible to trace what is what. The source paper doesn't have this issue. Love your article - happy to fix the diagrams for you if you'd like to replace them.


This is the second time I've seen the claim that LLMs "hallucinate". The word choice is interesting - they routinely create text that includes untruths, possibly at a rate approaching a human pathological liar. But they don't have senses, and so aren't capable of hallucination.

The first time I saw it, I thought an engineer was being tactful - and just a bit humorous - describing how they filtered a chatbot's recommendations to include only things that actually exist.

But your use of the same term suggests this is becoming the standard way to describe LLMs producing falsehoods, particularly falsehoods lacking any grounding in reality (i.e., the false statement doesn't appear in their training data). I'm curious where the term came from.
