Super interesting! Just a small thing - what happened with the alignment of the 3 tables? Pretty much impossible to trace what is what. The source paper doesn't have this issue. Love your article; happy to fix the diagrams for you if you care enough to replace them.
Not sure what you mean here. Alignment in what way?
This is the second time I've seen the claim that LLMs "hallucinate". The word choice is interesting - they routinely create text that includes untruths, possibly at a rate approaching that of a human pathological liar. But they don't have senses, and so aren't capable of hallucination.
The first time I saw it, I thought an engineer was being tactful - and just a bit humorous - describing how they filtered a chatbot's recommendations to include only things that actually exist.
But your use of the same term suggests this is becoming the standard way to describe LLMs producing falsehoods, particularly falsehoods lacking any grounding in reality. (I.e. they didn't find the false statement in their training data.) I'm curious where the term came from.
I don't know where the term came from, but LLMs certainly invent convincing stories out of whole cloth. For example, they'll invent URLs and academic citations.
I've encountered that on Wikipedia. Apparently people are having chat bots write articles for Wikipedia. The chat bots include URLs to non-existent pages on real sites, such as the New York Times. The new page patrollers haven't been in the habit of clicking on links in an article's references - they tend to pass the article if the references look good superficially. Oops!
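For anyone curious what "actually clicking the links" could look like in practice, here is a minimal sketch (Python standard library only) of a reference check that flags URLs that don't resolve. The example URLs are hypothetical, and real-world checks would need to handle sites that block automated HEAD requests.

```python
# Minimal sketch: flag cited URLs that do not resolve.
# Assumes plain HTTP reachability is a good-enough proxy for "the page exists";
# some sites reject HEAD requests or bot traffic, so treat failures as "needs a human look".
import urllib.request
import urllib.error

def url_exists(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with a non-error HTTP status."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        # Covers HTTPError (4xx/5xx), DNS failures, timeouts, malformed URLs.
        return False

if __name__ == "__main__":
    # Hypothetical reference URLs pulled from an article draft.
    references = [
        "https://www.nytimes.com/",
        "https://www.nytimes.com/2099/01/01/made-up-story.html",
    ]
    for url in references:
        status = "ok" if url_exists(url) else "BROKEN or possibly fabricated"
        print(f"{status}: {url}")
```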