Convincing Lies about Figs
We are convinced that LLMs are somehow superior to us, that they know more than we ever could. But they don’t. They are just very convincing liars.
You know how I love to start off a piece with a mental model. Today’s is a great one: density. Density is such a useful thinking tool because it does what the best mental models do so well: bridge the physical-to-abstract chasm so elegantly it’s almost impossible not to understand it.
There’s a hint of obviousness in our everyday use of the term, though not always. Intuitively, I think we know what someone means when they describe a text as dense, even if we couldn’t define the concept neatly or in mathematical terms. “I’m reading a book about particle physics at the moment. It’s taking me ages to get through because it’s so dense” is evocative and immediately clear. Other times, lay parlance seems to invert our intuition. For example, at least in British English, you can refer to someone stupid as being ‘a bit dense’, which never made any sense to me. It feels like someone with dense cerebral matter would be extremely intelligent, not dumb.
Truly understanding density is not hard, but requires unpicking some odd and intertwined phenomena.
If you’ve ever been caught out by the question “What weighs more, a kilo of hammers or a kilo of feathers?” you know where this is going. The correct answer, of course, is that they weigh exactly the same - one kilo. The question preys on our innate tendency to confuse, at least superficially, the concepts of weight (or mass) and density. One would normally say that hammers are ‘heavier’ than feathers and one would be perfectly well understood, but that assertion isn’t quite accurate. It would be more correct to say that hammers are denser than feathers, which unpacks to “for a given volume of space, hammers will weigh more than feathers”. Simpler still, just think of a box. A box of hammers is going to weigh more than an identical box of feathers. It all comes down to volume: in the case of hammers, there is more mass (kilos) per unit of volume (box).
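If you like to see things as arithmetic, here’s a minimal sketch of that box comparison (the masses below are made up purely for illustration):

```python
# A toy illustration of volumetric mass density (illustrative numbers, not measurements).
# Density = mass / volume, so for the same box, denser stuff means a heavier box.

def density(mass_kg: float, volume_m3: float) -> float:
    """Return volumetric mass density in kilograms per cubic metre."""
    return mass_kg / volume_m3

BOX_VOLUME_M3 = 0.05  # the same 50-litre box for both

hammers_mass_kg = 40.0   # a box packed with hammers (made-up figure)
feathers_mass_kg = 1.5   # the same box packed with feathers (made-up figure)

print(f"Hammers:  {density(hammers_mass_kg, BOX_VOLUME_M3):.0f} kg/m3")
print(f"Feathers: {density(feathers_mass_kg, BOX_VOLUME_M3):.0f} kg/m3")
# Same box (same volume), very different mass per unit of volume - that's density.
```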
Now you understand density, or at least volumetric mass density, you have a better grounding for understanding the famous phenomenon demonstrated by astronaut David Scott on the surface of the moon in 1971. He showed that a hammer and feather dropped simultaneously will hit the ground at the same time. Go watch the video now - it’s pretty cool. First, it’s worth noting that the moon DOES have gravity, although not as strong as the Earth’s. In the absence of gravity both objects would float off into the ether. Now, the reason that both objects fall at the same rate is that acceleration due to gravitational pull is unaffected by mass (heaviness). Galileo already knew this, and it can also be demonstrated on Earth in a vacuum. Vacuum is key here. A vacuum, whether artificially created for an experiment or on the surface of the moon, is the absence of air. It is, therefore, the air that causes a hammer and feather to fall at different rates here on Earth in normal conditions. Why? Because an object moving through air (or any fluid, for that matter) meets resistance. Think of the times when, as a kid, you’d put your hand out of the window of a moving car and feel the air push you back. Now imagine that instead of your hand, you’d stuck something much larger out of the window, like a beach tennis bat. The resistance would be so large it would probably rip it straight out of your hand. This resistance from moving air is the same force that, harnessed differently, lets planes fly and sailboats circumnavigate the globe; when it simply opposes motion, we call it drag.
Back to falling objects. In the absence of air, the only factor affecting how fast the objects fall is gravity - nothing intrinsic in the objects has any effect. On Earth, where objects fall through air, drag comes into play. The mathematical relationships get a bit hairy here, but they can be boiled down to this: smaller, heavier objects (more dense) fall faster than bigger, lighter objects (less dense). Imagine a sheet of polystyrene weighing 100g vs a roll of coins weighing 100g. The coins would fall much faster.
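And for anyone who wants the ‘hairy’ maths boiled down to something concrete, here’s a rough sketch using the standard terminal-velocity relation; the drag coefficients and areas are guesses, purely to illustrate the polystyrene-versus-coins point:

```python
import math

# Standard terminal-velocity relation: v_t = sqrt(2 * m * g / (rho_air * Cd * A)).
# All object-specific numbers below are rough, illustrative guesses, not measurements.

G = 9.81          # gravitational acceleration, m/s^2
RHO_AIR = 1.225   # density of air at sea level, kg/m^3

def terminal_velocity(mass_kg: float, drag_coeff: float, area_m2: float) -> float:
    """Speed at which drag balances weight, so the falling object stops accelerating."""
    return math.sqrt(2 * mass_kg * G / (RHO_AIR * drag_coeff * area_m2))

# 100 g polystyrene sheet: big area, light for its size (low density)
sheet = terminal_velocity(mass_kg=0.1, drag_coeff=1.2, area_m2=0.25)

# 100 g roll of coins falling end-on: tiny area, heavy for its size (high density)
coins = terminal_velocity(mass_kg=0.1, drag_coeff=0.8, area_m2=0.0005)

print(f"Polystyrene sheet: ~{sheet:.1f} m/s")
print(f"Roll of coins:     ~{coins:.0f} m/s")
# Same mass, wildly different terminal speeds - the denser object wins the race down.
```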
So far, we’ve been talking exclusively about volumetric mass density - or how much mass is packed into a volume, but the concept of density is often abstracted to something more like “how much of a thing, A, can be packed into a space or volume, B”.
Consider the following assertion: “Companies move to Silicon Valley because of the density of tech talent”. Here, A, or the thing being packed in, is ‘tech talent’. B, or the space or volume into which it is being packed, is a geographical space (a city or area). Note then that the density is affected by both quantities: A (how much talent) and B (the space into which it is packed). Mathematically, more talent would mean higher density while more space would mean LESS density. That’s the key mathematical relationship: density is inversely proportional to the space or volume taken up. The formula is A/B (A divided by B, or A over B). Expressed in unit terms, it’s A per B, or colloquially ‘amount of stuff per unit of space’, or concretely for this example “number of potential technical hires per square kilometre”.
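Spelled out as a throwaway sketch (the headcounts and areas are invented; only the A-over-B relationship matters):

```python
# The generalised density recipe: amount of stuff (A) divided by the space it occupies (B).
# The figures below are invented purely to show how A and B pull in opposite directions.

def density(amount_of_stuff: float, space: float) -> float:
    """Generic density: A per unit of B."""
    return amount_of_stuff / space

# "Number of potential technical hires per square kilometre" (hypothetical numbers)
silicon_valley = density(amount_of_stuff=300_000, space=4_800)    # engineers / km^2
somewhere_else = density(amount_of_stuff=30_000, space=10_000)

print(f"Silicon Valley: {silicon_valley:.0f} potential hires per km^2")
print(f"Somewhere else: {somewhere_else:.0f} potential hires per km^2")
# More A pushes density up; more B pushes it down - A is the numerator, B the denominator.
```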
Right, let’s circle back to the very first example we explored: “I’m reading a book about particle physics at the moment. It’s taking me ages to get through because it’s so dense”. This is pretty abstract, but I think we can dissect it now. Let’s describe the A quantity as “complex ideas” and the B quantity as “pages”. The density we are describing then is “complex ideas per page”, which aligns well with our intuition, I think. The author could have reduced the density either by cutting the number of complex ideas or by taking more pages to explain them. Note that just increasing the font size might reduce the physical density of the text on the page, but it wouldn’t make the ideas any easier to digest - the intellectual density stays the same.
So we’ve gone from the pretty concrete and physical ‘volumetric mass density’ to the rather abstract ‘complex ideas per page’ version of density. That’s pretty cool I think, and definitely underlines how these concepts, when treated as mental models, give you a framework for thinking and understanding. I’ve spent so much time on this today because I’m going to argue that this type of thinking tool is becoming more and more important in the age of AI where many of us are outsourcing our thinking to machines that absolutely DO NOT use these frameworks to reason.
I’d love to just copy-paste the original ChatGPT conversation here, but I can’t, because R deleted it. A while ago, R asked me what I thought we could do to protect ourselves from the inevitable AI enslavement that was just around the corner, and I told him to just be nice to them, which he has been doing ever since. This conversation got a bit heated, so he decided it would be best wiped from his phone, lest GPT5 and the boys (Claude and Gemini, more intimidating than their names suggest) come knocking. Anyway, here’s my attempt to recreate the chat as faithfully as possible.
J: Jeez, I’ve eaten like 20 figs this afternoon. That must be a shitload of calories. I wonder how many calories there are in a fig.
R: I’ll ask ChatGPT.
R: How many calories in a fig?
ChGPT: There are approximately 30 calories in an average fig. Dried figs, however, have more calories in them. There are approximately 55 calories in an average fig if it’s dried.
J: OK wow.
R: OK cool.
Me: That’s utter bullshit, give me the phone.
R: Ah come on dude, be nice to it.
Me: Fuck that, give me the phone.
Me: Um, that’s cute and everything, but dehydrating doesn’t magically create calories. There are exactly the same number of calories in an average fig - doesn’t matter whether it’s fresh or dried.
ChGPT: (Thinking)
ChGPT: Of course! You are right! Great observation. You’re so clever and handsome. The number of calories in a fig is the same, irrespective of whether it’s fresh or dried. However, supermarkets tend to select larger figs for drying so that the end result is that average dried figs have more calories, as per my original statement.
J: OK wow.
R: OK cool.
Me: OK, that’s nice, but I think you just made that up to justify your original statement.
R: Dude, be careful, or at least tell it it’s you talking, not me.
Me: Where did you get the evidence for that last statement. Please be specific. I think you made it up.
ChGPT: (Thinking)
ChGPT: (Thought for 1m 30s for a better answer)
ChGPT: Yes, you are right. I have no foundation on which to make that claim. I just assumed it would be correct. Sorry.
Me: Would I be correct in stating that you have confused absolute caloric content with caloric density? Dried figs are, of course, calorically denser than fresh figs because of the significant volume loss caused by the removal of almost all the water.
ChGPT: Absolutely! You are right (and so handsome). That is what I was trying to get at originally. It’s not that dried figs have more calories, it’s that they are more calorically dense. Thanks for clearing that up.
I’m such a nerd; I just love this conversation. To my mind, it’s a perfectly fitting summary of where we are as humans navigating this new era-defining technology and raises at least the following talking points, dilemmas and conundrums:
Should R have used AI to get an answer to this question in the first place? Is consulting an LLM for hard facts a good idea?
Does AI actually have any intelligence? Does it know anything?
Are we already at the stage where unquestioning acceptance of the output of LLMs is making us, both individually and collectively, more stupid?
Do dried figs have more calories than raw figs?
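On that last question, the density framework plus some back-of-the-envelope arithmetic settles it; the weights and calorie counts below are rough, typical figures rather than gospel:

```python
# Back-of-the-envelope on the fig question, using ballpark nutrition figures.
# The exact numbers are approximate; the point is the distinction, not the decimals.

fresh_fig_weight_g = 50.0   # a typical fresh fig (approximate)
fresh_fig_kcal = 37.0       # calories in that fig (approximate)

# Drying removes most of the water: the fig shrinks, but the calories stay put.
dried_fig_weight_g = 15.0          # the same fig after drying (approximate)
dried_fig_kcal = fresh_fig_kcal    # dehydration doesn't create calories

def kcal_per_100g(kcal: float, weight_g: float) -> float:
    """Caloric density expressed the way nutrition labels do it."""
    return 100 * kcal / weight_g

print(f"Fresh: {fresh_fig_kcal:.0f} kcal per fig, "
      f"{kcal_per_100g(fresh_fig_kcal, fresh_fig_weight_g):.0f} kcal per 100 g")
print(f"Dried: {dried_fig_kcal:.0f} kcal per fig, "
      f"{kcal_per_100g(dried_fig_kcal, dried_fig_weight_g):.0f} kcal per 100 g")
# Same calories per fig; far more calories per 100 g once the water is gone.
# That's caloric density, not extra calories.
```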
Slop was not invented by AI, but AI sure has perfected it. The internet, or more specifically the World Wide Web, has been a wasteground of utterly crap content for the best part of a decade. We now have ‘AI slop’ - pages of text, images and videos churned out automatically by generative AI - but once upon a time, in a not-too-distant past, we had good old ‘human slop’ - pages of inane articles, top-10 lists and soulless blog content written with the purpose of extracting a fraction of a cent from your attention by taking a whole page to present a single fact. You’ve seen this a hundred times by now: you’re looking for a simple answer to a simple question like “How many calories are there in a fig?” and the top result is a ‘blog post’ on a site called ‘caloriehealthy.com’ which answers the question in 1,000 words, barely legible through a haze of popping and swerving ads. This, alongside cookie banners, is how we ruined the internet. It’s actually incredible when you step back and see what we’ve done. Imagine if, a decade or so after the invention of airplanes, we had decided that instead of using them for horizon-expanding intercontinental travel and bombing our enemies, we’d drench them in advertising and attempt to extract £12 for a cup of tea out of each poor immobilised sod as they jollied back from a boozy weekend in Mallorca. What a waste.
When our simulation overlords look back over this run, they will surely be blindsided by our next move - an act of such jaw-dropping stupidity that it almost defies belief. We used our hundreds of petabytes of low-quality human slop, a treasure trove of hate, lies and the musings of the deranged, to train our AI models so that they can now produce unlimited variants of their own machine slop on demand. But we didn’t stop there. We dump this AI-generated slop back into the mix and continue training them - now on a soupy hybrid of human and machine slop. And now it’s just slop all the way down. Humans, machines, cats. There’s nothing but slop.
The genius twist to this tale is that AI turned out to be a truly singular invention: a technology that no one wanted, that no one asked for, that leaves a trail of pain and havoc in its wake, yet represents the only viable solution to its own destructive vision of the future. Sort of like killing someone with a piano, then sitting down at the same instrument to play music at their funeral. That’s right, we can no longer easily glean any useful information from the internet because it has collapsed under the weight of its own AI slop, so we’re forced to use AI itself to filter the slop just to get a simple question answered. Which is why no one uses Google any more, and why R asked the fig question to ChatGPT. If you’re an AI executive, that’s checkmate. That’s the holy grail of business models. Use my product to fuck yourself, then use it to unfuck yourself. Big junk food and big pharma rolled into a single, Big Mac-flavoured mega-pill.
At the end of the day, the LLMs make similar conceptual mistakes to those a human might make - like confusing calorie content with calorie density - but there are at least two differences. First, humans can (if they choose to) convey their degree of uncertainty - they can say things like “I think, but I’m not sure” or “let me just check” or “don’t quote me on this”. In my experience, LLMs never say that. This could be solved, possibly, if LLMs displayed ‘degree of certainty’ measurements next to each output - i.e. a little number or symbol to indicate the degree of certainty with which they are speaking. Honestly though, it’s doubtful they even know, and even more doubtful that people would bother to look. Secondly, ChatGPT seems to have assumed, at least amongst the people I see using it, a status equivalent to that of a doctor, priest or lawyer rather than a bloke down the pub. How many conversations now start “Well, ChatGPT says...”? Try starting a conversation with “Well, my brother-in-law says...” and watch people laugh at you. No one believes a thing their brother-in-law says, yet ChatGPT is the new oracle, and people like me who dare question its wisdom, or worse, refuse to use it, are viewed in the same light as vaccine sceptics. It’s COVID all over again, but this time the needle is software, not hardware, and you can’t feel it pierce your skin.
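For what it’s worth, here’s one crude sketch of what that ‘degree of certainty’ badge might look like - purely hypothetical, and it dodges the deeper question of whether token probabilities measure anything like knowing:

```python
import math

# One possible (very crude) version of the 'degree of certainty' badge mooted above:
# turn per-token log-probabilities, which some LLM APIs can expose, into a single label.
# This is a sketch of the idea, not anyone's actual product feature.

def certainty_badge(token_logprobs: list[float]) -> str:
    """Map the average per-token probability of a response to a rough confidence label."""
    if not token_logprobs:
        return "unknown"
    avg_prob = math.exp(sum(token_logprobs) / len(token_logprobs))
    if avg_prob > 0.9:
        return f"high ({avg_prob:.0%})"
    if avg_prob > 0.6:
        return f"medium ({avg_prob:.0%})"
    return f"low ({avg_prob:.0%}) - don't quote me on this"

# Hypothetical log-probabilities for the tokens of "Dried figs have more calories."
example_logprobs = [-0.05, -0.3, -1.2, -2.1, -0.9]
print(certainty_badge(example_logprobs))
```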
The best way I’ve heard the problem stated is that generative AI produces ‘convincing-sounding output’. Not necessarily correct, not necessarily well written. Just convincing. Emily Bender et al. captured this idea in their seminal paper ‘On the Dangers of Stochastic Parrots’.
It turns out that ‘convincing’ is enough. We have been, by and large, convinced. The supermarket explanation sounded very convincing. Maybe they really do pick larger figs for drying. How would you, or even an AI, know the ‘truth’, when the truth is obscured by a long chain of AI induction trained on a regurgitated, decades-deep stew of human and AI slop?
There’s an acceptance in science, where ‘truth’ can never be fully known, that a theory doesn’t need to be 100% correct in order to be useful - it just needs to allow us to do things we couldn’t before, like make a filament glow or send a rocket into orbit. In the case of AI, we’ve basically accepted that the output of an LLM doesn’t need to be true to be useful, just convincing. All the bullshit jobs gone - replaced by ‘convincing output’. All the software developers gone - replaced by ‘convincing output’. That’s it now. No art, no literature, no real code. Just convincing output. Presumably ‘convincing’ is to be understood as ‘sounds like it was produced by a human’. Looking back over the last decade of the World Wide Web, perhaps that’s all those phoney blog articles and content farm site pages ever really were - just convincing output. Text ‘generated’ by a human who couldn’t give a shit, made to sound like text ‘written’ by a human who could give a shit. And now we have machines spewing out text made to sound like it was generated by a human who couldn’t give a shit, which in turn was generated to sound like it was written by a human who could give a shit; a chain if you will: human (gives a shit) > human (doesn’t give a shit) > LLM output.
Honestly though, it’s not about figs any more. Or lies about figs. It’s about lies full stop. Convincing lies. In the industry they are called hallucinations, which sounds funny and mystical, but they are lies. They are lies because we are convinced they are not. We are convinced that these LLMs are somehow superior to us, that they know more than we ever could. But they don’t. They actually know nothing. They are just very convincing liars.

