Can trust survive AI?
“Truth is not a fact to be discovered, but a room to be furnished; and the machines have already moved in.”
The reality homeopathy
I can still smell the grey linoleum of the study hall. I was twelve. It was shortly after the start of the school year, and my friends and I had been summoned for a serious matter: a broken broom...
It had snapped "almost" by accident while we were playing in the janitor’s closet. When we were caught standing frozen next to the splintered wood, we panicked. The lie came out instantly: "It was already broken when we got here."
The next morning, the deputy principal stared at us from behind her desk. She looked tired. I’ve never forgotten what she said: "You aren’t being punished for the broom. You’re being punished for the lie."
That day, I learned a lesson. We hadn't just broken a broom; we had broken trust. Objects can be repaired or replaced. Trust cannot. Once it’s shattered, it becomes the most expensive thing in the world to rebuild.
More than twenty years later, I'm watching AI hijack our innate tendency to trust our senses. It is producing plausible lies at an industrial scale, and in doing so, it is pushing humanity into a new era.
What happens to a society when you can no longer believe anything you read, hear, or see? To understand that, we need to find Brian.
Where is Brian?
If you grew up in France, you likely just whispered: "Brian is in the kitchen."
Brian was the protagonist of a well-known English textbook used by generations of French students. By a strange coincidence, I was twelve when I first met Brian. I was also twelve when we broke that broom. And recently, a twelve-year-old student turned in a ChatGPT-generated essay claiming that Anne Frank debated Martin Luther King Jr. on live television in 1962.
These three moments share the same DNA: a fictional fact. The debate never happened. The broom was not yet broken. And Brian? Brian never existed.
Brian was a linguistic construct designed to teach grammar. Whether he was in the kitchen or in the garden didn't matter to the lesson. Humans don't learn languages by memorizing rules; we learn through example sentences and exposure to patterns.
Brian was just that, a pattern. A placeholder. We were never meant to remember him decades later. No teacher ever tested us on his exact coordinates in the house. Remembering him was a mere side effect of learning English.
This is exactly how AI "knows" things. Its memory is an accident.
The accidental memory
Large Language Models are trained on the colossal mess of the public internet: millions of books, billions of conversations. But their goal isn't to be a library. Their goal is to master the pattern of language.
They digest petabytes of text to understand how words work together. The facts inside those texts? They are collateral damage. They are accidental souvenirs picked up while learning the far more crucial skill of sounding human.
When you ask an AI for the population of Tuvalu or the date of the French Revolution, you aren't asking a database. You are asking a statistical engine to predict the next most likely word. It hasn't been optimized for truth; it has been optimized for linguistic plausibility.
It does what you would do if someone asked you about a half-forgotten school detail: it improvises. Sometimes it’s right. Sometimes Brian ends up in the garden and Anne Frank ends up on a 1960s talk show.
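To make this concrete, here is a toy sketch of next-token prediction. It assumes the open-source `transformers` and `torch` libraries and uses the small GPT-2 model purely as a stand-in for the much larger systems discussed here; it illustrates the mechanism, not anyone's production chatbot.

```python
# Toy sketch of next-token prediction (assumes `transformers` and `torch`;
# "gpt2" is only an illustrative small model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Brian is in the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every word in the vocabulary

# Turn the scores at the last position into probabilities for the NEXT word.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)

for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {p.item():.3f}")

# The output is a ranking of plausible continuations ("kitchen", "garden", ...).
# Nothing in this computation ever asks whether the continuation is true.
```

The model's only question is "what word usually comes next?", which is exactly why its answers can be fluent and wrong at the same time.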
This isn't just text. When an AI generates a photo, it doesn't "see" a face. It calculates the probability of a pixel’s color based on its neighbor. It knows a shadow usually sits under a nose with the same mathematical certainty that "kitchen" follows "in the" in a well-formed sentence.
To blame an AI for hallucinating is like blaming Hugh Laurie (the actor who played Dr. House) for not being able to practice real medicine. He's not a doctor; he has just mastered the art of looking like one.
Can we fix this? Can we teach a machine the concept of "Truth"?
The dilution factor
The current industry standard is a strategy called RAG (Retrieval-Augmented Generation). Instead of letting the AI guess, we give it a Bible: a trusted database of documents. When faced with a factual question, the AI doesn't just wing it. It first retrieves the relevant passages from that knowledge base, then synthesizes an answer from them. It’s the best of both worlds: the AI provides the linguistic elegance and reasoning, while the database anchors the truth.
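To show the shape of the idea rather than any vendor's actual implementation, here is a minimal sketch. The tiny in-memory knowledge base and the naive word-overlap scoring are stand-ins for the document stores and vector search real systems use, and `ask_llm` is a hypothetical placeholder for whichever model you would actually call.

```python
# Minimal RAG sketch: retrieve trusted passages, then ground the prompt in them.
KNOWLEDGE_BASE = [
    "Tuvalu had an estimated population of about 11,000 in 2022.",
    "The French Revolution began in 1789.",
    "Anne Frank died in the Bergen-Belsen camp in early 1945.",
]

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the question (toy scoring)."""
    q_words = set(question.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str, documents: list[str]) -> str:
    """Anchor the model: it may only answer from the retrieved passages."""
    context = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return ("Answer using ONLY the sources below. "
            "If they don't contain the answer, say so.\n"
            f"Sources:\n{context}\n\nQuestion: {question}")

prompt = build_prompt("When did the French Revolution begin?", KNOWLEDGE_BASE)
print(prompt)
# answer = ask_llm(prompt)  # hypothetical call to the generative model
```

Notice that the model is only asked to rephrase what the sources say; the quality of the answer is entirely inherited from the quality of those sources.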
Problem solved? Not quite.
This Bible is only as good as its source. It can be a collection of curated files on your PC or the whole internet; most LLMs now ship with web search as a default feature. But a growing share of new online content is itself generated by AI. We are using the internet to fact-check AI, but the internet is increasingly made of AI.
The signal is being lost in a noisy feedback loop, an ocean of hallucinations. We are entering the era of "Reality Homeopathy".
In homeopathy, you dilute a substance so many times that, statistically, not a single molecule of the original remains. You are left with the "memory" of the substance in water. We are doing the same to our digital world. We are diluting the web with synthetic content, algorithmic truths that look like reality and sound like experience but contain zero milligrams of lived life:
The podcast about heartbreak written by a system that has never loved.
The blog post on leadership written by a model that has never taken a risk.
The "photo" of a war zone made up by a processor that has never felt fear.
Of course, propaganda and fake news aren't new. What's new is the ratio. For the first time, the fictional fact is becoming the baseline, and that is likely to have a strange and dangerous effect on us.
The death of “We”
The danger isn't just that we will believe lies. Humans have believed in myths since the Stone Age. The real danger is that we will stop believing in anything at all.
Truth has a functional role: it allows us to act together. We agree that tobacco causes cancer, so we create dedicated health policies. We agree that nations exist, so we follow laws. Truth is the social glue that allows us to build projects bigger than ourselves.
These AI lies also hit differently. They don't carry a human agenda or a political manifesto; they are simply optimized for the click. They don't aim to convince; they aim to polarize. They aren't designed to lead you anywhere, only to widen the chasm between those who believe and those who don't. It is a world built without intent, yet meticulously crafted to harvest the only currency that matters this decade: your attention.
If we let AI drive our truths, we aren't just losing facts. We are losing the ability to coordinate as a species. We are retreating into a generalized distrust, a fog where everyone has their own plausible reality and no one has a shared one.
Coming up next
The questions raised by this new phenomenon run deep, and I'll dig into them further in a series of future articles exploring the concepts of truth and trust in the AI era:
1. The Reality Homeopathy: Are we diluting human truths?
2. The Placebo Effect: Is a useful lie better than the truth?
3. The Overdose: What happens when we stop believing?
4. The Detox: Can we rebuild trust in a synthetic world?
*This author does not exist; I made it up. Don't trust everything you read ;-)