The Laws of Physics Are Lying to You (and That’s a Good Thing)
Why The Problem of Induction is Actually a Blessing - Jack Lawrence
This one's a bit different.
Jack Lawrence and I studied philosophy of physics together at Oxford. He’s a YouTube star, TikTok celebrity, quirky improv comedian, and the sort of person who could make a lecture on entropy feel like a one-man play. Naturally, he’s written an essay that only he could write: part existential spiral, part philosophical tour de force, and somehow also a warm hug.
It explores the Problem of Induction, perhaps the most unnerving concept in philosophy, and how Jack spins it into something more hopeful.
If you’ve ever been overwhelmed by the idea that everything ends in heat death (just me?), Jack makes the compelling case that maybe we don’t know enough to be sure. And that not knowing? That might be a blessing.
Please support Jack in his writing, humour and videos.
So grab a tea (or a whiskey). Here’s Jack:
The Problem
There’s a problem at the root of all science. A crack in the foundation. It permeates every claim, prediction and theory. It has been called
The glory of science, and the scandal of philosophy1
Most who have written about it consider it fundamentally unsolvable.
It is called the Problem of Induction.
Astronaut Edward H. White II, pilot on the Gemini-Titan IV, NASA
Attributed to David Hume, the problem of induction makes a simple claim:
Scientific predictions rely on inductive logic; inductive logic does not guarantee its predictions; therefore scientific predictions are inherently unreliable.
You are probably familiar with deductive logic. AKA arguments of the form:
All men are mortal
Socrates is a man
Therefore Socrates is mortal
I.e. if the premises are true, then the conclusion must follow.
Inductive logic is different. This kind of logic takes the following form:
I have repeatedly seen the pattern of A followed by B,
Therefore, I infer that if I see A again, B will probably follow.
The problem with inductive logic is that it’s essentially a guess. It’s an inference based on the idea that a pattern will repeat - that the future will resemble the past.
Here’s a less abstract example of an inductive argument:
It rained yesterday
It rained today
Therefore it will (probably) rain tomorrow
If all you knew about the weather was that it rained yesterday and today, you might be inclined to think that there’s a pretty decent chance of it raining tomorrow. But you’d know that rain wasn’t guaranteed.
Now let's try another that aligns more closely with our expectations, or intuitions, about how the world works. For example:
The laws of physics have been the same in every place we’ve tested them
Therefore the laws of physics are the same everywhere.
Here we have a kind of spatial-induction. Currently, our best scientific theory of what space fundamentally is comes from Einstein’s Theory of General Relativity, which assumes that (barring singularities) space and time conform to uniform laws everywhere. That is to say, the assumption of universality is built into how we typically apply physics, even with our best theory (though of course, there’s nothing to stop us coming up with a theory that asserts there are different laws in different places).
However in terms of experimental validation, we’ve obviously not been everywhere. Nor have we been everywhen.
The Andromeda Galaxy, photographed in 1978 by NASA — We’ve not been here!
Take another example that illustrates a time induction:
We have always measured the speed of light to be the same.
Therefore the speed of light is a constant value - it does not change over time.
While we’ve tested this, it’s also something we assume. You’ve almost certainly heard of:

E = mc²

This is Special Relativity. Einstein derived this equation by assuming that the (two-way) speed of light is constant in all frames of reference.
Alongside the speed of light, there are a collection of physical constants which - whenever we’ve measured them - have always had the same value (within the error ranges of the various experiments).
But who is to say that come tomorrow a constant might, well, not be constant?
We’ve only been doing “science” in its modern form for a few hundred years. Could things change? How do we know they won’t?
Knight to E5
Richard Feynman once likened discovering the laws of physics to learning how to play a game of chess, except you don’t get to read the rulebook - you only get to watch.2
Perhaps in the first few games you observe some consistencies: how pawns can advance two squares on their first move, how the knight can hop over other pieces in an L-shaped pattern, that the bishop can move diagonally, and so on. Perhaps by game ten, you’re fairly confident you know the “laws” of chess.
But then in game eleven, you see someone castle - two pieces of the same colour move in a single turn, and the king swaps places with a rook. You might watch this and go “that’s against the rules of chess, only one piece is allowed to move per turn”. But of course, it’s totally in keeping with the rules of the game; you just hadn’t encountered it yet.
And perhaps at a certain point you realise that you haven’t even been seeing the whole board, maybe you’ve only been watching a particular corner of the board, and there’s more space that you’ve just not encountered yet.
The Chess Players by Thomas Eakins, 1876
We could go even further with this analogy. Suppose the rules of chess were such that if a total of 40 moves had been made with no winner or draw declared, the board shrank, or expanded, or some other dramatic change took place with no prior indication. We can imagine a version of chess that demands the sacrifice of a piece every ten moves past a certain number, or any other interesting mechanism.
You get the idea.
This is why the problem of induction is a problem.
Put simply, science (arguably) uses past evidence to inform our future predictions.
If we don’t have access to all the evidence that we will have in the future (which, given we are time-bound beings, we do not) then surely there has to be some level of guesswork to the patterns we describe. We don’t know what we don’t know.
Some questions arise from this problem. The main one being
How can we trust scientific predictions?
If science is based on inductive logic, and inductive logic is weaker than its deductive counterpart, why should we trust that things will work tomorrow as they do today? Why should we trust that planes will fly, that medicine will work, that the sun will shine?
You might say: well, because these things are consistent - but that in itself is using inductive logic! “They’ve been consistent before, therefore they’ll be consistent again” is another inductive argument.
And those are only probably right.
Is it really a problem though?
The general consensus among the philosophical community is that the problem of induction is indeed a problem - that science does use inductive reasoning. However, some philosophers - most notably Karl Popper - believed that induction wasn’t a problem, because (according to him) induction isn’t necessary for science at all. He rejected induction outright.
Popper argued that science advances not by confirming theories inductively, but rather by proposing hypotheses and attempting to falsify them deductively.
Put another way - there’s an epistemological asymmetry between confirmation and falsification. Hypothetically, in order to prove that, say, Einstein’s Theory of General Relativity is true, one would need to test gravity in all places and at all times. But to disprove it? One needs but a single concrete observation of it not working.
Popper’s approach to science (purportedly) uses evidence not as a confidence-booster but as a confidence-killer, disproving bad ideas and leaving us only with ideas that haven’t yet been disproven. If an idea hasn’t yet been disproven, then at the very least we know it hasn’t been shown to be wrong, which is more than we can say for the alternatives. Thus, according to Popper, we can trust in it.
However, of course Popper’s approach doesn’t guarantee scientific predictions with certainty (nor does he claim it does). It simply makes the case - in so far as you may find it convincing - that dominant scientific theories are our best guess at what will happen, therefore it is rational to trust in them until the evidence shows otherwise.
And new evidence could always show otherwise.
The Most Reliable Science
If you asked physicists to take bets on which area of physics would be unchanged in 100 years’ time, they would almost certainly pick thermodynamics.
There are all sorts of fringe theories about gravity, time, space - you name it. Some of these are made by credible scientists, and are at least given due consideration, albeit ultimately being dismissed for the time being. But thermodynamics? Entropy? Hardly a soul doubts it.
To question the validity of entropy, or thermodynamics, is equivalent to believing in perpetual motion machines (no one has ever made one), which in itself is basically equivalent to believing in magic.
In fact, the idea of entropy is so insidious, so pervasive, that it even persists in science fiction as a hard reality. The genre whose raison d'être is - among other things - to consider the implications of tweaks to the rules of our reality, rarely questions the inevitability of entropy’s reach.
The Last Question3 - one of the greatest sci-fi stories from Asimov - is about the inevitability of entropy. It’s a story of an AI, over eons, trying to solve it - to find a way of not succumbing to entropy.
Science Fiction Quarterly, November 1956
Entropy is the final boss. The grim reaper. The thing death and taxes long to be.
Thermodynamics is the study of the flow of energy - specifically heat, temperature, work, entropy, and other physical properties. It’s the field of physics that explains why a hot cup of tea cools down to room temperature, rather than heating up further. It’s the field of physics that allows us to calculate how efficient an engine is.
And it’s also the field of physics which tells us how the universe is going to end.
When I sat down with my supervisor in the third year of my physics undergrad, he joked to me that he had to be careful when teaching thermodynamics, since it was inherently depressing. When I asked him to clarify, he said that to study the history of thermodynamics was to realise that many of its founders committed suicide - the implication being that their discoveries in the field had shaped how they viewed the world to such a degree that they decided to opt out.
This is a joke often repeated in lectures and textbooks.
From States of Matter by David Goodstein
While there is an element of truth to it, it is misleading - if only for the fact that the vast majority of physicists learn about thermodynamics and do not immediately become suicidal. However, it works as a joke because, equally, few who learn about thermodynamics would describe it as optimistic.
This is primarily due to the second law of thermodynamics. Thermodynamics has four laws (numbered 0, 1, 2, 3, as the zeroth law was added later), but it’s the second one that has legendary status. It can be articulated in a bunch of different ways, but the most basic is arguably:
Heat flows spontaneously from hot regions of matter to cold regions of matter.
That’s it. Heat spreads.
It doesn’t coalesce or collect up together in itself. It goes to where it is not. Heat up one end of a spoon, and eventually the other end will be hot as well. Put a fire on in a cold room, and the fire will generously and lavishly radiate its heat to the furnishings and people inside, rather than suck up what heat remains. In other words, heat is a communist sympathiser, it wants to evenly distribute itself everywhere.
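To make "heat spreads" concrete, here's a toy numerical sketch of my own (not from the essay): heat diffusing along a one-dimensional rod - think of the spoon example above - using a simple explicit finite-difference scheme. The rod length, diffusion coefficient, and step count are all arbitrary illustrative choices.

```python
def diffuse(temps, alpha=0.2, steps=500):
    """Let each cell repeatedly exchange heat with its neighbours.

    alpha is a (dimensionless) diffusion coefficient; values below 0.5
    keep this explicit scheme numerically stable.
    """
    t = list(temps)
    for _ in range(steps):
        # Each cell moves toward the average of its neighbours;
        # end cells simply exchange with their single neighbour.
        t = [
            t[i] + alpha * (t[max(i - 1, 0)] - 2 * t[i] + t[min(i + 1, len(t) - 1)])
            for i in range(len(t))
        ]
    return t

# One end of the "spoon" starts hot, the rest cold.
rod = [100.0] + [0.0] * 9
final = diffuse(rod)

print(abs(sum(final) - sum(rod)) < 1e-6)  # True: total heat is conserved
print(max(final) - min(final) < 1.0)      # True: but it has spread almost evenly
```

The point the second law makes is visible in the output: the total energy never changes, but the gradient - the useful, concentrated part - is gone.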
It is from this simple law - this empirical observation that everyone is intuitively privy to, that the pessimism arises.
Energy - and heat - tend to only be useful when they’re densely packed. When they’re ordered.
Pretty much everything interesting happens when we take energy or heat that’s ordered and distribute it. From fire, to sunlight, to life itself. Energy that is unevenly distributed can be used to do useful things. And useful things almost inevitably even out the distribution of energy.
To repeat: almost every conceivable action is a mechanism by which energy is redistributing itself from concentrated useful places to spread itself out evenly. Life is only possible because of the uneven distribution of energy in our universe. Life is a process by which ordered energy becomes disordered.
There is a finite amount of useful, concentrated or ordered energy (energy itself is conserved).
Eventually, as interesting things happen and processes occur, it becomes evenly distributed, rendering it unusable to perform further work or sustain life.
This is how things end - this is the heat death of the universe - a time when energy is spread uniformly through all matter. There will be no Ragnarök, the dead will not walk. There will be no unification with god. There will be no crescendo.
The universe will instead go quietly. It will eventually reach a state where - as it expands - all energy is uniformly distributed. No other chemical or reactive processes could occur, because there would be no energy gradients. That means no life, no light, nothing of interest. It will be a dark, quiet, lonely place.
Some things to note here - this is not the only prediction as to what the ultimate fate of the universe will be, but it is currently the most widely accepted. Further, if this does come to pass, it will of course happen an unfathomably long time from now.
Still, it’s the best theory we have currently. It’s where we should place our bets.
What the heat death of the universe will probably look like - no light, no sound, no chemical reactions, no stars, no life. Dead matter, floating about endless nothingness.
The Briefest of Asides
The other way of writing the second law, which you’ve probably seen is:
In a closed system, the total entropy always increases or remains constant.
The pop-science quip about Entropy is that it’s the measure of disorder in a system, and from this we can therefore rewrite the second law as
In a closed system, the total amount of disorder always increases, or remains constant.
Which is to say,
Things inevitably, eventually and unavoidably descend into chaos.
It’s worth briefly mentioning: this definition of entropy as a “measure of disorder” is not false per se, but it’s not the whole truth either. Entropy can be defined mathematically in a bunch of different (overlapping and complementary) ways: from the classical definition, whereby entropy is all about the randomness in a system resulting from the reversible/irreversible transfer of heat (which I’ve spoken about above), to the information-theory definition, whereby entropy is all about predictability.
The thing is, regardless of your particular flavour of entropy, one thing is clear. If you fast forward the clocks far enough, what logically follows from the second law of thermodynamics (plus some cosmological observations) is the heat death of the universe.
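The information-theory flavour mentioned above can be sketched in a few lines. This is my own illustration (the distributions are made up): Shannon entropy scores a predictable distribution low and a maximally unpredictable one high.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution:
    H = -sum(p * log2(p)). Zero-probability outcomes contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

concentrated = [0.97, 0.01, 0.01, 0.01]  # one outcome almost certain
uniform = [0.25, 0.25, 0.25, 0.25]       # all outcomes equally likely

print(shannon_entropy(concentrated))  # low: roughly 0.24 bits
print(shannon_entropy(uniform))       # maximal for 4 outcomes: 2.0 bits
```

A fully certain outcome has zero entropy; the uniform distribution - the least predictable - has the most. That "least predictable" state is the information-theoretic cousin of the evenly-spread heat above.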
Also - it’s important to mention that there’s another kind of anti-science induction argument, called the pessimistic meta-induction argument.
This is different from the problem of induction in that it leverages inductive logic. The argument goes:
Every previously held scientific idea/paradigm has been, at some point, proven false, before we arrived at the current idea.
There is nothing special about the current idea, other than it is the current one.
Therefore, at some point, our current ideas will likely be shown to be false.
This is of course a good observation. The cemetery of science is full of dead ideas. Further, most scientists agree that the current collection of best theories about the universe are incomplete, and therefore “wrong”.
That being said, this is not a good reason to dismiss them. We can understand easily why by moving away from a binary understanding of truth.
Was Newton “wrong” about gravity? Kind of - his mechanics break down at high speeds, and he doesn’t account for time dilation or space stretching or any other relativistic effects. But his theory certainly wasn’t outright false. Newtonian mechanics describe almost every type of motion we’re likely to experience on Earth. The theory that supersedes Einstein’s theories will likely show that while Einstein was “wrong” in some capacity, he was right in a lot of ways. Science iterates. It doesn’t claim absolute truth, just the best so far.
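The sense in which Newton was "right enough" can be made quantitative. Here's a quick sketch of my own (the 1 kg mass and the speeds are arbitrary choices) comparing Newtonian kinetic energy with the relativistic formula:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def newtonian_ke(m, v):
    """Classical kinetic energy: (1/2) m v^2."""
    return 0.5 * m * v ** 2

def relativistic_ke(m, v):
    """Relativistic kinetic energy: (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C ** 2

m = 1.0  # kg
for v in (3_000.0, 0.5 * C):  # a fast rocket vs half the speed of light
    n, r = newtonian_ke(m, v), relativistic_ke(m, v)
    print(f"v = {v:.0f} m/s: Newton disagrees with relativity by {abs(r - n) / r:.2%}")
```

At rocket speeds the two formulas agree to better than one part in a billion - which is why Newtonian mechanics still "works" on Earth - while at half the speed of light they disagree by roughly 19%. Newton wasn't simply false; his theory is a limiting case.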
Chaos will Get Them
I feel like it’s easy to breeze over the profundity of what we currently believe the end of the universe will be.
Heat death is contrary to every single story that any religious or mythological tradition has told about how things will end.
It’s almost the most anti-life story one could imagine. An anti-crescendo, the slowest possible, gratuitously empty end. The Big Bang is often leveraged by religious folks to argue that their holy scripture was Right All Along. But the heat death of the universe? To my mind, this should be up there with Evolution and the heliocentric theory as grand displacements of religious doctrine by science.
I felt so vindicated when, in her brilliant book Why Fish Don’t Exist (which I cannot recommend highly enough) Lulu Miller illustrates the tragedy of entropy by opening it with:
Picture the person you love the most. Picture them sitting on the couch, eating cereal, ranting about something totally charming, like how it bothers them when people sign their emails with a single initial instead of taking those four extra keystrokes to just finish the job -
Chaos will get them
Chaos will crack them from the outside - with a falling branch, a speeding car, a bullet - or unravel them from the inside, with the mutiny of their very own cells. Chaos will rot your plants and kill your dog and rust your bike. It will decay your most precious memories, topple your favourite cities, wreck any sanctuary you can ever build.
It’s not if, it’s when. Chaos is the only sure thing in this world. The master that rules us all. My scientist father taught me early that there is no escaping the Second Law of Thermodynamics: entropy is only growing: it can never be diminished, no matter what we do.
Much of her book is about how she personally has wrestled with this problem.
My solution was in fact, the problem of induction itself.
The Upside of Uncertainty
I have occasionally been despondent about the conclusions of thermodynamics, like many have. The idea that decay is inevitable on a cosmic scale. That order is a temporary, fleeting state. That disorder and chaos are the norm. That not only are we each Sisyphus pushing eternal boulders up a hill, we are in fact decaying pushers - the boulder, with each lap of the mountainside, wearing us down a bit further, until only it remains.
The Course of Empire - Desolation, by Thomas Cole, 1836
This despondency has sometimes wormed its way into other forms of deterministic nihilism. I am a determinist. I think the future is written. That things are “fated”.
But at some point in my life I realised that I could not be certain of this.
It struck me that these two thoughts were mutually exclusive - the dead-end certainty of the heat death of the universe, and the problem of induction (or, if you’re a Popperian, the inherent limited certainty in scientific predictions).
Before Einstein came along, gravity was a force. Everyone felt their weight. Then, his theory dropped, and skydivers remarked that they could not feel their own weight in freefall.
For now the future is bound, set. It feels that way to me. But perhaps one day it won’t. Whether it’s a cosmic black swan, a theory of everything, or a step change brought about by the 80th chess move, the future is an undiscovered territory. We have our inklings, but there is possibility within mystery.
Have you ever had a thing happen in your life and thought to yourself “oh, this is really it, this is how the story is written?” This could equally occur during tragedy or triumph - a success, an accident, a happenstance meeting and so on. This could be a narrative around the inevitability of failure, the fairness or unfairness of the universe, and so on.
As storytellers, it is natural that we abstract and try to infer universal laws from what we see, even on a human scale. My propensity has generally been to create negative stories. And these have been reinforced by what I had learned about thermodynamics. After all, I wasn’t imagining that things inevitably decayed - this was built into how the universe worked.
But the problem of induction reminds me that there cannot be certainty. Even with the best tools available to me. There is still a sliver of doubt.
If I’m being honest, and aligning myself with the truth - which the dark whispers of melancholic moods always purport to be in alliance with - the truth is we don’t know. We don’t know what will happen. We have guesses sure, but to act as if there’s no space for anything else is simply false. It’s bad science. We have our bets, but the experiment hasn’t yet been run. There’s much we don’t know, and who knows how much we don’t know we don’t know.
And in that space that induction frees up, I gain back some agency. Not even necessarily hope per se, but curiosity. I don’t know how the story will end, none of us do.
This doesn’t mean that heat death won’t come to pass, or that there is a way to reverse entropy, or that any of the predictions aren’t correct. They may well be. It simply is to point out that even with the most reliable pillar of physics, certainties do not exist. So what good is the narrative I’m telling myself about my life, or the inevitability of history, or anything else? The fact of the matter is this:
The future is not logically guaranteed to resemble the past.
And that is why induction for me isn’t a problem, it’s kind of a blessing.
Astronaut Edward H. White II, pilot on the Gemini-Titan IV, 1965, NASA
Summary
Arguably, much of scientific confirmation and the scientific method relies on inductive logic. That is, an assertion that the future will resemble the past in some way.
While philosophers like Karl Popper have proposed methods to sidestep induction - such as falsification - a fundamental point remains: our theories assert truths about regions of space and time we have not visited. Absolute certainty cannot be found - nor is it claimed to be found - with any scientific method or approach.
Thermodynamics is generally considered the most reliable pillar of physics. It relies on few assumptions, and its assertions are well documented and evidenced. This field of physics states that life is inherently finite, due to the limited amount of ordered energy in the universe that can be used to do useful work.
Life itself transforms ordered energy into disordered energy. In fact, most processes do this, from the formation of stars to the flicker of a candle flame. The logical end result of all of this continuing is that eventually all energy will be disordered. This is called the heat death of the universe. It’s kind of depressing.
The inevitability of entropy cannot be held as a certainty, given the problem of induction - or indeed the observation that there are still unknowns out there. While it may turn out to be true, we cannot be absolutely certain of it. The scientific method allows for no such certainties.
My own melancholic moods have often used the “truth” of entropy to assert that there is something fundamentally destructive and depressing about the nature of the universe and being. The problem of induction reminds me that this “truth” is not absolute. By undermining the “reality” from which I abstract much of my rationale, I give myself some psychological space to talk myself out of those darker places.
1 Specifically, this was said by the philosopher C. D. Broad.
2 From the documentary The Pleasure of Finding Things Out - video clip of said analogy is here.
3 You can read this absolute banger here.