The Scientific Association for the Study of Time in Physics and Cosmology is very honored to present Stuart Hameroff, M.D., anesthesiologist and Professor in the Departments of Anesthesiology and Psychology, and Director of the Center for Consciousness Studies at Banner-University Medical Center, The University of Arizona, as the fall speaker for the SASTPC Speaker Series Free Public Lectures.
This year, debates in physics circles took a worrying turn. Faced with difficulties in applying fundamental theories to the observed Universe, some researchers called for a change in how theoretical physics is done. They began to argue — explicitly — that if a theory is sufficiently elegant and explanatory, it need not be tested experimentally, breaking with centuries of philosophical tradition of defining scientific knowledge as empirical. We disagree. As the philosopher of science Karl Popper argued: a theory must be falsifiable to be scientific.
Chief among the ‘elegance will suffice’ advocates are some string theorists. Because string theory is supposedly the ‘only game in town’ capable of unifying the four fundamental forces, they believe that it must contain a grain of truth even though it relies on extra dimensions that we can never observe. Some cosmologists, too, are seeking to abandon experimental verification of grand hypotheses that invoke imperceptible domains such as the kaleidoscopic multiverse (comprising myriad universes), the ‘many worlds’ version of quantum reality (in which observations spawn parallel branches of reality) and pre-Big Bang concepts.
These unprovable hypotheses are quite different from those that relate directly to the real world and that are testable through observations — such as the standard model of particle physics and the existence of dark matter and dark energy. As we see it, theoretical physics risks becoming a no-man’s land between mathematics, physics and philosophy that does not truly meet the requirements of any.
The issue of testability has been lurking for a decade. String theory and multiverse theory have been criticized in popular books and articles, including some by one of us (G.E.). In March, theorist Paul Steinhardt wrote in this journal that the theory of inflationary cosmology is no longer scientific because it is so flexible that it can accommodate any observational result. Theorist and philosopher Richard Dawid and cosmologist Sean Carroll have countered those criticisms with a philosophical case to weaken the testability requirement for fundamental physics.
We applaud the fact that Dawid, Carroll and other physicists have brought the problem out into the open. But the drastic step that they are advocating needs careful debate. This battle for the heart and soul of physics is opening up at a time when scientific results — in topics from climate change to the theory of evolution — are being questioned by some politicians and religious fundamentalists. Potential damage to public confidence in science and to the nature of fundamental physics needs to be contained by deeper dialogue between scientists and philosophers.
String theory is an elaborate proposal for how minuscule strings (one-dimensional space entities) and membranes (higher-dimensional extensions) existing in higher-dimensional spaces underlie all of physics. The higher dimensions are wound so tightly that they are too small to observe at energies accessible through collisions in any practicable future particle detector.
Some aspects of string theory can be tested experimentally in principle. For example, a hypothesized symmetry between fermions and bosons central to string theory — supersymmetry — predicts that each kind of particle has an as-yet-unseen partner. No such partners have yet been detected by the Large Hadron Collider at CERN, Europe’s particle-physics laboratory near Geneva, Switzerland, limiting the range of energies at which supersymmetry might exist. If these partners continue to elude detection, then we may never know whether they exist. Proponents could always claim that the particles’ masses are higher than the energies probed.
Dawid argues that the veracity of string theory can be established through philosophical and probabilistic arguments about the research process. Citing Bayesian analysis, a statistical method for inferring the likelihood that an explanation fits a set of facts, Dawid equates confirmation with the increase of the probability that a theory is true or viable. But that increase of probability can be purely theoretical. Because “no one has found a good alternative” and “theories without alternatives tended to be viable in the past”, he reasons that string theory should be taken to be valid.
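The Bayesian move Dawid makes can be illustrated with a toy calculation (our own sketch, not his, and the probabilities are arbitrary): if the observation “no alternative has been found” is treated as evidence that is more likely when a theory is viable than when it is not, Bayes’ theorem raises the probability of the theory without any experiment being performed.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) from Bayes' theorem."""
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

# Treat "no alternative theory has been found" as evidence E, and
# suppose (arbitrarily) that E is more likely if the theory is viable
# (0.9) than if it is not (0.5).  Belief then rises from 0.20 to
# about 0.31 with no experiment performed at all.
posterior = bayes_update(prior=0.20, p_e_given_h=0.9, p_e_given_not_h=0.5)
```

The mathematics is unimpeachable; the dispute is over whether such purely theoretical considerations should count as evidence in the first place.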
In our opinion, this is moving the goalposts. Instead of belief in a scientific theory increasing when observational evidence arises to support it, he suggests that theoretical discoveries bolster belief. But conclusions arising logically from mathematics need not apply to the real world. Experiments have proved many beautiful and simple theories wrong, from the steady-state theory of cosmology to the SU(5) Grand Unified Theory of particle physics, which aimed to unify the electroweak force and the strong force. The idea that preconceived truths about the world can be inferred beyond established facts (inductivism) was overturned by Popper and other twentieth-century philosophers.
We cannot know that there are no alternative theories. We may not have found them yet. Or the premise might be wrong. There may be no need for an overarching theory of four fundamental forces and particles if gravity, an effect of space-time curvature, differs from the strong, weak and electromagnetic forces that govern particles. And with its many variants, string theory is not even well defined: in our view, it is a promissory note that there might be such a unified theory.
The multiverse is motivated by a puzzle: why fundamental constants of nature, such as the fine-structure constant that characterizes the strength of electromagnetic interactions between particles and the cosmological constant associated with the acceleration of the expansion of the Universe, have values that lie in the small range that allows life to exist. Multiverse theory claims that there are billions of unobservable sister universes out there in which all possible values of these constants can occur. So somewhere there will be a bio-friendly universe like ours, however improbable that is.
Some physicists consider that the multiverse has no challenger as an explanation of many otherwise bizarre coincidences. The low value of the cosmological constant — known to be 120 factors of 10 smaller than the value predicted by quantum field theory — is difficult to explain, for instance.
Earlier this year, championing the multiverse and the many worlds hypothesis, Carroll dismissed Popper’s falsifiability criterion as a “blunt instrument” (go.nature.com/nuj39z). He offered two other requirements: a scientific theory should be “definite” and “empirical”. By definite, Carroll means that the theory says “something clear and unambiguous about how reality functions”. By empirical, he agrees with the customary definition that a theory should be judged a success or failure by its ability to explain the data.
He argues that inaccessible domains can have a “dramatic effect” in our cosmic backyard, explaining why the cosmological constant is so small in the part we see. But in multiverse theory, that explanation could be given no matter what astronomers observe. All possible combinations of cosmological parameters would exist somewhere, and the theory has many variables that can be tweaked. Other theories, such as unimodular gravity, a modified version of Einstein’s general theory of relativity, can also explain why the cosmological constant is not huge.
Some people have devised forms of multiverse theory that are susceptible to tests: physicist Leonard Susskind’s version can be falsified if negative spatial curvature of the Universe is ever demonstrated. But such a finding would prove nothing about the many other versions. Fundamentally, the multiverse explanation relies on string theory, which is as yet unverified, and on speculative mechanisms for realizing different physics in different sister universes. It is not, in our opinion, robust, let alone testable.
The many-worlds theory of quantum reality posed by physicist Hugh Everett is the ultimate quantum multiverse, where quantum probabilities affect the macroscopic. According to Everett, each of Schrödinger’s famous cats, the dead and the live, poisoned or not in its closed box by random radioactive decays, is real in its own universe. Each time you make a choice, even one as mundane as whether to go left or right, an alternative universe pops out of the quantum vacuum to accommodate the other action.
Billions of universes — and of galaxies and copies of each of us — accumulate with no possibility of communication between them or of testing their reality. But if a duplicate self exists in every multiverse domain and there are infinitely many, which is the real ‘me’ that I experience now? Is any version of oneself preferred over any other? How could ‘I’ ever know what the ‘true’ nature of reality is if one self favours the multiverse and another does not?
In our view, cosmologists should heed mathematician David Hilbert’s warning: although infinity is needed to complete mathematics, it occurs nowhere in the physical Universe.
PASS THE TEST
We agree with theoretical physicist Sabine Hossenfelder: postempirical science is an oxymoron (go.nature.com/p3upwp). Theories such as quantum mechanics and relativity turned out well because they made predictions that survived testing. Yet numerous historical examples point to how, in the absence of adequate data, elegant and compelling ideas led researchers in the wrong direction, from Ptolemy’s geocentric theories of the cosmos to Lord Kelvin’s ‘vortex theory’ of the atom and Fred Hoyle’s perpetual steady-state Universe. [Response by S. Hossenfelder: via medium.com]
The consequences of over-claiming the significance of certain theories are profound — the scientific method is at stake (go.nature.com/hh7mm6). To state that a theory is so good that its existence supplants the need for data and testing, in our opinion, risks misleading students and the public as to how science should be done and could open the door for pseudoscientists to claim that their ideas meet similar requirements.
What to do about it? Physicists, philosophers and other scientists should hammer out a new narrative for the scientific method that can deal with the scope of modern physics. In our view, the issue boils down to clarifying one question: what potential observational or experimental evidence is there that would persuade you that the theory is wrong and lead you to abandon it? If there is none, it is not a scientific theory.
Such a case must be made in formal philosophical terms. A conference should be convened next year to take the first steps. People from both sides of the testability debate must be involved.
In the meantime, journal editors and publishers could assign speculative work to other research categories — such as mathematical rather than physical cosmology — according to its potential testability. And the domination of some physics departments and institutes by such activities could be rethought.
The imprimatur of science should be awarded only to a theory that is testable. Only then can we defend science from attack.
Listen to physicists Brian Greene, professor of mathematics and physics at Columbia University, and Lee Smolin, faculty member at the Perimeter Institute for Theoretical Physics, debate the merits of string theory (via NPR):
George Ellis is professor emeritus of applied mathematics at the University of Cape Town, South Africa.
Joe Silk is professor of physics at the Paris Institute of Astrophysics, France, and at Johns Hopkins University in Baltimore, Maryland, USA.
This may seem a very strange theory, but it is now well established; QCD and the electroweak theory together constitute the standard model, with spin-½ leptons and quarks and spin-1 gauge bosons. It has been tested by innumerable experiments over the last forty years and been thoroughly vindicated.
Until recently, however, there was a gap: the Higgs boson. Back in 1964, the existence of this extra particle was seen as a relatively minor feature; the important thing was the mechanism for giving masses to gauge bosons. But twenty years later, it began to assume a special significance as the only remaining piece of the standard-model jigsaw that had not been found.
Finding it was one of the principal goals of the Large Hadron Collider (LHC) at CERN. This is the largest piece of scientific apparatus ever constructed, a precision instrument built in a huge 27-km-long tunnel straddling the French-Swiss border near Geneva — a truly remarkable piece of engineering. Protons are sent round in both directions, accelerated close to the speed of light, and allowed to collide at four crossing points around the ring. At two of these are large detectors, ATLAS and CMS, also marvels of engineering, that over a period of twenty years have been designed, built and operated by huge international teams of physicists and engineers. In 2012 this mammoth effort paid off, with the unequivocal discovery by both teams of the Higgs boson.
So is this the end of the story? Surely not. The standard model can hardly be the last word.
It is marvelously successful, but far from simple. It has something like 20 arbitrary parameters, things like ratios of masses and coupling strengths, that we cannot predict and that seem to have no obvious pattern to them. Moreover, there are many features for which we have no explanation. Why, for both quarks and leptons, are there three generations with very similar properties but wildly varying masses? Why do quarks come in three colours?
One theory is that all these choices are random. There may have been many big bangs, each producing a universe with its own set of parameters. Most of those universes would probably be devoid of life. But for many that is a profoundly unsatisfactory answer; we certainly hoped for a more predictive theory!
On the observational side, there are still many things we cannot explain. What is the nature of the dark matter in the universe? Why does the universe contain more matter than antimatter — leptons and quarks rather than antileptons and antiquarks? Moreover, there are a few points on which the standard model definitely does not agree with observation. In particular, in the standard model the neutrinos are strictly massless. But we now know that they do in fact have non-zero, albeit very tiny, masses. We really have no idea why.
Finally, there is the elephant in the room: gravity, which does not appear at all in the standard model. It is in fact very difficult to reconcile our best theory of gravity, Einstein’s general theory of relativity, with quantum theory. That is a problem we have been struggling with for the best part of a century. There are hopes that string theory, or its more modern realization, M-theory, may successfully unite the two, but that effort has been going on for decades without as yet reaching a conclusion. At any rate it does appear that there is a lot more for theoretical physicists to do!
An ANU mathematician has developed a new way to uncover simple patterns that might underlie apparently complex systems, such as clouds, cracks in materials or the movement of the stock market.
“Fractal Geometry is a new branch of mathematics that describes the world as it is, rather than acting as though it’s made of straight lines and spheres. There are very few straight lines and circles in nature. The shapes you find in nature are rough,” said Michael Barnsley, Professor of Mathematics at ANU.
“Fractal Fourier analysis provides a method to break complicated signals up into a set of well understood building blocks, in a similar way to how conventional Fourier analysis breaks signals up into a set of smooth sine waves,” said Professor Barnsley, who presented his work at the New Directions in Fractal Geometry conference.
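The conventional Fourier analysis Barnsley draws his comparison with can be sketched in a few lines (a generic textbook illustration, not his fractal method, for which no reference implementation is cited here): a signal built from two sine waves is decomposed, and the transform recovers exactly those two building blocks.

```python
import cmath
import math

def dft(xs):
    """Naive discrete Fourier transform: samples -> complex amplitudes."""
    n = len(xs)
    return [sum(x * cmath.exp(-2j * math.pi * k * t / n)
                for t, x in enumerate(xs))
            for k in range(n)]

# A signal made of two smooth building blocks: sine waves at
# 3 and 5 cycles per window, with amplitudes 1.0 and 0.5.
n = 64
signal = [math.sin(2 * math.pi * 3 * t / n)
          + 0.5 * math.sin(2 * math.pi * 5 * t / n)
          for t in range(n)]

# Normalized magnitudes: peaks of 1.0 at bin 3 and 0.5 at bin 5,
# essentially zero everywhere else.
amps = [abs(c) / (n / 2) for c in dft(signal)]
```

Barnsley’s point is that rough, fractal signals are poorly served by these smooth sine-wave building blocks, which is the gap his fractal version aims to fill.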
“There are terrific advances to be made by breaking loose from the thrall of continuity and differentiability…The body is full of repeating branch structures – the breathing system, the blood supply system, the arrangement of skin cells, even cancer is a fractal.”
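The “repeating branch structures” Barnsley mentions can be generated with a tool he is well known for from earlier work, the iterated function system (a standard illustration of fractal geometry, not the fractal Fourier method itself): a handful of affine maps, applied at random, reproduce the famous Barnsley fern.

```python
import random

# Barnsley's fern: four affine maps chosen at random (the "chaos game").
# Each entry is (a, b, c, d, e, f, probability) for the map
# (x, y) -> (a*x + b*y + e, c*x + d*y + f).
MAPS = [
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),  # stem
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),  # successive leaflets
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),  # left leaflet
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),  # right leaflet
]

def fern_points(n, seed=0):
    """Iterate the maps n times, collecting the visited points."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = []
    for _ in range(n):
        r = rng.random()
        acc = 0.0
        for a, b, c, d, e, f, p in MAPS:
            acc += p
            if r <= acc:
                x, y = a * x + b * y + e, c * x + d * y + f
                break
        points.append((x, y))
    return points

pts = fern_points(10_000)  # plotting these points reveals the fern
```

Four simple rules, iterated, yield an endlessly detailed branching shape, which is the sense in which fractal geometry describes rough natural forms with very little information.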
Maybe it’s because we don’t understand time that we keep trying to measure it more accurately. But that desire to pin down the elusive ticking of the clock may soon be the undoing of time as we know it: the next generation of clocks will not tell time in a way that most people understand. The new clock will keep perfect time for 5 billion years.
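That figure can be translated into a fractional stability (a back-of-the-envelope conversion, assuming “perfect time for 5 billion years” means gaining or losing less than one second over that span):

```python
# One second of drift over 5 billion years, expressed as a
# fractional frequency stability (back-of-the-envelope only).
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # Julian year in seconds
span = 5e9 * SECONDS_PER_YEAR           # 5 billion years in seconds
stability = 1.0 / span                  # about 6e-18
```

A clock whose rate is controlled to a few parts in 10^18 is so sensitive that, by general relativity, lifting it a few centimetres in Earth’s gravity measurably changes its tick, which is one reason such clocks stop telling “time” in the everyday sense.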