Modern science has a whole lot going for it that Ancient Greek or Chinese science did not: advanced technologies for observation and measurement, fast and efficient communication, and well-funded and dedicated institutions for research. It also has, many thinkers have supposed, a superior (if not always flawlessly implemented) ideology, manifested in a concern for objectivity, openness to criticism, and a preference for regimented techniques for discovery, such as randomised, controlled experimentation. I want to add one more item to that list, the innovation that made modern science truly scientific: a certain, highly strategic irrationality.
‘Experiment is the sole judge of scientific “truth”,’ declared the physicist Richard Feynman in 1963. ‘All I’m concerned with is that the theory should predict the results of measurements,’ said Stephen Hawking in 1994. And dipping back a little further in time, we find the 19th-century polymath John Herschel expressing the same thought: ‘To experience we refer, as the only ground of all physical enquiry.’ These are not just personal opinions or propaganda; the principle that only empirical evidence carries weight in scientific argument is widely enforced across the scientific disciplines by scholarly journals, the principal organs of scientific communication. Indeed, it is widely agreed, both in thought and in practice, that science’s exclusive focus on empirical evidence is its greatest strength.
Yet there is more than a whiff of dogmatism about this exclusivity. Feynman, Hawking, Herschel all insist on it: ‘the sole judge’; ‘all I’m concerned with’; ‘the only ground’. Are they, perhaps, protesting too much? What about other considerations widely considered relevant to assessing scientific hypotheses: theoretical elegance, unity, or even philosophical coherence? Except insofar as such qualities make themselves useful in the prediction and explanation of observable phenomena, they are ruled out of scientific debate, declared unpublishable. It is that unpublishability, that censorship, that makes scientific argument unreasonably narrow. It is what constitutes the irrationality of modern science – and yet also what accounts for its unprecedented success.
Before I drag you any further down what might strike you as a rocky, obscure and unpromising path, let me furnish an illustration of scientific censorship in action: the case of beauty.
Driving south to teach at Stanford University in the late 1990s, I would pass over a beam of exceptionally excited electrons travelling east to west through a two-mile tunnel at about the speed of light. I took some visceral satisfaction in thinking that here, under my wheels, big science was on the roll.
Along the same tunnel, science had rolled quite as relentlessly while the summer of love cooled 30 years before. The scientists at the Stanford Linear Accelerator Center were searching, in those days, for underlying structure in the proton. The quark theory developed independently by Murray Gell-Mann and George Zweig in 1964 held that protons, once thought to be fundamental particles, were in fact tightly laced bundles of three smaller particles, or quarks. The high-energy particles racing across the San Francisco peninsula just west of Palo Alto had been recruited to test that theory a few years later, to find out whether the mass of the proton was evenly distributed across its extent or was rather concentrated in three smaller points, as the quark theory predicted. These ‘deep inelastic scattering’ experiments did indeed verify the predictions – the first direct empirical evidence for the existence of the quark – and the experimenters picked up a Nobel Prize for this achievement two decades later in 1990.
However, Gell-Mann and Zweig didn’t develop the theory of quarks in order to explain patterns of scattering such as these. Rather, they were motivated by the desire to reveal, in the proliferation of particles that had been discovered by physicists in the 1940s and ’50s, an underlying order, a hidden harmony, a fundamental mathematical coherence. In showing that a gallimaufry of baryons and mesons – protons, neutrons, pions, kaons and many more – could be generated by pairs and trios of just three basic entities, the up, down and strange quarks, they not only uncovered the existence of the new particles but also vindicated the ancient precept that theoretical beauty is a mark of truth.
Gell-Mann enthusiastically endorsed that principle: we live in a world, he said, where beauty, simplicity and elegance are ‘a chief criterion for the selection of the correct hypothesis’. Many physicists before and after have expressed much the same belief. The British physicist Paul Dirac wrote in 1963: ‘It is more important to have beauty in one’s equations than to have them fit experiment.’ The American Nobel laureate Steven Weinberg in 1992 said: ‘[W]e would not accept any theory as final unless it were beautiful.’ Brian Greene, another physicist and populariser, confirms that this regard for beauty is a significant practical influence on scientific thinking: writing in The Elegant Universe (1999), he said physicists ‘make choices and exercise judgments about the research direction in which to take [a] partially completed theory’ that are ‘founded upon an aesthetic sense – a sense of which theories have an elegance and beauty of structure on par with the world we experience.’
The importance of aesthetic thinking in physics is, I am sure, well known to many readers. Many will also have a sense that beauty is not enough; to support a theory, scientists demand empirical evidence such as that produced by the deep scattering experiments. As Greene puts it: ‘Aesthetic judgments do not arbitrate scientific discourse, however. Ultimately, theories are judged by how they fare when faced with … hard experimental facts.’ What perhaps fewer readers grasp is that aesthetic appeals are not so much confined to a subsidiary role in scientific argument as they are banned outright. That is the force of the ‘only’ in ‘only empirical evidence counts’.
Science seems to say: attending to beauty is exceptionally helpful and we should pay no attention to beauty
The form of this ban is rather subtle. As Greene observes, scientists are in no way discouraged from following the trail of beauty in their private thinking, nor from discussing this strategy in popular talks and books. Where beauty may not venture is in the arena of professional scientific debate: scientific journal articles and conference papers. There, you might find the occasional remark about the elegance of an explanation, but no one builds a case for or against a theory even partly on aesthetic considerations. Though the proponents of the new ‘post-empirical physics’ would like to change this rule, they have yet to gain much ground. The principle stands: every argument must be constructed entirely on the grounds of empirical evidence.
There is something very peculiar about this. On the one hand, beauty is held up as a guiding light. If we take Weinberg at his word, then it is simply impossible for a theory that lacks beauty to be correct. Ugliness is a decisive refutation. On the other hand, this same quality that physicists consider to be so revelatory is wholly excluded from science’s official dialogue. Science seems to be saying: yes, attending to beauty is exceptionally helpful and, also, we should pay no attention to beauty.
This is not only awkward, but irrational in the fullest sense. What philosophers call the ‘principle of total evidence’ mandates that, when deciding an important matter, we should take into account all relevant considerations. In some cases, we might decide that it’s not worth the cost in time or money to pursue a certain line of thought, but we are rationally obliged to give each important line at least a brief inspection. If we believe it to be promising – if it looks like it will supply substantial guidance to solving the problem we’ve set ourselves – then we ought to follow it up, as far as is practically possible.
Suppose you are, for example, buying a used car. You take your prospective purchase to a mechanic for an inspection. You could also pay for a vehicle history report (compiled by an independent company). That will give you further information about the car, but at a price, and you decide that the information is not worth the cost. That is reasonable; you have given the evidence due consideration. Then again, suppose that you are buying from a dealer who, as is customary, has already requested and paid for such a report. Perhaps they even place the report in your hands, in a sealed envelope. Here is valuable information about the vehicle. It is certainly worth your time to spend a few minutes browsing it. But you drive off without looking, tossing it out the window. You have turned down credible evidence that you could have acquired at a negligible price. That is irrational. That is a breach of the principle of total evidence.
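The logic of the car example can be made concrete with a minimal expected-value sketch (the numbers, and the rendering in code, are mine and purely illustrative; nothing here comes from the essay itself):

```python
# Value of free information, with illustrative (invented) numbers.
# Suppose one car in two is a "lemon": buying it loses you 2000,
# while buying a sound car gains you 1000.
P_LEMON = 0.5
GAIN_GOOD, LOSS_LEMON = 1000.0, -2000.0

# Tossing the report out the window: you must commit before learning anything.
ev_buy_blind = P_LEMON * LOSS_LEMON + (1 - P_LEMON) * GAIN_GOOD  # -500.0
ev_blind = max(ev_buy_blind, 0.0)  # walking away is worth 0, so you don't buy

# Reading the free report: you learn the truth first, then buy only if sound.
ev_informed = P_LEMON * max(LOSS_LEMON, 0.0) + (1 - P_LEMON) * max(GAIN_GOOD, 0.0)

value_of_information = ev_informed - ev_blind
print(value_of_information)  # 500.0 — what discarding the sealed envelope costs
```

With these made-up figures, refusing to open the envelope forfeits an expected 500: exactly the kind of freely available, decision-relevant consideration the principle of total evidence says you may not ignore.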
Science’s edict that ‘only evidence counts’ grossly violates the same principle. As we’ve seen, physicists consider the aesthetic evaluation of theories to be highly informative, and there are few or no practical obstacles to using beauty as a measure of a theory’s plausibility: to any thinker who takes the time to understand a theory’s workings, an appraisal of its beauty comes more or less unbidden. They have the envelope, as it were, in their hands. Yet science seizes it and throws it away: it forbids any reference to the content of the envelope when a scientist formulates and publishes arguments for their theories.
What is logically objectionable, let me emphasise, is not that scientific deliberation prefers empirical evidence to aesthetic thinking. There is no need to choose one or the other; you can have both. You can privilege the evidence, the ‘hard facts’, as much as you like, especially as they accumulate and approach incontrovertibility. The principle of total evidence has no problem with that. All it says is that you must also, if you think aesthetics provides some useful intelligence, take beauty into account. But science says you must ignore it completely, regardless of how important you take it to be. Or more exactly, you must ignore it in your professional contributions, your publications. That’s what’s irrational.
Science is supposed to be objective, methodical, clear-headed, sharp. How did it come about, then, that the protocols of scientific publication should run directly counter to the canons of rationality? Could there be some positive benefit, in empirical investigation, to the right kind of narrowness or blindness? There could indeed.
Until 1999, the Stanford Linear Accelerator’s above-ground building was the longest in the United States. That year, it was supplanted by the structures housing the apparatus of the Laser Interferometer Gravitational-wave Observatory (LIGO), long tunnels down which laser beams were fired to detect vibrations that would disclose the existence of gravitational waves, predicted by Albert Einstein’s general theory of relativity but never observed.
Massive objects, according to relativity theory, distort the structure of space itself (more exactly, the structure of ‘spacetime’). Cataclysmic events involving such objects, then – such as the collision and merger of two neutron stars or black holes – ought to create a detectable tremor in the fundament, which would show up as an apparent change in the length of LIGO’s long tubes. (Yet the material from which the tubes are made wouldn’t be moving; rather, the spatial substrate in which the tubes sit would itself quiver, carrying the embedded matter with it.) Interference patterns in laser light would reveal this change.
That is what the theory, with its complex mathematics, has to say. Comprehending the math is, however, much the lesser obstacle to the science of gravitational waves. It is the practice that’s truly forbidding: even the most galactically gargantuan events would produce only the tiniest perturbations in the tubes’ lengths – in the order of 0.0001 of the diameter of a proton.
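To put rough numbers on that figure (these are round published values, not from the essay: LIGO's arms are about 4 km long, and a proton's diameter is about $1.7\times10^{-15}$ m), the displacement and the corresponding fractional change in arm length, the 'strain', come out to approximately:

```latex
\Delta L \approx 10^{-4}\, d_{\text{proton}}
         \approx 10^{-4} \times 1.7\times10^{-15}\,\text{m}
         \approx 2\times10^{-19}\,\text{m},
\qquad
h = \frac{\Delta L}{L}
  \approx \frac{2\times10^{-19}\,\text{m}}{4\times10^{3}\,\text{m}}
  \approx 5\times10^{-23}.
```

A strain of a few parts in $10^{23}$ is near the limit of the upgraded instrument's reported sensitivity; the first detected signal, GW150914, peaked at a strain of roughly $10^{-21}$.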
Registering such minuscule effects is challenging enough in itself, but it is made far more difficult by ambient vibration of the most mundane varieties, such as the noise created by passing traffic, industrial operations and ‘microseismic’ activity in the Earth’s crust, all of which has the same apparent effect on the tubes as gravitational waves. It is like listening for a tree falling in a storm.
They were all over 75 years old and retired. Instead of gold watches, they got a Nobel Prize
Consequently, the story of LIGO is turbulent and long. Surveyed from our present-day perspective, the project’s success might seem as inevitable as it was hard won. However, 30 years before, it looked to be on its last legs. The National Science Foundation (NSF), which had funded LIGO off and on since the 1970s, had begun to doubt whether it could succeed. It was eye-wateringly expensive; the technology was unproven; the politics complicated; the funding for science in general dwindling in the wake of the 1987 stock market crash.
At that point, the LIGO instigators had already spent 15 years – a good third of their careers – developing the laser technique for measuring gravity waves. Their work now seemed like it might go nowhere. A last-gasp effort was made to resurrect the experiment: a new director, a new proposal, a final chance for NSF funding. The money – $400 million – came through, and the construction of those long straight tubes, at two separate sites in Washington state and Louisiana (to minimise the danger of ‘detecting’ a local rumble), got underway.
Even then, the project required extraordinary patience. It ran for eight years, from 2002 to 2010, and detected nothing. It was upgraded to ‘enhanced LIGO’. Still nothing. Then a more substantial upgrade to ‘advanced LIGO’ was installed, taking almost seven years of work (beginning in 2008). This new version, switched on in 2015 – about 50 years after the physicists Rainer Weiss and Kip Thorne began to develop the idea for LIGO in the 1960s – detected the long-sought waves. By that time Weiss, Thorne and the project director Barry Barish were all over 75 years old and retired from their long-time research positions at the Massachusetts Institute of Technology and Caltech. Instead of gold watches, they got a Nobel Prize.
What the history of LIGO and so many other scientific sagas illustrates is the intense commitment and focus needed to do great science. Nature furnishes many clues to its deep structure. But those clues are for the most part barely accessible. What distinguishes Isaac Newton’s theory of gravitation from the Einsteinian theory that replaced it are minute discrepancies, such as the smidgen of a proton diameter by which LIGO’s tubes expand and contract. Differences of this order are supernally difficult to reliably detect: the effort and cost of a successful experiment are fearful. (Another test of Einstein’s theory, the Gravity Probe B experiment, took more than 40 years to complete, and cost about $750 million.)
Titanic physics experiments might be an extreme case, but they are, all the same, illustrations of a universal truth: across the domains of science, the most telling empirical evidence is exasperatingly hard to uncover. Tracking the ways in which genes drive biological development – as they collectively assemble a body with all its limbs, tentacles, eyes and antennae in the correct places – means following the course of a myriad complex chemical reactions at the molecular level. Understanding how the firing of neurons gives rise to sensation, behaviour and thought means unravelling a neural network containing, in the case of the human brain, perhaps 100 trillion synapses (and synapses might not be the only intercellular bridges relevant to cognitive functioning). Modelling an economy or the global climate requires predicting the behaviour of a vast number of interwoven processes at many different levels.
In some cases, the obstacle is size: the relevant quantities or structures are extremely small (or distant or ancient). In some cases, the obstacle is complexity: the parts of the system are profoundly interconnected. In some cases, the obstacle is noise: the processes under observation are constantly buffeted by exterior forces, drowned in environmental static, whose buzzing and clanging must be disentangled from the processes themselves. In many cases, it is some combination of these, or the whole lot. All in all, doing empirical science is complicated, expensive, time-consuming, by turns monotonous and frustrating, and frequently subject to near-total failure.
However exciting the prospect of discerning nature’s hidden truths might be, then, the day-to-day life of a scientist is prone to be not merely daunting but disheartening. The predicament of the LIGO scientists around 1990 – no results, escalating costs, an uncertain future – is rather typical.
What prompts scientists, then, to push on? To turn up at the lab week in and week out to fine-tune an apparatus that still isn’t operating to specification, or to decode noisy, partial statistical data that still isn’t demonstrating any clear effect? To endure the setbacks, the boredom, the unending existential terror that their funding might simply disappear?
If a scientist stops coming to the lab, they will lose their job. There are many things, however, that Weiss, Thorne and the other LIGO creators might have done while continuing to earn their salaries. Indeed, they did do many other things. Weiss sent up balloons to measure cosmic microwave background radiation; Thorne studied the physics of black holes and coauthored a mighty textbook on gravitation.
The puzzlingly irrational narrowness of the rulebook for scientific argument turns out to have an upside
But what could they have done about gravity waves in particular? They could have worked on the mathematics of the waves and taught it to their students (and indeed they did). They could have shown that nothing that was like relativity theory in its most pleasing aspects could be true without engendering something like gravity waves. They could have extolled the beauty of Einsteinian relativity compared with some of the alternative theories of gravity proposed during the 20th century to account for the same phenomena.
None of that, however, would have qualified as a positive argument in favour of the existence of the waves. That is because the canons of science are quite specific as to what does qualify: they say that only empirical evidence counts. If Thorne, Weiss and other gravity wave researchers wanted to make a scientific case for gravity waves – the sort of thing that could be published in a scientific journal under the heading ‘Gravity Waves Really Do Exist!’ (or some fustier equivalent) – then they had to do it using empirical evidence, in science’s rather exacting sense of that term. Something like LIGO was needed. Something like LIGO had to be built.
And so, the puzzlingly irrational narrowness of the rulebook for scientific argument turns out to have an upside, driving determined scientists to produce a great, Nobel Prize-winning experiment.
Such salutary consequences are, I believe, quite general. The rulebook says, in effect, that if you want to make an argument in science – if you want to win an argument in science – then you must undertake complex, involved, sometimes almost interminable projects that most reasonable people, even inveterate truth-seekers, would prefer to avoid. In this way, the narrowness of the rules channels scientific energy and ambition down specific, often rather long and arduous, paths. But it is at the end of just these paths that the most revealing evidence is found, the observable facts that discriminate most clearly between competing theories or that push thinkers, searching for explanations, to devise entirely new ideas.
As the philosopher of science Thomas Kuhn put it in The Structure of Scientific Revolutions (1962), the peculiar institutions of science ‘force scientists to investigate some part of nature in a detail and depth that would otherwise be unimaginable’. It is this otherwise unattainable detail and depth that endow modern science with its formidable truth-finding capacity.
Indeed, I conjecture, modern science arose in the 17th century, in the course of the so-called Scientific Revolution, precisely because it stumbled upon the extraordinary motivating power of ‘only empirical evidence counts’ – a story I tell in my book The Knowledge Machine (2020). For thousands of years, philosophers thinking deeply about nature had valued empirical evidence highly, but they had valued many other avenues of thought in addition: philosophical thought, theological thought and aesthetic thought. Consequently, they were never forced, like Kuhn’s scientists, to throw themselves wholeheartedly into experimentation and observation alone. They watched the way the world worked, but they stopped measuring and started thinking too soon. They missed out on the little details that tell us so much. Only once thinkers’ intellectual horizons were closed off by unreasonable constraints on argument was modern science born.