Lewis Fry Richardson's weather forecasts changed the world. But could his predictions of war do the same?

Gathering storms: Richardson spent decades collecting data on the weather before shifting his attention to global conflicts

David Berreby
Tuesday 19 August 2014 20:32 BST

The burial detail, which had come for the corpses in the pigpen, was surprised.

The “dead” were getting up and speaking English. Qu’est-ce que c’est? Ah, they were an ambulance crew. British volunteers, in the trenches with the French Army on the Western Front. In the ruins and wreckage near the front lines, they’d found nowhere else to sleep.

The medical corpsmen were all pacifists, serving humanity even as they refused to serve in any military. Still, they lived like the troops. They bunked in rat-infested dugouts, on the floors of shelled buildings, in hay-filled barns. They dived for cover when incoming shells moaned and screamed, and struggled with their masks when the enemy fired gas canisters. At any moment, they could be called to go to the front lines, gather wounded men, and drive – lights off on roads cratered by shells, packed with trucks and troops, with every jostle making the blood-soaked soldiers cry out – to a hospital.

It was the last place in the world to look for a scientist at work. Yet one soft-spoken corpsman, known as “Prof”, filled his downtime with experiments and calculations. “We thought nothing of seeing him wandering about in the small hours checking his instruments,” one of his fellow corpsmen recalled. Once, for example, he’d set a bowl of water on a record-player he had somehow got hold of, cranked up the machine, and measured the radius of the curve on the water’s surface. A rotating fluid, he thought, might serve as a useful model of the atmosphere. (Though his record player wasn’t up to the task, later work would prove him right.)

“Prof” was the English physicist and mathematician Lewis Fry Richardson, for whom science came as naturally as breathing. “It was just the way he looked at the world,” recalls his great-nephew, Lord Julian Hunt. “He was always questioning. Everything was an experiment.” Even at the age of four, recounts his biographer Oliver Ashford in Prophet or Professor? The Life and Work of Lewis Fry Richardson, Lewis had been prone to empiricism: told that putting money in the bank would “make it grow”, he’d buried some coins in a bank of dirt. (Results: Negative.)

In 1912, the now-grown Richardson had reacted to news of the Titanic’s sinking by setting out in a rowing boat with a horn and an umbrella to test how ships might use directed blasts of noise to detect icebergs in fog. (Onlookers might have shaken their heads, but Richardson later won a patent for the fruit of that day’s work.) Nothing – not fellow scientists’ incomprehension nor an artillery bombardment – could dissuade him when, as he once put it, “a beautiful theory held me in its thrall”.

In 1916, two beautiful ideas gripped Richardson’s attention. At the heart of both was the complex interplay of predictability and randomness that is turbulence.

His first idea was rooted in his principles as a Quaker pacifist who believed that “science should be subordinate to morals”. Everyone spoke of this Great War as if it had been a catastrophic surprise. Who could predict a lone assassin in Sarajevo? Or that the belligerents would not find a way to defuse the crisis, as they had before? Or that plans for a quick victory would sour into this stalemate in the trenches? War, Richardson thought, far from being an unforeseeable accident, might instead be the consequence of as-yet-unknown laws operating on measurable facts. Beneath its seemingly random and chaotic course were the regular patterns of these laws. With the right data and the right equations, war might be predictable – and thus preventable. He believed that humanity could some day avoid war as ships could some day avoid hidden icebergs.

Whenever he had a quiet moment, Richardson worked on a long paper on “the mathematical psychology of war”.

But it was Richardson’s other great idea that would come to fruition first, and make him famous. After many decades of obscurity it would come to be appreciated as one of the most significant technologies of the 20th century. At the front lines and in the rest billets where the corps was rotated out for a break every few weeks, Richardson was looking for a way to forecast the weather.

English mathematician, meteorologist and psychologist Lewis Fry Richardson (1881 to 1953) (Getty images)

At the turn of the last century, the notion that the laws of physics could be used to predict weather was a tantalising new idea. The general approach – model the current state of the weather, then apply the laws of physics to calculate its future state – had been described by the pioneering Norwegian meteorologist Vilhelm Bjerknes. In principle, Bjerknes held, good data could be plugged into equations that described changes in air pressure, temperature, density, humidity and wind velocity. In practice, however, the turbulence of the atmosphere made the relationships among these variables so shifty and complicated that the relevant equations could not be solved. The mathematics required to produce even an initial description of the atmosphere over a region was massively difficult.

To get a forecast without stumbling on the impossible calculus of the differential equations, Bjerknes represented atmospheric changes using charts. For example, as the historian Frederik Nebeker explains in Calculating the Weather: Meteorology in the 20th Century, the chart might show more air flowing horizontally into a region than out, allowing the forecaster to infer that the excess air must be escaping upwards as vertical wind.
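The reasoning behind that chart-reading is simple mass bookkeeping, and a short sketch makes it concrete. The figures and the function below are invented for illustration, not Bjerknes’s numbers:

```python
# A sketch of the mass bookkeeping behind the chart method (the figures
# are invented, not Bjerknes's data): whatever air enters the region
# horizontally and does not leave it horizontally must escape vertically.
def vertical_outflow(horizontal_in_kg_s, horizontal_out_kg_s):
    """Net mass flux (kg/s) that must leave the column as vertical wind."""
    return horizontal_in_kg_s - horizontal_out_kg_s

excess = vertical_outflow(5.0e8, 4.2e8)   # example inflow and outflow
print(f"{excess:.1e} kg/s of air must be rising out of the region")
```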

As it happened, Richardson had been confronting similarly difficult equations ever since his graduation from Cambridge in 1903, as he moved restlessly among posts in academia and industry. Analysing stresses in dams and the flow of water through peat, he had developed a different work-around.

Only differential equations, with their infinitely small quantities changing over infinitely small units of time, described the continuous change he wanted to model. But since those equations couldn’t be solved, Richardson reworked the maths to replace the infinitesimals of calculus with discrete measurements occurring at discrete time intervals. Like a series of snapshots of a ball flying through the air, Richardson’s “finite difference” equations only approximated the reality of the constant change they described. But they could be solved, with simple algebra or even arithmetic, and their solutions would be far more precise than any obtained with a chart.
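For readers who like to see the trick in miniature, here is a hedged sketch of the snapshot idea applied to that flying ball – a frictionless projectile with an arbitrary time step, none of it Richardson’s own calculation – showing how the derivatives of calculus become small, repeated arithmetic updates:

```python
# A minimal sketch of the finite-difference idea, using the snapshot
# analogy above: a ball in flight, no air resistance, with an arbitrary
# time step. None of this is Richardson's own calculation; it only shows
# how derivatives become small, repeated arithmetic updates.
dt = 0.1          # seconds between "snapshots"
g = 9.81          # m/s^2, gravitational acceleration
y, v = 0.0, 20.0  # initial height (m) and upward velocity (m/s)
t = 0.0

while y >= 0.0:
    y += v * dt   # position advances by velocity x time step
    v -= g * dt   # velocity drops by gravity x time step
    t += dt

print(f"the ball lands after roughly {t:.1f} s (exact answer: about 4.1 s)")
```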

Richardson’s finite-difference work had been too novel and unfamiliar to win him a research post at a major university. But, in 1913, it helped get him a plum job: directing a research laboratory for Britain’s Meteorological Office, which hoped that Richardson would bring rigorous thinking and practical lab skills to the search for accurate weather forecasts. Here, with a good salary, a house to himself and a lab far from distractions, he would have ample time for research.

The following year, however, the Great War arrived. At age 32, with his important research ongoing, Richardson could have kept to his agreeable job. Yet though his principles would not permit him to serve in the military, he still felt he should take part in the war. “In August 1914,” he later wrote, “I was torn between an intense curiosity to see war at close quarters, an intense objection to killing people, both mixed with ideas of public duty, and doubt as to whether I could endure danger.” Rebuffed when he requested a leave of absence to serve in the ambulance corps, in 1916 he simply quit. A few weeks later, he and his slide rule, notes and instruments were at the front.

And so, for the next few years, Richardson’s theories of war and weather advanced in and around the combat zone. Over six weeks in 1916, with a bale of hay for his desk, he patiently solved equation after equation for hundreds of variables. His aim was to demonstrate his method of “weather prediction by numerical processes” by creating a real forecast.

He decided to do a “hindcast”, so his results could be compared with real weather on a target date in the past. He chose the weather over Central Europe on 20 May 1910—a date for which Bjerknes had already published a trove of data about temperature, humidity, barometric pressure, and wind speed.

Richardson created a map of the atmosphere over the region, split into 25 equal-sized cells with sides of about 125 miles. Each cell was further divided into five layers, with the same mass of air in each layer. Richardson divided the 25 cells into two types: P cells, for which he recorded the atmospheric pressure, moisture and temperature; and M cells, for which he calculated wind speed and direction. He alternated P and M cells on his grid, creating a sort of checkerboard. He could calculate the “missing” data for each cell by looking at the data of the cells adjoining it. Plugging all the available data from 7am into the equations, then patiently solving them for a time six hours later, he arrived at a “forecast” for conditions at 1pm.
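A rough sketch of that checkerboard bookkeeping, with invented values and a simple neighbour-averaging rule standing in for Richardson’s actual equations, might look like this:

```python
# Illustrative sketch (not Richardson's actual equations) of the
# checkerboard layout: P cells hold pressure-type observations, M cells
# hold wind; a "missing" value in a cell can be estimated from the
# values recorded in the adjoining cells. All numbers are invented.
import numpy as np

n = 5  # a 5 x 5 grid, matching the 25 cells described above
cell_type = [["P" if (i + j) % 2 == 0 else "M" for j in range(n)] for i in range(n)]

pressure = np.full((n, n), np.nan)   # pressure known only on P cells
pressure[::2, ::2] = 1012.0          # invented values, in hPa
pressure[1::2, 1::2] = 1008.0

def estimate_from_neighbours(field, i, j):
    """Average whatever observed values sit in the four adjoining cells."""
    vals = [field[a, b] for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
            if 0 <= a < n and 0 <= b < n and not np.isnan(field[a, b])]
    return sum(vals) / len(vals) if vals else float("nan")

# Estimate a pressure for the M cell at row 0, column 1 from its P neighbours.
print(cell_type[0][1], round(estimate_from_neighbours(pressure, 0, 1), 1))
```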

Results: Negative. The recorded weather for the day showed that Richardson’s “forecast” was wrong. Many scientists, then and now, would not publish such a resounding dud of an experiment. But the Quaker and scientist in Richardson valued plain honesty over self-promotion, so when he published Weather Prediction by Numerical Process in 1922, he described his disappointing results in great detail.

A few years ago, Peter Lynch of the Irish Meteorological Service showed that the trouble lay not in the method but in the 1910 observations: the raw data contained small, spurious fluctuations that, fed into the equations unsmoothed, overwhelmed the real signal. With the initial data properly smoothed, Richardson’s model worked.

But there was another source of potential error, which Richardson realised required further research: the turbulence that knocked air out of predictable paths, sending eddies of air up or down or sideways, where they banged into other eddies, passing energy from whirl to whirl. More poetically, in the chapter devoted to turbulence in Weather Prediction by Numerical Process, Richardson explained it this way: “Big whirls have little whirls that feed on their velocity; and little whirls have lesser whirls, and so on, to viscosity – in the molecular sense.”

As his great-nephew explains in his introduction to Richardson’s collected papers, in the first decades of the 20th century meteorologists didn’t have a good grasp of turbulence, especially as it affected movements of air in the first 2km of the atmosphere. Turbulent eddies in this layer are crucial to weather prediction, because they carry heat and moisture up into the higher atmosphere and down toward the surface of the earth, shaping the weather.

Illustration by Josh Cochran

For instance, Richardson had observed how fluctuations in wind speed seemed to depend on the difference in wind speeds at different heights and the difference in temperature at those heights. When ground temperature fell, resulting in a greater difference between ground temperature and temperatures higher up, wind fluctuations became less frequent. He concluded that this was due to buoyancy forces caused by eddies moving through different temperature zones interacting with eddies moving through regions of different wind speeds. He devised an equation to predict the occurrence of turbulence based on a ratio of these effects. As Giles Foden has written in Turbulence, his novel inspired by Richardson’s work, the equation “dramatises the relationship between wind and heat”.

That ratio of heat energy to wind energy, now called the Richardson number, is used today to predict where turbulence will occur in both the atmosphere and the ocean.
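In modern notation the gradient Richardson number is a buoyancy term divided by the square of the wind shear. The sketch below uses invented sample values, and the 0.25 threshold is the standard textbook rule of thumb rather than a figure from Richardson’s own paper:

```python
# A sketch of the gradient Richardson number: a buoyancy (heat) term
# divided by the square of the wind shear. The sample values are
# invented; the 0.25 threshold is the standard textbook rule of thumb,
# not a figure from Richardson's own paper.
G = 9.81  # m/s^2, gravitational acceleration

def richardson_number(theta, d_theta_dz, d_u_dz):
    """Ri = (g / theta) * (dtheta/dz) / (du/dz)^2 for a layer of air."""
    return (G / theta) * d_theta_dz / d_u_dz ** 2

ri = richardson_number(theta=290.0,       # mean potential temperature, K
                       d_theta_dz=0.002,  # temperature gradient, K per metre
                       d_u_dz=0.03)       # wind shear, (m/s) per metre
verdict = "turbulence likely" if ri < 0.25 else "flow likely to stay smooth"
print(f"Ri = {ri:.2f} -> {verdict}")
```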

Richardson’s turbulence research was quickly recognised in the 1920s, but his greatest meteorological insight – his forecasting method – was considered a failure. It was too difficult to do in real time, many thought, and it had not produced an accurate prediction. His proposal languished for decades before technology caught up with it. Only after the advent of computers capable of doing quick calculations did his numerical-process approach become the standard method for forecasting. Today, his technique is the basis for weather forecasts and climate modelling. Richardson often suffered from this sort of scientific “prematurity,” as the mathematician and father of fractal geometry Benoit Mandelbrot put it.

Richardson’s breakthrough work on turbulence in the 1920s was conducted in his spare time, as he supported his wife Dorothy and their three adopted children with a professorship at a teacher-training college. The war had marked him – one of his children recalls Richardson screaming in terror at sudden loud noises, and explaining that he had “shell shock” – and his concern for understanding collective violence was growing. By the end of the 1920s, he had enrolled in university to study psychology. War replaced the weather as his main focus.

For two decades before his death in 1953, Richardson painstakingly collected data about arms races, economic upheavals, insurgencies, revolutions, riots and combat.

“He was all the time compiling statistics about conflicts around the world,” Hunt recalls from boyhood holidays with his great-uncle.

Where others looked at war and saw unpredictable turbulence that maths could not master, Richardson was looking for measurable quantities and inexorable laws that could be modelled with equations. His goal was to build a model of the state of political and economic tensions among nations – with measurements for “war weariness”, “internationality” (a nation’s engagement with other countries, partly derived from figures about its international trade), and “preparedness for war” (a function of economic data and spending on arms and defence).

As with his weather work, he was producing papers, but also working toward another magnum opus. Posthumously published in 1960, The Statistics of Deadly Quarrels would, he hoped, help people set aside the illusions and self-serving jingoism that passed for analysis, and see, as he put it, “what has happened often is likely to happen again, whether we wish it or not”. Any attempt at a science of war would have its flaws and blind spots, but at least it would offer some much-needed clarity.

After all, then as now, most of what passes for analysis of wars and conflict consists of talk about one individual eddy or another in the vast political atmosphere. Changing relationships among world leaders, individual skirmishes and attacks, armistices, and so on are the kind of events that “may be likened to the eddying view of a wind”, Richardson wrote. His theory, by contrast, would offer a way to step out of our local turbulence, and see the larger patterns.

As he had in meteorology, Richardson sought hard data – measurements that would not vary with the politics or passion of the observer. Any interpretation would be clouded by bias. He would ignore disputes about who was a terrorist and who was a freedom fighter, and whether a military action was a fight for freedom or a bandit raid. Instead, he would simply count the dead.

A “deadly quarrel”, Richardson decided, was to be defined as any conflict in which a person’s death was deliberately caused by another. He totted up “deadly quarrels” of every type from 1820 onwards.

He then sorted his “deadly quarrels” the way geologists classify earthquakes, ranking each “quarrel” by the base-10 logarithm of the number of deaths it produced – that is, by the power to which 10 must be raised to give the death toll. In this system a riot that leaves 100 dead has a magnitude of 2 (10² = 100), and a conflict that kills 10 million people has a magnitude of 7 (10⁷ = 10,000,000). Defining “deadly quarrels” on a logarithmic scale also served Richardson’s project to get people thinking about violence without illusion. Like the Richter scale for earthquakes, his logarithmic graphs let the reader see all quarrels, from murders to global war, as a single phenomenon on a single scale.
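In code, the magnitude scale is a one-liner; the death tolls below are invented for illustration:

```python
# Richardson's magnitude scale in code: the base-10 logarithm of the
# death toll. The sample tolls below are invented for illustration.
import math

for deaths in (1, 100, 35_000, 10_000_000):
    print(f"{deaths:>10,} dead -> magnitude {math.log10(deaths):.1f}")
```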

Something interesting emerges from those figures. As the atmosphere is full of small eddies, so humanity experiences many small deadly quarrels, which result in a few fatalities. But now and again come huge storms, which kill millions. These are just the sort of outbreaks, like the world war Richardson had seen for himself, that people think of as surprising. Yet when Richardson plotted the frequency of wars against the number of deaths caused by each one, he found a constant and predictable relationship. On his graphs, the violence obeyed a “power law”: the frequency of quarrels fell off in fixed proportion to a power of their size, so that each step up in magnitude brought a predictable drop in how often such quarrels occurred. In his turbulence work, Richardson had found that such a power law governed the relationship between the rate of diffusion of objects in a turbulent stream and their distance from one another. Now he had found evidence of an underlying law in the supposedly unpredictable realm of politics.
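The test for such a law is whether the counts fall on a straight line when both axes are logarithmic. The sketch below uses invented tallies, not Richardson’s own data, and a simple least-squares fit:

```python
# A sketch of the power-law check (the tallies below are invented, not
# Richardson's data): if the count of quarrels at each magnitude falls
# on a straight line when both axes are logarithmic, frequency is
# proportional to a power of size.
import numpy as np

magnitudes = np.array([3, 4, 5, 6, 7])    # log10 of death toll
counts = np.array([188, 63, 24, 5, 2])    # invented number of quarrels per class

# Fit log10(count) = intercept + slope * magnitude; a good straight-line
# fit is the signature of a power law, with the slope as its exponent.
slope, intercept = np.polyfit(magnitudes, np.log10(counts), 1)
print(f"estimated power-law slope: {slope:.2f}")
```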

Anyone might observe that extremely big wars were much more rare than “deadly quarrels” that killed only a few people. But the power law relationship suggested that the giant wars were just as predictable as the smaller ones. Like giant earthquakes (which also obey a power law), giant wars were not appalling surprises, arising out of unique circumstances. They appeared to be roughly predictable.

This finding was long considered a curiosity in peace research. However, over the past 10 years, a number of researchers have found power-law relationships in modern statistics about violence. Neil Johnson, a physicist at the University of Miami, Michael Spagat, an economist at Royal Holloway, University of London, and their co-authors have found a similar power law in data on conventional war, terrorist attacks and cyber-attacks. These findings offer some suggestions as to when future attacks are most likely to occur, and when it would be most effective to try to prevent them. And they offer a starting point for the quest for an underlying law governing outbreaks of violence once considered too complex and idiosyncratic to predict.

War, Richardson taught us, can be as out of our control as a summer thunderstorm. The least we might do is check the forecast.

David Berreby, the author of ‘Us and Them: The Science of Identity’, writes the Mind Matters blog at Bigthink.com. This is an edited version of an article that first appeared in Issue 15 of Nautilus; www.nautil.us
