I just finished re-reading Jared Diamond’s magnum opus, Guns, Germs and Steel. It’s amazing how well the text holds up on re-evaluation; the analysis is deep, and the number of cases he covers is wide enough to convince me that there is real meat to his argument. It did, however, get me thinking about some interesting ways to extend his work. He proposes several in the epilogue, including all of the obvious further data searches and analyses which would need to be run to confirm or refute the hypothesis, and these are surely in the hands of people far more qualified to think about them than I am. But he raised one point obliquely which got me thinking about the one thing I have most trouble with in the argument, and it gave me a thought for how to answer the question.
The question I was thinking about concerned the relative strength of west-Eurasian (esp. European) versus east-Eurasian (esp. Chinese) cultures in the modern world. The biggest reason that this is at issue — and I should note that Diamond admits this quite freely, and discusses it at some length in his final chapter — is that, if one were to look at snapshots of our world at hundred-year intervals over the past two millennia, most of those snapshots would not have given any hint of an incipient European conquest of the globe; far more of them would have led one to bank on Chinese conquest. He hypothesizes that the reason for the relative ultimate strength of Europe has to do with China’s relative unification, which allowed single bad events (such as the political turmoil of the early 15th century which ended large-scale sea travel, whether it be the consequence of an internal political rivalry [as Diamond implies] or of external pressures from Mongolia) to simultaneously affect the entire “Chinese basin;” had an analogous event forced a European nation or subregion out of sea travel, other European nations or regions would quickly have taken up the slack (and probably conquered whoever got out of that business).
This argument seems intuitively reasonable to me, but it’s far less well-supported than the rest of Diamond’s claims, and it has one word in it which really bugs me: “ultimately.” We are not living at the end of history; a snapshot taken a hundred, or two hundred, years from now may reveal a radically different world. Yet I believe that his core arguments will continue to apply. What I would suspect to be true is that the fundamental “guns, germs and steel” arguments imply a very high average strength for both Europe and China compared to the rest of the world. Within those two regions, the increased unification of China leads to it advancing faster when things go well, and regressing farther when things go poorly, compared to Europe. The basic engine of this would be that increased communication, ability to centralize resources, etc., can be a tremendous advantage, but it also means that single solutions will tend to preempt others more effectively, and so in case of a situation which damages some mechanism, there are fewer fallbacks. Essentially, this is similar to the problem of monoculture of crops.
Now this is not an easy question to answer, but I can think of two approaches which could shed light on it. The most direct is an observational approach, and I don’t have much to say about this because coming up with the right thing to look for seems to require a professional’s eye.1 But an interesting other way to look at this might be to steal a trick from other observational sciences like astrophysics and climatology: computer simulation.
Before you jump all over me and say that I’m mad, there’s a method behind this madness. Diamond spent a good chunk of his Epilogue defending the work against a criticism that, because its hypotheses aren’t testable in a lab, they somehow aren’t “scientific.” I actually found this argument a little surprising, because scientists seem perfectly aware of the difference between experimental and observational sciences, and don’t seem to argue much that astrophysics Isn’t Really Science. In fact, I would say that this book is one of the great works of observational science in the past century, a perfect example of how to synthesize a wide variety of observational data to form a coherent hypothesis. So a proposal to use the methods which have so revolutionized other observational sciences in the past few decades doesn’t come entirely out of left field.
A more serious a priori objection would be that history, even when it is as grounded in glottochronology, archaeology, etc., as this work is, is not a numerical science, and so numerical methods such as computer simulation aren’t appropriate. I’m actually the last one to suggest that numerical methods should be applied where inappropriate; I think that much of the adoption of numerical methods in the social sciences in the past few decades has been just plain silly (I’m looking at you, sociology) and doesn’t add any particular insight. But not all computer simulations are aiming at quantitative data: I would be much more interested to see if we could simulate qualitative behaviors.
Moving beyond the objections to the whole notion of computer simulation, there’s also the rather sound objection that human history is rather more complicated than even climate, and we have a hard enough time modeling that. But the whole point of Diamond’s argument, which I find so convincing, is that the overwhelming majority of the “details” of human history are irrelevant to these largest-scale sweeps; a small number of underlying pressures would have produced the same result, independent of whether (say) Hitler died in 1943 or 1945. This suggests that a good model could abstract away a lot of details and still produce a meaningful answer — or alternatively, could average over a tremendously large number of possible histories, each with different micro-details, and verify that the large sweep of certain events remains unchanged.
This sort of simulation is well-established in applied mathematics, where it is known as a “Monte Carlo simulation.”2 Monte Carlos work by running the simulation a tremendous number of times, each time with some large fraction of the parameters set to random values, and then looking at the distribution of ultimate results. Some quantities will turn out to be highly dependent on the particular random events, while other quantities will turn out the same every time.
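As a minimal sketch of the Monte Carlo pattern, here is what “run many randomized histories and look at the distribution of outcomes” looks like in code. Everything in this toy is invented for illustration: the “history” is just two regions accumulating strength year by year, where one region’s slightly better average draw stands in for a geographic endowment, and the large year-to-year noise stands in for all the micro-details of history.

```python
import random

def run_history(rng, years=1000):
    """One toy 'history': two regions accumulate strength year by year.
    All numbers are placeholder assumptions, not calibrated values."""
    favored, other = 0.0, 0.0
    for _ in range(years):
        favored += rng.gauss(1.0, 5.0)  # better endowment: higher mean draw
        other += rng.gauss(0.2, 5.0)    # same noise level, lower mean
    return favored > other

def monte_carlo(n_runs=2000, seed=42):
    """Run many randomized histories and report how often the
    favored region ends up ahead."""
    rng = random.Random(seed)
    wins = sum(run_history(rng) for _ in range(n_runs))
    return wins / n_runs

print(monte_carlo())  # very close to 1.0
```

Despite every individual year being dominated by noise, the thousand-year outcome is nearly deterministic. That is exactly the “stable under averaging” behavior the proposal wants to test for: some outputs of the simulation should look like this, while others should scatter all over the distribution.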
So here is a very crude first proposal: one could model societal flow by finite elements, with the surface of the Earth divided into reasonably small “plaquettes” on which a single society exists at every time tick. A relatively small number of parameters could capture food availability (both ambient and via intensified agriculture), population density, etc. One could simultaneously maintain state to describe every active population group (a group can spread across multiple plaquettes, persist over time, etc., as well as being engulfed by another group) and use this to track “civilizational” parameters such as the degree of specialization possible, and logistical transport capacity. (I strongly suspect that the fates of large-scale wars can be reduced to a very small number of parameters; the distance scales of effective transport seem to perpetually set the size scales of political entities, and military technology levels could be crudely approximated by a single number, which is only of interest when groups which have never met before interact, as people tend to copy one another’s militaries faster than anything else.) Likewise one might describe domesticable crops and large animals, whose number (as Diamond pointed out) is quite manageable, and a few other states such as this. The simulation would run in time ticks, perhaps one tick per year, allowing for plenty of random variations to be injected; e.g., climatic variations would cause significant shifts in food availability per plaquette, and so forth.
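A minimal sketch of the plaquette state and a single time tick might look like the following. This covers only the per-plaquette food and population bookkeeping; population groups, transport, and military parameters would be additional layers of state. The logistic growth rule, the 10% climate jitter, and all parameter values are my own placeholder assumptions, not anything from the book.

```python
import random
from dataclasses import dataclass

@dataclass
class Plaquette:
    food_capacity: float  # ambient plus agricultural food availability
    population: float

def tick(grid, rng, growth=0.05, climate_sigma=0.1):
    """Advance the world one year: a climatic variation jitters each
    plaquette's effective food supply, and its population grows
    logistically toward that supply."""
    for cell in grid:
        capacity = cell.food_capacity * max(0.01, rng.gauss(1.0, climate_sigma))
        cell.population += growth * cell.population * (1.0 - cell.population / capacity)
        cell.population = max(cell.population, 0.0)

# A 100-plaquette world, run for 500 yearly ticks.
rng = random.Random(0)
grid = [Plaquette(food_capacity=rng.uniform(50, 150), population=1.0)
        for _ in range(100)]
for _ in range(500):
    tick(grid, rng)
```

After a few hundred ticks, each plaquette’s population hovers around its (fluctuating) food capacity; the interesting behaviors would come from the layers this sketch omits, where groups interact across plaquettes.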
As with any other computer simulation attempting to replicate reality, perhaps the most important thing is to build up some faith that the simulation is accurately predicting things. The standard way to do this in other observational sciences is to first see if the simulation can predict the present, given the past. Some good sample tests would be:
- Prepare a simulation with the geographic backdrop of the Pacific Ocean, and run it over the time frame of the Polynesian expansion. Diamond repeatedly pointed out the value of this time period for such studies, as it is both a relatively simple environment (not as many people, plagues, etc., flowing in, and all colonization being done by an initially ethnically homogeneous and very small group) and a relatively rich one (a tremendous variety of ecologies being settled, with radically different outcomes). Run it several times and try to predict the state of the various Polynesian islands as of the moment that Europeans first arrived.3
- Prepare simulations of Europe and China, respectively. These are much harder to do in isolation (one would have to specify boundary conditions) but they would be very important sanity checks if one is ultimately interested in their relative interactions, and there are a number of things one would like to see the prediction display: e.g., the relatively high unification of China and the relatively moderate unification of Europe should be stable experimental results. One should even expect that natural geographic barriers such as the Pyrenees should turn into political barriers as well. On the other hand, the political borders of Poland are highly unlikely to stay fixed in successive simulations.
If these tests could run successfully, I would say that I had baseline confidence in the ability of the simulation to work. It wouldn’t be easy to get to that point, but (given that the things which we’re ultimately trying to predict are “farming emerges in New Zealand, but not on the Chathams,” which shouldn’t really be rocket science) this seems feasible to me.
- Run simple planet-scale simulations. Verify, for example, the relative overall strengths of Europe and North America, which was one of Diamond’s driving examples and is probably the single most asymmetric case of cultural interaction in recorded history.
This last experiment is interesting because, if it does not give the expected result – if we see numerous histories in which North America conquered Europe or some such – then we can examine those in more detail, and see what factors made that happen. This should reveal either bugs in the simulation, or interesting potential counterarguments to the whole GGS hypothesis. (My money is on the former.)
- Then, run full planet-scale simulations from c. 8000 BC to the present day. See if certain “core features” such as the spread of the Southwest Asian crop package remain invariant; see if the simulation does, indeed, predict that the world which we live in today is at least reasonably probable, if not certain.
At this point, we can go further and answer my earlier question about China. If my hypothesis above is correct, then one expects that China will be stronger, on the average, whenever circumstances favor centralization of power: either when ambient oscillations (droughts etc.) are relatively weak, or when they are relatively localized. On the other hand, when circumstances favor diversity of approach (e.g., because of frequent unpredictable events, or when rapid independent action can secure quick resource gains) Europe will be stronger, on the average. If we examine a range of input parameters to the simulation, it is likely that China would fare well in some and Europe in others.
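To make the monoculture intuition concrete, here is a deliberately crude toy model; every rate in it (the centralization bonus, the shock, reinvention, and copying probabilities) is an invented placeholder. A “capability” such as large-scale sea travel can be lost to a shock: a unified polity loses it everywhere at once and must slowly reinvent it unaided, while fragmented polities lose it one subregion at a time and quickly copy it back from surviving neighbors.

```python
import random

def run(unified, shock_prob, rng, years=1000, regions=5,
        boost=1.5, reinvent=0.02, copy=0.5):
    """One randomized history; returns accumulated 'progress'."""
    if unified:
        on, progress = True, 0.0
        for _ in range(years):
            if on:
                progress += regions * boost  # centralization bonus
                if rng.random() < shock_prob:
                    on = False               # one event halts the whole basin
            elif rng.random() < reinvent:
                on = True                    # slow, unaided reinvention
        return progress
    on, progress = [True] * regions, 0.0
    for _ in range(years):
        progress += sum(on)
        for i in range(regions):
            if on[i]:
                if rng.random() < shock_prob:
                    on[i] = False            # shock hits one subregion only
            elif any(on) and rng.random() < copy:
                on[i] = True                 # neighbors take up the slack
            elif rng.random() < reinvent:
                on[i] = True
    return progress

def mean_progress(unified, shock_prob, trials=100, seed=0):
    """Monte Carlo average over many randomized histories."""
    rng = random.Random(seed)
    return sum(run(unified, shock_prob, rng) for _ in range(trials)) / trials

for p in (0.005, 0.1):
    print(p, mean_progress(True, p), mean_progress(False, p))
```

With rare shocks, the unified polity’s pooling bonus dominates; with frequent shocks, the fragmented polities’ redundancy wins. That qualitative crossover, rather than any particular number, is the kind of result one would look for across a range of input parameters.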
My guess is that developing a simulation which can do this convincingly would not be an easy project, but it’s doable with a few years’ effort, especially if the team includes people with historiographical expertise of the sort Diamond drew on, as well as people with expertise in numerical simulation.
Now, the ultimate question would be, is this worth it? Are our doubts about Diamond’s model sufficient to warrant this level of work? My answer would be that this is actually much more useful than simply validating a model. By running a large number of simulations, such a system could provide us with a large assortment of interesting counterfactual histories. If we encounter any histories significantly different from our own, we learn something: either there is some fundamental reason why such a counterhistory couldn’t have actually happened (in which case, the discussion is likely to teach us something important about human societies and why it couldn’t have happened) or our history is indeed very subject to chance, and (importantly!) such chances could easily happen again.
A simulation like this would be a tool for exploring history — a way of mapping out courses which we could have taken, and seeing how the landscape of possibilities actually looks. Its ability to test certain hypotheses is certainly a wonderful feature, but in the end, it may only be a side benefit of a much wider project.
1 That being said, I’m still going to ponder it; I just don’t have an answer OTTOMH.
2 The name of the method comes from its dependence on chance, and the mental picture of the random number generator being a (presumably honest) card dealer. The name of this post is actually a sly reference to that most unfortunately-named textbook, “Monte Carlo Methods in Financial Engineering.” On the other hand, given the state of the economy in the past few years and how it got there, that title may not have been an accident.
3 I’d call this the “Silicon experiment;” when calibrating codes for simulating complex crystal structures, one often starts out by simulating crystalline silicon first, as a very simple example which will expose anything that’s broken. NB that the European arrival is a perfect terminus because (a) at that point the history ceases to be so isolated, and (b) there are sufficiently many detailed reports from these first Europeans that we have excellent historical data against which to compare.