I began my career as a historian of the century following 1660, an era of harsh climatic conditions that often affected political and cultural history. Certain periods, especially the years around 1680 and 1740, stand out as uniquely stressful. Extreme cold led to crop failures and revolts, social crises and apocalyptic movements, high mortality and epidemics, but it also spawned religious revivals and experimentation. If you write history without taking account of such extreme conditions, you are missing a lot of the story. That background gives me an unusual approach to current debates on climate change, and leads me to ask some questions for which I genuinely do not have answers.
I believe strongly in the supremacy of scientific method: science is what scientists do, and if they don’t do it, it’s not real science. Based on that principle, I take very seriously the broad consensus among qualified scientific experts that the world’s temperature is in a serious upward trend, which will have major consequences for most people on the planet—rising sea levels and desertification are two of the obvious impacts. In many religious traditions, activists see campaigns to stem these trends as a moral and theological necessity. Personally, I love the idea of using advanced technology to drive a decisive shift towards renewable energy sources, creating abundant new jobs in the process.
Speaking as a historian, though, I have some problems with defining the limits of our climate consensus, and with how these issues are reported in popular media and political debate.
Climate scientists are usually clear in their definitions, but that precision tends to get lost in popular discourse. To say that global warming is a fact does not, standing alone, mean that we have to accept a particular causation of that trend. We must also acknowledge that the climate has changed quite radically through the millennia, and that is equally beyond dispute. Climate change of some scale has happened, is happening, and will happen, regardless of any human activity. The issue today is identifying and assessing the human role in accelerating that process.
This point comes to mind in popular debate when people who should know better denounce “climate change” as such. See for instance the recent Papal encyclical Laudato Si’, with its much-quoted statement that
Climate change is a global problem with grave implications: environmental, social, economic, political and for the distribution of goods. It represents one of the principal challenges facing humanity in our day.
Well, not exactly. “Climate change” is a fact and a reality, rather like the movement of tectonic plates, or indeed like evolution. Particular forms of climate change may be exceedingly harmful and demand intervention, but that is a critical difference. It’s interesting comparing the English phrase “climate change” with the French phrase that was used at the recent COP21 Paris meetings, the Conférence sur les Changements Climatiques 2015: changes, plural, not change. Do you want to see a world without a changing climate? Look at the Moon.
That then gets to the human contribution to current trends. The basic theory in these matters is straightforward and (rightly) generally accepted. Carbon emissions create a greenhouse effect, which increases planetary temperatures. It should be said, though, that the correlation between emissions and temperatures is none too close: rising temperatures do not track overall levels of emissions with any degree of neatness. That is especially true when we look at the phenomenal growth in emissions from India and China since the 1980s, which should in theory have caused a global temperature increase far above anything we actually see. Sure, the effects might be delayed, but the correlation is still not working too well.
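To make that concrete, here is a minimal sketch, in Python, of what checking such a correlation involves: pair annual emissions with annual temperature anomalies and compute a Pearson coefficient. The figures below are invented placeholders, not observations; real inputs would be the published emissions inventories and temperature records.

```python
# A minimal sketch of the correlation check described above.
# The numbers below are purely illustrative placeholders, not real data;
# real inputs would be observed annual CO2 emissions and temperature anomalies.

def pearson_correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical annual series: emissions rise steadily while
# temperature anomalies wobble around a slow drift.
emissions = [22.0, 23.5, 25.1, 27.0, 29.2, 31.5, 33.9, 36.0]   # GtCO2 per year (invented)
anomalies = [0.40, 0.45, 0.43, 0.47, 0.44, 0.46, 0.45, 0.48]   # deg C vs. a reference period (invented)

print(f"Pearson r = {pearson_correlation(emissions, anomalies):.2f}")

# To probe the "delayed effects" caveat, compare emissions against
# anomalies shifted by a few years before recomputing the coefficient.
lag = 2
print(f"Lagged r  = {pearson_correlation(emissions[:-lag], anomalies[lag:]):.2f}")
```

Shifting one series against the other before recomputing the coefficient, as in the last two lines, is the simplest way to test the delayed-effects caveat mentioned above.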
That disjunction is particularly telling when we look at the very recent era, from 1998 through 2012, when emissions carried on rising sharply but temperature rises were slow or stagnant. This was a hiatus or slowdown in global warming, and it remains controversial. Some recent studies challenge the whole hiatus idea. Others accept the hiatus, but offer different explanations for its cause. Now, the fact that scientists disagree strongly on a topic certainly does not mean that the underlying theory is wrong. Arguing and nitpicking is what scientists are meant to do. But that lack of correlation does raise questions about the assumptions on which any policy should proceed.
That also gets us into areas of expertise. Climate and atmospheric scientists are convinced not only that the present warming trend is happening, but also that it is catastrophic and unprecedented. That belief causes some bemusement to historians and archaeologists, who are very well used to quite dramatic climate changes through history, notably the Medieval Warm Period and the succeeding Little Ice Age. That latter era, which prevailed from the 14th century through the 19th, is a well-studied and universally acknowledged fact, and its traumatic effects are often cited. The opening years of that era, in the early-to-mid 14th century, included some of the worst social disasters and famines in post-Roman Europe, which were in turn followed by the massacre and persecution of dissidents and minorities—Jews in Europe, Christians in the Middle East, heretics and witches in many parts of the world. A cold and hungry world needed scapegoats.
Contemporary scientists tend to dismiss or underplay these past climate cycles, suggesting for instance that the medieval warm period was confined to Europe. Historians, in their turn, are deeply suspicious, and the evidence they cite is hard to dismiss. Do note also that the very substantial Little Ice Age literature certainly does not stem from cranky “climate deniers,” but is absolutely mainstream among historians. Are we seeing a situation where some “qualified and credentialed scientific experts” stand head to head with the “qualified and credentialed social scientific experts” known as historians?
If in fact the medieval world experienced a warming trend comparable to what we are seeing today, albeit without human intervention, that fact does challenge contemporary assumptions. Ditto for the Little Ice Age, which really and genuinely was a global phenomenon. Incidentally, that era involved a drop in temperature of some 2 degrees Celsius, roughly the same as the rise that is projected for coming decades.
The 2015 Paris Conference declared a target of restricting “the increase in the global average temperature to well below 2°C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5°C above pre-industrial levels.” It’s very important to set a baseline for such efforts, certainly, but what on earth is intended here? Which pre-industrial levels are we talking about? The levels of AD 900, of 1150, of 1350, of 1680, of 1740? All those eras were assuredly pre-industrial, but the levels were significantly different in each of those years. Do they want us to return to the temperatures of the Little Ice Age, and even of the depths of cold in the 1680s? The winter of 1684, for instance, remains the worst ever recorded in England. Or are the bureaucrats aiming to get us back to the warmer medieval period, around 1150?
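To see why the choice of baseline matters in purely arithmetical terms, here is a toy sketch; the era-by-era offsets are invented for illustration, not paleoclimate reconstructions. The same “2°C above pre-industrial” ceiling lands at a different absolute level depending on which pre-industrial era supplies the baseline.

```python
# Toy illustration of the baseline problem. The offsets below are invented
# placeholders, NOT paleoclimate reconstructions: each is a hypothetical
# temperature (deg C) of that era relative to an arbitrary reference.
hypothetical_baselines = {
    "AD 900":  +0.2,   # warmer medieval era (invented value)
    "AD 1150": +0.3,
    "AD 1350": -0.1,
    "AD 1680": -0.6,   # depths of the Little Ice Age (invented value)
    "AD 1740": -0.4,
}

PARIS_CEILING = 2.0  # degrees C above "pre-industrial", whichever era that means

for era, baseline in hypothetical_baselines.items():
    ceiling = baseline + PARIS_CEILING
    print(f"Baseline {era}: implied ceiling {ceiling:+.1f} deg C vs. the reference")
```

Even with made-up numbers, the spread between a medieval baseline and a Little Ice Age baseline is larger than the 0.5°C gap between the two Paris targets, which is precisely the point of the question above.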
Does any serious climate scientist claim that “pre-industrial” temperature levels were broadly constant globally for millennia, in fact since the end of the last true Ice Age some 12,000 years ago, and that they only moved seriously upwards at the start of industrialization? Really? And they would try to defend that? In that case, we should just junk the past few decades of writing on the impact of climate on history, undertaken by first-rate scholars.
If pre-industrial temperature levels really varied as much as they actually did, why did the Paris conference so blithely incorporate this meaningless phrase into their final agreement? Did the participants intend “pre-industrial levels” to be roughly equivalent to “the good old days”?
I offer one speculation. Maybe the “Little Ice Age” was the planet’s “new normal,” a natural trend towards a colder world, and we should in theory be living in those conditions today. All that saved us was the post-1800 rise in temperatures caused by massive carbon emissions from an industrializing West. If that’s correct—and I say it without any degree of assurance—then I for one have no wish whatever to return to pre-industrial conditions. Climate scientists, please advise me on that?
Historical approaches are also useful in pointing to the causes of these changes, and thus to the substantial amount of climate change that originates quite independently of human action. One critical factor is solar activity, and historians usually cite the Maunder Minimum. Between 1645 and 1715, sunspot activity virtually ceased altogether, and that cosmic phenomenon coincided neatly with a major cooling on Earth, which then reached the depths of the Little Ice Age. In fact, we can see this era as an acute ice age within the larger ice age. If you point out that correlation does not of itself indicate causation, you would be quite right. But the correlation is worth investigating.
So do I challenge the global warming consensus? Absolutely not. But that does not mean that all critical questions have been satisfactorily answered, and many of them depend on historical research and analysis. Pace the New York Times and large sections of the media, there is no such thing as “established science” that is immune to criticism. If it is “established” beyond criticism and questioning, it’s not science.
Scientific claims must not be taken on faith.
Philip Jenkins is the author of The Many Faces of Christ: The Thousand Year Story of the Survival and Influence of the Lost Gospels. He is distinguished professor of history at Baylor University and serves as co-director for the Program on Historical Studies of Religion in the Institute for Studies of Religion.