
Martin Rees explains how science might save us

By John Mecklin | December 22, 2022

Board of Sponsors member Martin Rees

During a recent conversation, Martin Rees offered a thumbnail take on biological terrorism, in the process illustrating why he is known as one of the world’s most effective communicators on existential risk. “I worry very much about how we can be sure that some bad actor, as it were, doesn’t secretly develop some very dangerous pathogen and release it,” he said. “It’s unlikely that many people wish to do this. But when the release of a pathogen could lead to a global pandemic, one such person is too many. As I put it, the global village will have its village idiots, but they will have global range.”

To call Rees accomplished on many fronts is to understate by orders of magnitude. A member of the UK House of Lords and the 15th Astronomer Royal, he is based at the University of Cambridge, where he has been Professor of Astronomy and Director of the Institute of Astronomy. He is a Fellow and a former Master of Trinity College, Cambridge. His career includes pioneering research on quasars, black holes, and galaxy formation. He is also a co-founder and leader of Cambridge University’s Centre for the Study of Existential Risk and a long-serving member of the Bulletin’s Board of Sponsors.

Beyond his hundreds of research publications, Lord Rees has written many books aimed at a general readership. I spoke with him earlier this month about his new book, If Science is to Save Us.

Martin Rees: Can I just say, at the start, that my book is not very homogeneous; the first half is concerned with big global issues, and the second half is the more personal perspective of a scientist, and interaction with the scientific community, with the public, with the government and politicians, etc. So the book’s really in two halves, and those who don’t like it say it is a dog’s breakfast; those who like it more will say it’s a smorgasbord.

John Mecklin: I enjoyed both parts of it. But could you quickly for our readers summarize the part at the beginning that is less personal: What was the main point you were trying to make with If Science is to Save Us?

Rees: I was making the point that more and more of the issues which concern us or determine our future have a scientific dimension. And therefore, people need to have a feel for science, to understand what these issues are, and to participate actively in democratic debates. And the scientists themselves have an obligation to ensure that if their discoveries have manifest applications, they should do all they can to ensure that the benign applications are developed, and to issue appropriate warnings, and do all they can to minimize the probability of the downsides. Because the stakes are getting higher.

My main theme in the book, which is something I’ve discussed for a number of years in other fora, is that we are in a state where science has greater potential benefits, but greater potential downsides. And indeed, in our ever more interconnected world, there’s a genuine risk of global catastrophes, which could arise through our collective actions, as we’re seeing in the concerns about climate change and loss of biodiversity. But it could also arise from an engineered pandemic, for instance, which could be generated by ill-intended applications of biology.

Mecklin: What you’re proposing are rather large changes in how things are arranged around the world. And at one point, you mentioned that to deal with these many interlocked global risks, there will need to be—well, I’ll just read the quote: “[N]ations may need to cede sovereignty to more international bodies, like the WHO (World Health Organization).” But the WHO kind of did a poor job with COVID. And some of the other international regulatory bodies have not been huge successes, like the COP series of climate meetings, which have been very slow and not really effective.

Do you think there needs to be some new thinking about what world regulation of these problems looks like?

Rees: I think you’re a bit hard on the WHO; it wasn’t that bad, really. But you’re quite right in saying that these bodies may not be adequate, and I do think we need more. We have the International Atomic Energy Agency, and we have the World Health Organization. But I think we are going to need something to regulate IT [information technology] and the Internet, to regulate energy, and also for other purposes.

And you’re quite right that nations normally don’t take [international regulatory agencies] seriously enough, and they’re ineffective and slow. And I think this leads to another theme of the book, which is: How do we get politicians to prioritize issues which are global and long-term? Politicians tend to prioritize things that affect their immediate constituency, obviously, and which are urgent. If we come to address an issue which is global, and maybe affects people in remote parts of the world more than people locally, then, of course, naturally, politicians won’t put it high on the agenda. And they won’t prioritize making the necessary international discussions effective.

So that’s why one point I make is that the whole public has to be aware of these issues. Because if the politicians know that voters care, then the politicians will be activated and will do more to ensure that the international body is effective. And, of course, one obvious example: you mentioned COP 27 [the climate change conference in Egypt], and meetings like that have not been very effective. But that’s really because politicians don’t prioritize it.

I think, in the context of climate change, there is reason for a bit of good cheer, in that I think the public is more aware of this issue than it was a few years ago. And therefore, the voters take it more seriously, so politicians will. And the reason for that is that I think scientists have done a better job of presenting their work. But more important, there are charismatic figures who have gotten through to a wide public. And I mentioned in my book four rather disparate people who, in their different ways, have had big effects.

The first is Pope Francis, whose encyclical in 2015 was very influential; he got a standing ovation at the UN. And he has a billion followers in Latin America, Africa, and East Asia. And he therefore indirectly helped to forge a consensus at the Paris conference in 2015. So he is one global figure who has raised this issue on the agenda.

A second is David Attenborough, who is British, of course, but I think his influence spreads wider. In making people aware of natural hazards—and particularly loss of biodiversity and pollution of the oceans and things like that—he’s had a big effect.

The third, I would say, is Bill Gates, who is a hugely respected figure who talks about the more technical aspects and how we can, if we try hard enough, deal with these global problems.

And the fourth is Greta Thunberg, who has been an influence on the younger generation and led to welcome campaigning by the young. And of course, we expect the young to be especially committed because they will be alive at the end of the century, when the consequences of mishandling environmental issues will be starker.

That’s a long answer to your question. Sorry about that.

Mecklin: In your book, you mention educating the public, so that they will influence politicians to pay attention to long-term issues, and you mentioned journalism. And you’re talking to a journalist, but I think you would admit that it’s a problematic situation; there are journalists, and there are journalists. There’s a lot of really bad stuff out there, and some good stuff. I’d just like to hear you talk about how you decide how to interface with journalists. And what advice do you have for other scientists on how they should approach this?

Rees: Well, I think scientists should be willing to address journalists and speak directly to the public or write directly to the public. Not all of them have the capacity for this, but those who are able to do it should. But I think at this point you have the public confused by the prevalence of fake news, etc. And I do think that social media has made things more difficult, because in the old days, most of the public got their news and their opinions from people like yourself, established journalists, who would, in general, sort of muffle the crazy stuff and moderate what is said. On the other hand, what happens now is the crazy stuff from extremists on any wing can get directly to the wide public. And of course, people click on that and get sent to something even more extreme.

One thinks that this is a problem of the Trump era, but I think it’s more generic than that, and this is going to make it harder for the public—and indeed, politicians who are not experts—to decide what to believe. I’m not too pessimistic, but I think it’s hard now, because although social media has its upsides, it does have the downside of raising the noise level as it were and, therefore, drowning out the best evidence.

I know scientists who have been government advisors, and they have a hard job getting much traction with politicians in general, for the reasons I mentioned—namely, parties have an urgent and local agenda, and don’t think about these long-term problems. But I think politicians need to be aware that the scientists don’t always know the answers; they give the best answers they can. And we saw during the pandemic how advice firmed up over time; things like whether it’s a good idea to wear masks or not were controversial initially, but there was a consensus later on about when they were useful. And so, I think in those contexts, we expect that politicians should be somewhat skeptical of scientists, but they should have a feel for when they can believe them, and when they can’t. And the scientists, on their side, have to realize that any actual decision—whether it’s on health, on medicine, on the environment, on energy—has a scientific component, but it also has other components, economic and social and political, where the scientists have no special expertise and speak only as citizens.


So I think scientists have to realize that the advice they give to politicians is just one element of the input the politicians need in coming to a decision.

Mecklin: When I was talking to a wise man about some brilliant policy or other, he just looked at me and said, “There is no policy without politics.”

Rees: That is true.

Mecklin: I found it interesting that in talking about the need for better education at all levels, your book specifically mentioned that the humanities should remain part of major research universities. And it seemed like there was an element of nostalgia for the days when you were educated. Why don’t you explain why you think, even today in a major research institution, the humanities are important.

Rees: I think for students who are still undergraduates there should be a broad curriculum. And I think we can learn from the US in this sense, in that your system, where you have minors and majors, allows people to focus on the science and do some humanities as well, or vice versa. That’s actually harder in the UK university system, because we have too much early specialization.

So I think it’s important that people should have a broad perspective and keep their career options open. One problem in the UK, for instance, is that some people drop science at the age of 16, because there’s enforced specialization, even in the last two years of high school. And if they drop science at 16, that forecloses going to a university course and majoring in science. And that’s unfortunate. So that’s just saying that we want to ensure that most people who have any kind of post-18 education, and even more creative education, are exposed to science and humanities.

When we think of politicians themselves, scientists often bemoan the fact that very few of them have a scientific background. I think we’d like a few more of them to have a scientific background. But I’m not sure we want to go quite as far as, say, China and Singapore, where most of the top politicians have expertise in engineering, etc. And I think I say in my book that if there are politicians who have graduate-level expertise, I’d rather have it be in history than in dentistry, for instance, because history obviously gives them a relevant perspective, whereas a single specialized science doesn’t.

And incidentally, I’ve noticed that in the UK, where I’m privileged to be in the House of Lords, which is part of Parliament, the politicians who have the best perspective on science, and the best appreciation of its scope and limits, are not necessarily scientists themselves. They are people with a broad education; they may be journalists, or they may have studied humanities, but they nonetheless understand enough about science to be able to appreciate when it’s important, when it’s reliable, etc. And I think that’s the most important thing—to ensure that the public realize when science is going to be an important contributor to a decision and know who to ask if that’s the case.

Mecklin: In part of your book, you mention the concept of responsible innovation, that there needs to be greater investment in research fostering that kind of innovation. What did you mean by responsible innovation? How do you foster it?

Rees: This is easier to do when there’s a public-private partnership. Let me give two examples. One example is research and development of efficient, clean energy. One thing that I bang on about if I ever have the chance is that countries like yours and mine ought to be expanding their research and development into clean, carbon-free energy and all of the things that go with it: energy storage, cheaper batteries, and smart grids, and things of that kind. And if we do this, then we can achieve net zero [greenhouse gas emissions] by mid-century, which is the goal of our countries.

But more important than that, we can help the Global South to achieve the same thing, because the Global South will contain four billion people by mid-century. And if they are going to develop, then we would like them to be able to shift directly to clean energy, and not trace out the path where they go through a coal-burning phase like China and India. And that can only happen if there is affordable, clean energy. And a top priority for countries like ours is to accelerate the development of affordable clean energy for the Global South, so that they can leapfrog directly to it, just as they’ve leapfrogged directly to smartphones, without ever having had landlines. That’s my first example.

The second example is a different one. There are some kinds of experiments, which are manifestly potentially dangerous, so-called gain-of-function experiments on viruses, where you can make a virus more virulent, or more transmissible, or both. Experiments of this kind were done 10 years ago, on the influenza virus. And this stimulated debate among the scientists themselves, and among scientific academies, about whether research of this kind should be done, and whether if it’s done, it should be published or not. Indeed, the US federal government did stop funding experiments on these so-called gain-of-function techniques, at least for a certain period. I think that’s an example where it’s irresponsible, perhaps, to do certain kinds of research, unless you’re confident that you can control the way it’s made use of.

And my worst nightmare, really, is of the misuse of technologies like biotech, which are widely available. It’s not like making an H-bomb, where you use special-purpose facilities which can be monitored. You need only the kind of facilities available in many industrial labs and university labs. And I worry very much about how we can be sure that some bad actor, as it were, doesn’t secretly develop some very dangerous pathogen and release it. It’s unlikely that many people wish to do this. But when the release of a pathogen could lead to a global pandemic, one such person is too many. As I put it, the global village will have its village idiots, but they will have global range.

And I think, accepting this fact—and accepting, similarly, that cyberattacks can have catastrophic, wide-ranging consequences—is going to pose a big challenge to governance, generally. Because there are three things that we want to preserve: one is freedom; another is security; and there’s privacy. And I think it’s going to be very hard to have all three. Because if we don’t have some sort of surveillance, we’re not going to be able to rule out the possibility that some bad actor is going to create and release a dangerous pathogen that could spread globally. And I think this is going to be a real challenge. I suspect that we have to give up privacy, as the Chinese already have.

Mecklin: I actually wrote down the village idiot quote from your book, because it seems that’s a big problem to me—just the person who’s completely crazy. There are a couple of other things I wrote down from the book that I wanted to ask you about. I’ll just read the quote, and you can respond as you wish: “But don’t ever expect mass migration from Earth. It’s a delusion and a dangerous one to think that space offers an escape from Earth’s problems.” Why do you think it’s dangerous? It obviously has some relevance to a line of thought that’s been out there in certain circles for a while now.

Rees: Well, I just think it’s so unrealistic. In fact, I wrote another book earlier this year, called The End of Astronauts, which was suggesting that the case for publicly funded astronauts is getting weaker all the time, as robots get better. And robots could do exploration of Mars; they could assemble big structures in space without the need for humans. And so I think, if I was an American taxpayer, I wouldn’t support NASA’s human spaceflight program at all. The reason I wouldn’t is that, first of all, there’s no practical need, because we have robots, which get better all the time. But secondly, if NASA does something, it has to be very risk-averse. I mean, you remember the space shuttle, which crashed twice, killing the seven people aboard in both cases, but that was in 135 launches. That’s a less than 2 percent failure rate. And even that was something which was thought barely acceptable.


Now, if you’re sending even one person to Mars, it’s most unlikely that you could ever make the risk as small as 2 percent; a 50 percent chance of coming back might be the best you could do. And there are many adventurers who would happily accept these risks. And I think the scenario for the future of human spaceflight, certainly of spaceflight to Mars, is to leave it to the billionaires, as it were, and sponsors. Because if it’s funded privately, then they can perfectly well launch people who are prepared to accept this high risk. Elon Musk himself has said that he’d like to die on Mars, but not on impact. And I think he’s now 51. So 40 years from now, good luck to him if he does this.

And so I think there will be people on Mars by the end of the century, but they will be adventurers who are prepared to take a high risk, and they will not be funded by NASA, because I think NASA would try very hard to lower the risk before it sent anyone. And it would never succeed in making the risk low enough to be acceptable to the public. So I think there’ll only be a few people who will go to Mars. And of course, living on Mars is very, very tough; these guys will have a hard time. And as I say in my book, terraforming Mars to make it habitable for ordinary people is far, far harder than dealing with climate change on the Earth.

And so it’s a dangerous delusion to think that’s an alternative—to think that there’s a planet B for all the risk-averse people, where we can all emigrate to if we screw up on the Earth. A few people will go to Mars. But ordinary people, we’ve got to stay on the Earth, and we have to cherish this Earthly home. It’s the only one we’ll ever have.

Mecklin: If Elon wants to go to Mars, I’ll be glad to watch him. Just a couple of other questions, though. One thing I’d like to hear you talk about: There are two frames of reference on the artificial intelligence situation. There’s the set of people who worry greatly and at great length about an artificial general intelligence, a super intelligence that takes over and discards us humans. And there are others who tend to think that’s way far off, if it could ever be.

I was just wondering: What’s your thinking on that? Which way are you leaning?

Rees: I should say I’m not an expert, but I think I lean toward the latter side. I’m aware of the idea of machines taking over the world and having goals misaligned with what humans want—it’s been discussed extensively, of course, but I’m more with Rodney Brooks, the inventor of Baxter robots, who doesn’t take that at all seriously and says it’ll be a long time before we have to worry more about artificial intelligence than about real stupidity. That’s his line, and I tend to align with him.

But of course, having said that, I do in my book discuss the downsides of AI already. I think there are problems if we delegate decisions to machines, because even though on average they may seem to make good decisions, there may be some hidden bugs, etc. And I think if we are being recommended for parole, if we’re in prison, or for surgery, or even for a credit rating, then we would like to feel that there was a human in the loop. It’s not enough to say that on average the machine does a better job than the human; I think we need to have both.

Machines can, of course, be very helpful. For instance, in radiology, because of their speed, they can have studied tens of thousands more lung X-rays than a surgeon can in an hour, but we still need to have the surgeon in the loop. So I think, for that reason, we have to be cautious about where we accept the decisions of AI without them being moderated by a human.

Incidentally, if you think of the effect on the labor market, which is of course, ongoing, I think the jobs that are going to be replaced will be some of the more mindless jobs that are done now—say [operating] Amazon warehouses, or telephone call centers, things like that. If those mind-numbing jobs can be taken over by robots, that’s fine, provided that the people consequently unemployed can be redeployed into areas where being human is important. And I argue in my book, that we need to tax quite heavily the international conglomerates that control AI and hypothecate those taxes to employ far more people as carers for the young and old, teaching assistants, custodians in public parks, and things where you need to be a human being—jobs which certainly in my country are underrated and in very short supply. There are far too few carers for old people, for instance.

So this will be a redeployment from jobs where humans can be replaced to those where being a human is crucial. That’s one thing. And also, if you think of which jobs can be most readily replaced, they will include not just simple manufacturing jobs, but some white-collar jobs—legal conveyancing, for instance, computer coding, and to some extent radiology and things like that. But among the jobs hardest to mechanize will be non-routine, blue-collar jobs, in particular plumbing and gardening. The idea of having a robot go to a strange house and immediately find its way around and do what a plumber does—that’s, I think, very futuristic. So there’s going to be redistribution, but there could be an added benefit, provided that there’s appropriate taxation to ensure that there is funding for the socially valuable jobs, to redeploy those who are usurped by machines.

Mecklin: It would be wonderful if we could depend on government to make that kind of redistribution. I question whether that would happen, based on other technology advances.

Rees: I suspect that’s because you’re an American. I argue as a Brit that we should learn less from America and more from Scandinavia. Scandinavians are the happiest people in the world by most good polls, and they have higher taxation rates, a better welfare state, less inequality, etc. So I think a country that’s governed more like Scandinavia could do this. And I hope America will go that way. And I certainly hope that Britain will go the Scandinavian route and not the American route, which is why the sooner we can get rid of our present, incompetent government in Britain, the better.

Mecklin: I join you in wishing we were more Scandinavian. I’ve burned 36 minutes of your life here, so let me move on to a last question for you. You’re an optimistic person. I mean, this just comes across, right through the video screen. Why are you optimistic that violent, greedy, flawed humanity can keep from destroying itself with its own technology?

Rees: Well, not that optimistic. That’s why I co-founded a center to study these extreme threats, which indeed are growing. Because, as I put it in my book, the Earth’s been around for 45 million centuries. But this is the first of those centuries when one species, namely the human species, can destroy or degrade the entire planet. And so we are at an especially dangerous time. And I’m not optimistic, because there’s a nonzero threat of some irreversible catastrophe happening. And I think we are likely to have a bumpy ride through the century, because of misapplication of some technology.

But we can reduce the threat. And the center that I helped to set up in Cambridge is addressing this—as are a few other centers around the world, but not enough. We’re focusing on minimizing these really catastrophic, global threats. And I like to say that, even if we could only reduce the probability by one part in 1,000, the stakes are so high, that we will have more than earned our keep, because only about 100 people in the world are really thinking full-time about these really extreme situations.

I’m optimistic that the world can escape the worst catastrophes which science fiction can speculate about. But I do think that we are going to have a rather bumpy ride through the century. One does hope that we will end up in the second half of the century having controlled climate change, avoided mass extinctions, and hopefully reduced the huge inequalities not only within nations, but more importantly, between the Global North and the Global South, particularly sub-Saharan Africa. I think science is part of that story. We’ve seen that, despite the rise in the world’s population by more than a factor of two in the last 50 years, the mass starvation predicted by people like Paul Ehrlich at Stanford in the late 1960s has not come about. There’s still malnutrition and even starvation, but it’s a consequence of maldistribution or wars, not overall scarcity. And I think science can provide enough nourishment for the world’s population throughout the century. And it can also provide, I think, for everyone a life at least as good as the life that we in fortunate countries enjoy today.


