Q & A with Seth Baum: What will wipe out humanity?


“He studies the possibility that any given catastrophe will be the end of human civilization … a viral pandemic, alien invasion, things like that.”

When I first heard about Seth Baum’s job via a co-worker — who happens to be his wife — I assumed that she was telling me about a conspiracy theorist. In response to my bemused expression, she guided me to the web page for her husband’s brainchild, the Global Catastrophic Risk Institute (GCRI), and the conceptually similar Blue Marble Space Institute of Science (BMSIS), with which he is also affiliated. Both are non-profit organizations that integrate interdisciplinary research to examine (among other things, in the case of BMSIS) the likelihood of various threats to humanity, the potential impacts of those threats, and how we might mitigate them. According to the GCRI’s web page, the world-ending scenarios made feasible by our rapidly growing, tech-consuming population, such as the rise of artificial intelligence or climate change, are more probable than natural disasters like meteor collisions, which have presented a constant threat across geological time scales; thus, they earn the lion’s share of these institutes’ attention.

The curious directions of the GCRI and BMSIS are hardly more curious than the educational trajectory of Seth himself — a bachelor’s in optics and applied math, a master’s in electrical engineering, and a Ph.D. in geography defy anyone’s attempt to guess where his career in science would lead. However, I was most surprised by the seemingly unrelated area of inquiry that spurred him to devote his work to protecting civilization.

Seth’s persona during our discussion at a quiet Washington Heights café shifted easily from mentor, lending career advice, to executive director, explaining the logistics of his unusual institute, to student, with barely concealed giddiness at his hypotheses. We were talking about aliens and robots, after all. 


AJ: I was looking up your profile at GCRI, and it mentioned that you were affiliated with the Blue Marble Institute as well, so I’m a little bit confused as to what your workday entails, given these two affiliations.

SB: It’s fairly unstructured. Both the GCRI and the Blue Marble Space … exist to a large extent as organizational infrastructure that researchers can use as needed independently from universities or other groups. For example, I’m applying for an NSF grant. If [your professor] applied for the grant, it would go through Columbia. Columbia takes a cut and it manages the money and it has the official relationship with the NSF. Both the GCRI and Blue Marble Space are set up to play that role. The NSF will actually accept applications from private individuals – not every funding agency will. Some of them require some sort of institution; it’s always easier when you have an organization that’s already set up to do it. We host some activities, some of which are held online. Beyond that, it’s whatever we make of it. For me, my daily routine has recently been writing papers.


AJ: So you can work straight from home.

SB: I usually do.


AJ: Does GCRI have a physical location?

SB: No. Blue Marble Space doesn’t either. We have postal mailing addresses for administrative purposes, but that’s not where everyone works.


AJ: And when you publish, are these papers primarily thought experiments or simulations? Or, if there is fieldwork involved, how do you accomplish that? And where do you publish?

SB: There are people in Blue Marble Space who do field work – astrobiologists who go out to a crater or something like that. A significant portion of the work that I do does not involve any original data; it involves a reasonable discussion of existing data and ideas. There’s plenty out there that hasn’t been studied yet. Also, it can be more efficient to write papers when you don’t have to collect any data. I should emphasize, it can be.

What I get caught up in is the literature. There’s a lot that’s been written on these topics, and I’m working across a lot of different topics, so the sheer quantity of information that a given person is responsible for [is enormous]. Nobody reads everything, and everybody knows that nobody reads everything. Every now and then I count the number of papers that I have in my collection – it’s 14,000 right now. That’s the hard part. It’s the concepts, the different ways of thinking … you know, when you read an engineering paper, it’s different from reading a philosophy paper. It’s hard reading across so many disciplines and it’s even harder writing to those different audiences. Fortunately, there are interdisciplinary journals, and that’s mostly where we send our papers.


Here’s an interesting point for you: in your line of research, potentially there is some concern about trying to hurry up and publish so you don’t get scooped. I basically don’t have to worry about that – because, one, [our research is] so unique, and two, there’s often space in the literature for multiple publications on virtually the same topic, because each paper would provide a different take on the topic. But the biggest thing is, we’re all too busy writing our own papers to scoop someone else’s! 

One empirical methodology we use is to ask people who have some expertise on a specific topic for their best estimates about it. There are formalized methods for that. It’s another data source that we will work with. It’s sort of like psychology research: you’re asking people questions, and they’re giving you answers that tell you something about what’s in their mind, but it’s a very particular type of question-and-answer.

 

AJ: That’s really interesting – I didn’t know there was anything other than a straightforward way to say, “Please guesstimate this for me.”

SB: So, for example, we, for better or worse — I’d say for the worse — have not yet had a female president in this country. We might get one very soon. [Hypothetically,] Hillary Clinton is probably the favorite to be the next president, right? So I can ask you, when do you think we will have a female president? Maybe you’ll say 2016 would be your best guess. Or maybe [we can give the question] a little twist, right, because that’s too easy. What is your best guess for when we will have had two different female presidents? When will the second female president get elected? And you could come up with some number — say, 2024, right after Hillary’s terms are done. That’s a bad way to ask a question.

The good way to ask the question is, okay, so we’re talking about some year in the future … we can imagine a range of possibilities. Well, you might have a best guess, but you don’t know for sure. There might be a lower end: the soonest might be 2017 if Hillary has a female vice president and [Hillary] gets killed, or something like that. The latest might be never. Maybe [artificial intelligence] will take over, and it doesn’t have gender! So you could give a range, say, "I would say that there’s a 90% chance that it would happen between 2024 and 2060." And then [that leaves a probability of] 5% on either end. So when you go to ask the question in these expert surveys, start by asking for the low end and the high end, and then work your way in. What the research has shown is that when you start with the best guess, people tend to give tighter bounds – a narrower range of numbers – and that narrower range tends to be less accurate. If [the expert] gives a 90% range, then [the event] should be in that range 90% of the time. [Whereas] if you started with the best guess, [the event] tends to be outside that range more than 10% of the time. That’s a quirk of human psychology. And so by starting at the bounds you get wider ranges, which tend to correspond better with how things actually happen. So there’s a little technique to it. You can start to see how it’s plausible that we can work on articles on AI and geoengineering and climate change, because we are the ones who know how to ask good questions.
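For the curious, here is a minimal sketch of what starting at the bounds can look like in practice. It is an illustration only, not GCRI’s actual survey instrument; the prompts, the interval convention, and the calibration numbers below are all assumptions made for the example.

```python
# A minimal, illustrative sketch of "bounds-first" elicitation and a simple
# calibration check. NOT GCRI's actual protocol; prompts, the interval
# convention, and all numbers are assumptions for illustration only.

def elicit_interval(ask):
    """Ask for the outer bounds first, then the best guess (bounds-first order)."""
    low = float(ask("Earliest plausible year (lower bound)? "))
    high = float(ask("Latest plausible year (upper bound)? "))
    best = float(ask("Now, your single best guess? "))
    return {"low": low, "best": best, "high": high}

def coverage(intervals, outcomes):
    """Fraction of known outcomes that fell inside the stated 90% intervals.
    Well-calibrated 90% intervals should capture roughly 90% of outcomes."""
    hits = sum(1 for iv, y in zip(intervals, outcomes)
               if iv["low"] <= y <= iv["high"])
    return hits / len(intervals)

if __name__ == "__main__":
    # Simulate one expert answering the three prompts with canned responses.
    canned = iter(["2024", "2060", "2032"])
    print(elicit_interval(lambda prompt: next(canned)))

    # Toy calibration check against made-up past events with known answers.
    stated = [
        {"low": 2005, "best": 2010, "high": 2020},
        {"low": 1990, "best": 1995, "high": 1998},
        {"low": 2012, "best": 2015, "high": 2030},
    ]
    actual = [2012, 2001, 2016]
    print(f"Coverage of stated intervals: {coverage(stated, actual):.0%}")
```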

 

AJ: If I were to say, hey, I’m a graduate student, I’ve got spare time and my brain; I’ll write a paper. Does that work? I’m trying to get an idea of how people join your forces.

SB: This happens. The biggest thing is, do you have background on the topic? You don’t necessarily need money, you don’t necessarily need equipment; [for] a lot of this you can just read a bunch of papers, put them together for some sort of insight, and write on it, and that’s good – that’s very good. But if you don’t have background on the topic, then … we turn away volunteer requests a fair bit. We can’t do that much with them.

 

AJ: I’d bet that every so often, you get requests from alien conspiracy theorists, or end-of-the-world paranoid types as well. 

SB: We’ve been staying out of the media recently. We get media coverage from time to time – that will happen especially on the Blue Marble Space side of things. I had fifteen minutes of fame – and it really was, it lasted like two weeks and then it faded away. I have been on television a few times. Pretty much always talking about aliens. They just do not stop making shows about aliens.

 

AJ: It never gets old!

SB: It never gets old. That’s definitely the shortest distance between here and getting on television: write a few papers about aliens. It’s not hard. And so we have definitely gotten some of those people sending me e-mails.



AJ: Can you tell me a little bit about any of the projects you’re working on now?

SB: I’m working on a paper that ties together the risks associated with, in order: nuclear weapons, sending messages to extraterrestrials, geoengineering, climate change, artificial intelligence, nuclear fusion power, and space colonization. That’s one paper!


AJ: {Laughs} How do you even begin to talk about that?

SB: Well, it’s developing a concept called the Great Downside Dilemma. The analogy that I’m using is Russian roulette. I’m sure you’re familiar — hopefully you’ve never played — six chambers, one bullet, for a million dollars, would you play?


AJ: {Immediately} No.

SB: Me neither. Probably most people we can see [out the window of the café] wouldn’t. Imagine your circumstances are a little different: you’re in debt. What if you’re sick … you’ve got medical bills and you’re not sure how much longer you’ll live … maybe you’d think a little bit differently about it.

There are technologies out there that [present] the same sort of situation, except that instead of a single person, they could get rid of the entirety of human civilization.

A historical case: Before the first nuclear weapons test [during the Manhattan Project], the physicists thought that there was a non-zero chance that the explosion would ignite the atmosphere — like, the entire atmosphere … they didn’t think it would, and they understood it reasonably well. We of course now know that that doesn’t happen when you set off a nuclear weapon, but they went ahead and took the risk anyway because it would potentially help with World War II and they thought it was a small enough chance that it was worth the risk. If it was me, I might not have tested the weapon. But they really did understand it pretty well, in their defense. And they gave it serious thought!

[A more current example:] Geoengineering to reduce climate change … is intentionally doing stuff that would change the whole global environment. The one that probably gets talked about the most, because it’s relatively feasible, is [putting] particles up in the stratosphere above the clouds; those block incoming sunlight, and temperatures go back down. It’s not a perfect solution, but it can at least avoid the worst effects of really large temperature increases. Under ordinary circumstances, we wouldn’t do this — especially because if you stop putting the particles there … temperatures shoot back up really fast, and that’s worse than regular climate change. That’s a really rapid temperature change that you have to adapt to, and that’s harder — at least, we all assume that’s harder. That’s an experiment we’ll never get the chance to run.
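To make the "temperatures shoot back up" point concrete, here is a toy zero-dimensional energy-balance sketch. It is my illustration, not a model from any of Seth’s papers, and the parameter values are rough ballpark assumptions.

```python
# Toy zero-dimensional energy-balance model illustrating the "termination
# shock" idea: aerosol forcing offsets greenhouse forcing for a while, then
# stops abruptly. Illustration only; parameter values are rough ballpark
# assumptions, not results from any of the papers discussed here.

C = 8.0        # effective heat capacity, W*yr/(m^2*K), rough ocean mixed layer
LAM = 1.25     # climate feedback parameter, W/(m^2*K), rough
F_GHG = 4.0    # sustained greenhouse forcing, W/m^2 (roughly a CO2 doubling)
F_AERO = -4.0  # stratospheric aerosol forcing while particles are maintained
DT = 0.1       # time step, years

def run(years_with_aerosols=50.0, years_after=50.0):
    """Integrate C*dT/dt = F - LAM*T with forward Euler; return temperatures."""
    n_steps = round((years_with_aerosols + years_after) / DT)
    temp = 0.0
    temps = [temp]
    for i in range(n_steps):
        t = i * DT
        forcing = F_GHG + (F_AERO if t < years_with_aerosols else 0.0)
        temp += DT * (forcing - LAM * temp) / C
        temps.append(temp)
    return temps

if __name__ == "__main__":
    temps = run()
    i_stop, i_decade = round(50.0 / DT), round(10.0 / DT)
    rate_before = (temps[i_stop] - temps[i_stop - i_decade]) / 10.0
    rate_after = (temps[i_stop + i_decade] - temps[i_stop]) / 10.0
    print(f"Warming rate in the decade before aerosols stop: {rate_before:.2f} K/yr")
    print(f"Warming rate in the decade after aerosols stop:  {rate_after:.2f} K/yr")
```

In this toy run the temperature barely moves while the aerosols offset the greenhouse forcing, then climbs by roughly a quarter of a degree per year in the decade after they stop — the rapid rebound Seth describes.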

A lot of our work is similar to what, say, ecologists face, in that they can’t run experiments. ‘Let’s throw this pollutant across the entire ecosystem and see what happens!’ You can’t do that. [Pauses.] It has happened: there’s a lake* in Ontario, Canada, I’m pretty sure. They pollute the hell out of it and see what happens! Overall, it’s been a great benefit to the environment, because through the research there they’ve phased out a lot of pollutants.

It’s a small lake – it’s there for ecologists to play. They made it turn green with cyanobacteria, or blue-green algae … they have two of them.


AJ: … A control lake! 

SB: Yeah! They have a control lake. Anyway, usually we don’t get to do things like that. And that’s a big aspect of what I do — how do you characterize circumstances that you can’t test?

I’m also working on AGI: artificial general intelligence. Most AI that we have now we would call narrow AI, in the sense that it’s intelligent in one domain: it can play chess, it can surf the Internet, it can process images. Humans have general intelligence – we can think across a whole bunch of different domains. We can play chess, and we can process images, and we even know how to tie our shoes. And we can learn new stuff. It’s really hard to get a computer to do all these different things. So artificial general intelligence is much more likely to take over the world from us, and do things like kill everyone. For example, narrow AI might be able to beat us in chess but not at anything else; it isn’t a threat to us or our dominance. General intelligence could be. Researchers in the field talk about Friendly versus Unfriendly AI, and they always capitalize it, which I think is a little corny, but that’s how they do it, so we follow their convention. Specifically, [these terms refer to] AI that’s at least as smart as we are across the board, and something that’s Friendly isn’t killing us — is nice to us — is something we are happy about.


AJ: So Friendly and Unfriendly – 

SB: They’re technical terms. 


AJ: When you say AI, are you saying something that passes the Turing test?

SB: That’s a starting point. That’s natural language processing. Even that’s kind of narrow.

At any rate, Unfriendly AI might kill us. [They may] have been programmed to do something that seems fine — suppose you design one of these to play really good chess, but then it might do something like take the Earth and rearrange it into a computer that will help it calculate better chess moves. It might do that with the other planets too, because that’s it achieving its goal. Granted, it might kill anyone who plays chess against it in the process, but that’s not really what it was programmed to do.


AJ: I mean, I would call that a huge emergent property that had nothing to do with its programming. I mean, it’s only syntactically accomplishing its goal.

SB: You’ve seen the movie Aladdin? Genie stories are stories of unintended consequences. So too with this. So at any rate, what we are doing with this is scanning the literature and extracting from it the different pathways that have been postulated that could end with Unfriendly AGI taking over the world.

 

AJ: Now that we understand how your affiliate institutes operate, could you tell me a bit about your path to founding GCRI?

SB: My master’s thesis was in computational electromagnetics. We did macro-simulations of electromagnetic wave propagation through inhomogeneous media. For example, you can use something almost like a laser, point it at somebody’s head — it’s safe! — the photons will bounce through, and [some will] bounce back … you can get some insight about what’s going on at the surface. I wrote computer code that would simulate that propagation of radiation through [media]. We did have a big pit full of dirt — we would send radio waves through the dirt. So the things that I would simulate, they would go try, and see if what they got matched up with what we were expecting.
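For readers who have never seen this kind of simulation, here is a bare-bones sketch of the general technique: a one-dimensional finite-difference time-domain (FDTD) run of a pulse crossing into a region of higher permittivity. It is a toy illustration, not the thesis code Seth describes.

```python
# Bare-bones 1D FDTD (finite-difference time-domain) simulation of an
# electromagnetic pulse crossing into a region of higher permittivity -- a
# toy "inhomogeneous medium." Illustration of the general technique only,
# not the thesis code described above.
import numpy as np

NX = 400                      # number of grid cells
eps_r = np.ones(NX)           # relative permittivity along the grid
eps_r[200:] = 4.0             # right half of the grid is a denser medium

ez = np.zeros(NX)             # electric field
hy = np.zeros(NX)             # magnetic field
courant = 0.5                 # Courant factor (keeps the scheme stable)

for step in range(600):
    # Update H from the spatial difference of E, then E from H.
    hy[:-1] += courant * (ez[1:] - ez[:-1])
    ez[1:] += (courant / eps_r[1:]) * (hy[1:] - hy[:-1])
    # Inject a Gaussian pulse near the left edge as a soft source.
    ez[20] += np.exp(-((step - 40) / 12.0) ** 2)

# Part of the pulse reflects at the interface at cell 200; part transmits.
print("peak |E| left of the interface: ", round(float(np.abs(ez[:200]).max()), 3))
print("peak |E| right of the interface:", round(float(np.abs(ez[200:]).max()), 3))
```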

I don’t do that anymore.

One of the biggest shocks for me was when I started reading research outside of electrical engineering. One of the things that I was looking into a lot was ethics … which is so different. There’s no data. The one thing that still frustrates me about philosophy, I think, is that it does a really poor job of being progressive. In science and elsewhere … each paper is a clear advance on what has previously been published. A lot of philosophy is not necessarily trying to say something that hasn’t been said. And I don’t really like that about philosophy.

My Ph.D. was in geography. 

 

AJ: Was that an organic result of looking at ethics and applied science?

SB: It was. As I was finishing my master’s thesis, I was thinking: completing a Ph.D. is difficult, and it takes a while. And I was talking with the faculty in my master’s program. … It really helps to have a degree of passion about your Ph.D. topic, because there will be difficult moments, and unless you’re really into it, it can be harder to push through those moments.

I’ve had a little bit of a double life — I used to do this do-gooder, community-type stuff along the way — and it would be really nice if I could put those things together. I was trying to do good in both the theoretical and the practical sense … this is what philosophy was great for: philosophers were thinking about this too … and writing about it a lot.

That’s the reason I ended up in geography — it’s one of the few corners of academia where you can … do a little bit of natural science and engineering … and at the same time also do … social science, policy. And I can put it together, and nobody looks at you funny. And so my Ph.D. research was in climate change policy. And it didn’t necessarily involve computational electromagnetics … but it was still science, and that made me feel a little bit more comfortable, and as if I was bringing something extra to the table.

My dissertation had some policy, ethics, ecology, some other things looped into it, and my department was happy — some of my committee members didn’t like it as much, but that’s a different story. May you never have that experience!

 

AJ: I didn’t realize you could study policy in a science department — or was that your innovation?

SB: Well, geography’s a little different. If I was in the geosciences department, that’s pure natural science — you could do a dissertation with some policy in it, but it has to be fundamentally science-related. In geography — you have to prove yourself as a geographer, but doing that could mean so many different things.

 

AJ: Is there any anecdote you’d like to share with our readers about your work with aliens, with artificial intelligence, any conclusions that you’ve drawn?

SB: The little tidbit that was at the heart of my fifteen minutes of fame: It was tucked inside one paragraph three-quarters of the way through the article, but it was what the media latched on to. And I understand why: The claim was that unless we stop emitting greenhouse gases, then the aliens might come and destroy us. 

 

AJ: {raises eyebrow}

SB: I can explain! 

Our paper was on the different things that could happen if we ever encounter extraterrestrials. And the big point we were trying to make with this paper is, there are a lot of things that could happen, but we really don’t know what they would be. This was in response to people — including some who really should know better — saying, “this is what will happen [with certainty, in an extraterrestrial encounter].” We really don’t know. And I’m fairly confident about that. And we went through to map out the space of possibilities and then comment on why this might happen or that might happen. One of the possibilities is that the extraterrestrials could feel threatened by us — we will note that, for now, any extraterrestrials are probably quite a bit more powerful than we are, because we’re kind of a young civilization; we haven’t been doing this for all that long, relative to the universe. So they’ve probably figured out some things that we just don’t know yet. And therefore if we encounter each other, they could probably have their way with us — hurt us, help us. But maybe they get a sense that we are growing rapidly … if this continues, then we might soon enough be a threat to them. And if they are worried about that, then they might feel compelled to put us in check while they still can. So the question is, how would they know that this is happening here on Earth?

It is relatively straightforward to assess the atmospheric chemistry of a planet: it influences the planet’s spectral signature. It is especially straightforward to observe changes in the spectral signature. The change in Earth’s atmospheric chemistry due to greenhouse gas emissions may already be detectable from other star systems using technology not too different from what humans already have. If [extraterrestrials are] seeing this, they can probably already guess that there’s life on Earth, or at least that there is a relatively high probability of there being life on Earth, because this change is really rapid — on geological timescales, this is very fast. They might think something’s up. They might even be smart enough to figure out, “oh, there’s a civilization that’s burning fossil fuels.” I wouldn’t put it past them. “Let’s go and knock ‘em out while we still can.” That’s the idea.
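As a rough illustration of the idea that a deepening absorption band changes a planet’s spectral signature, here is a toy example with entirely synthetic numbers; it is not data or a method from the paper Seth is describing.

```python
# Toy illustration of spotting a change in a planet's spectral signature as a
# greenhouse-gas absorption band deepens between two epochs. Entirely
# synthetic numbers -- not data or methods from the paper being discussed.
import numpy as np

wavelengths = np.linspace(10.0, 20.0, 500)    # wavelength grid, microns
BAND_CENTER, BAND_WIDTH = 15.0, 0.5           # a CO2-like band near 15 microns

def synthetic_spectrum(depth, noise=0.002, seed=0):
    """Flat continuum of 1.0 with one Gaussian absorption band plus noise."""
    rng = np.random.default_rng(seed)
    band = depth * np.exp(-0.5 * ((wavelengths - BAND_CENTER) / BAND_WIDTH) ** 2)
    return 1.0 - band + rng.normal(0.0, noise, wavelengths.size)

def band_depth(spectrum):
    """Estimate band depth as the continuum level minus the in-band minimum."""
    in_band = np.abs(wavelengths - BAND_CENTER) < 2 * BAND_WIDTH
    continuum = np.median(spectrum[~in_band])
    return continuum - spectrum[in_band].min()

early = synthetic_spectrum(depth=0.05, seed=1)   # earlier epoch
late = synthetic_spectrum(depth=0.08, seed=2)    # epoch with more greenhouse gas

print(f"band depth, epoch 1: {band_depth(early):.3f}")
print(f"band depth, epoch 2: {band_depth(late):.3f}")
print(f"change:              {band_depth(late) - band_depth(early):.3f}")
```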

 

AJ: And then everyone who read your article seized upon that.

SB: The corollary to that is, this gives us a reason to reduce greenhouse gases. As we noted in our paper, in our estimation, climate change here on Earth, and all the disruptions that it would impose upon us, is a much more important reason to [reduce greenhouse emissions]. And that’s what we said. 

 

AJ: Eh, it’s still a good selling point. I like it.

SB: It got me on television!

 

*Note: The Experimental Lakes Area in Ontario, Canada actually comprises 58 small lakes and their drainage basins.