Artificial general intelligence, or AGI, has long been the holy grail of innovation — a synthetic intelligence with all the capabilities of a human mind or more. Recent advances in AI have many predicting we could be closer to achieving it than we’re ready for.
It’s a prospect that preoccupied diplomat Henry Kissinger before his death last year at age 100. He collaborated with former Google CEO Eric Schmidt and former Microsoft chief research and strategy officer Craig Mundie on the new book “Genesis: Artificial Intelligence, Hope and the Human Spirit.” Mundie joined Marketplace’s Meghan McCarty Carino to discuss what a future with superintelligence might look like. The following is an edited transcript of their conversation:
Craig Mundie: As it relates to government, it affects many things. As we’ve seen with its application to misinformation, people can use it to try to affect how democracies work. On the other hand, authoritarian governments [may] say, well, this is the greatest thing ever in terms of being able to keep track of my people, or watch them or control them in some way. The impact on warfare, I think, will be profound. In the current environment, we still tend to focus mostly on kinetic warfare, but in the age of AI, I think this will increasingly become focused on cyber warfare; the ability to disable a society in its entirety is more likely to come from cyber means than from kinetic means. And the emergence of superintelligence empowers people on both sides of that, whether you’re on the offense or the defense. And so in the book we talk a little bit about the fact that you really have to stop and think: how does this evolve? And to some extent, what should governments be doing about it?
We said there is a window of time, which we think is not all that long, in which governments have to come to grips with this. Much the same happened with nuclear weapons: we built them all and then realized this could be a problem, not because we might still shoot at the Soviets or vice versa, but because if you just let these things proliferate willy-nilly, it was really hard to keep track of what other people might do with them. That began a 70-year regime of non-proliferation and controls. And I think we’re ultimately going to have to come to the same realization here: there’s going to have to be some agreement, at least among the governments that have the greatest investment and progress in this area, that they’re better off thinking about how to control this for the benefit of humanity collectively than strictly for the benefit of any one country or government. So there’s definitely going to be a tension there. The book tries to make the case that, as difficult as it is, we’re going to have to encourage governments fairly quickly to move beyond this local optimization and into something more global in nature.
Meghan McCarty Carino: One of the questions you engage with is whether governments should cede decision-making to artificial general intelligence, provided it offers insights no human could. What are the tensions in that question?
Mundie: Well, I think this is one of the central features of the book: to make people understand that these machines will be polymathic to a degree that no human or group of humans will ever attain. That’s both the good news and, to some extent, the bad news. The good news is it’ll allow us to solve problems and advance our species and our society in ways we can’t even imagine. The bad news is we won’t understand it all. So the ultimate issue in my mind is, how do we come to trust this system? We have to trust it to the same degree that we ultimately come to trust other humans.
McCarty Carino: This question of how humans might build the technology so that it aligns with human interests doesn’t seem to have obvious solutions. The AI industry, as currently constructed, is very decentralized and fairly unregulated, unlike nuclear technology, which was managed top-down by governments. A lot of this is happening at private companies, and some models are open source. Even among humans, there’s going to be a lot of disagreement about what proper alignment looks like. So how can we possibly hope that this will work out for the positive?
Mundie: Well, part of the answer is the capabilities of the AI itself. About six years ago, I started working with Sam Altman and the people at OpenAI, and my focus there was really these longer-term policy questions. Of course, one of their founding concepts was that they were going to build an AI that was going to be good for humanity. And I would frequently ask: it’s great to see all the short-term efforts to try to make this thing safe, but what about that alignment thing? What does it even mean, and how are we going to get there? And so over time, some of the people who were at OpenAI would spend time talking to me about it.
And one of the things that a couple of us quite rapidly concluded was that we couldn’t see a way to, if you will, govern these AIs toward this long-term goal of alignment and symbiosis unless you used an AI to do it, and that leads you to a whole other set of questions. But the whole industry and all the government activities have moved down the path of very short-term measures — you hear the terms guidelines and guardrails and other things. These are very short-term in nature. And partly because of my interactions with Kissinger over the years, I thought what you really needed was an architecture of control and governance, in a much broader sense of the word architecture. It needed a technological strategy, but it also needed a legal strategy, a policy strategy, a diplomacy strategy, a non-proliferation strategy. All these things we have, at times, done in ad hoc ways in other areas, like nuclear weapons and nuclear power; somebody needs to be thinking at that level about it.
And so, partly out of concern and partly out of frustration, I ended up spending about four years of my own time thinking about that broad architecture. The things that are in the book at a high level are derivative of a lot of the work I did in that area. But it all built on the idea that the only way to solve the dilemma you describe is to have an AI, with its polymathic ability to adjudicate these things, become part of the solution itself. So AI is not just the problem; it’s part of the solution. And some of us have gone on to build at least prototypes of how that could actually be done technically.
And then that leads to the challenge of how we get the companies, the countries and their governments to realize that there is a path forward, but one that takes a lot more than, well, let’s examine the models and decide whether we think they’re good or bad, or let’s try to have guidelines written by humans. That’s just not going to be sufficient. So in part, the book was a vehicle to get people to realize how big these problems are for humanity in the long term, but also to say that there’s a lot of benefit to be had, and that should be a motivation to come together and work out what collective action should be taken by the businesses, the academy, the governments and, to some extent, diplomatic efforts in order to bring this together.
McCarty Carino: I want to ask you more about your optimism here, because toward the end of the book you write that a world with no artificial general intelligence would be preferable to a world with even one AGI system that is not aligned with humans. I think a lot of people might look at that equation and say, okay, let’s not risk it, shut this down. But that is not your conclusion. Why not?
Mundie: Unless you could shut it down and guarantee it was 100% shut down by every actor on the planet, it’s a lost cause. And so the book is an entreaty to the companies, the academy and the governments of the world to recognize that only by a coordinated effort can we have a trust system that offers comfort and interoperability to those who want to comport with it. That also creates the basis for discriminating between those who want to play well together and those who don’t. And once you know that, you can bring the powers of the economy and government to bear on the question of how you want to deal with non-proliferation, and at least try to slow the emergence of uncontrolled activities.
Read more here: https://www.marketplace.org/shows/marketplace-tech/reimagining-the-long-term-alignment-of-human-and-ai-advancements/