If you thought that the human race was in danger, what would you do?
Perhaps you would succumb to despair. Perhaps you would take action in your community. Or perhaps you would travel halfway around the world, to the epicenter of AI development, to try to stop it.
That’s what Chris Gerrby did. Chris is an activist from Sweden who, earlier this year, dropped everything to build support for a global pause on frontier AI development. Over the last six months, he has traveled to countries from the UK to Japan to Georgia to work on this mission. Now, he finds himself in San Francisco – setting up tables outside of AI company headquarters, talking to hundreds of people about the dangers of AI, and building support for PauseAI right in the belly of the beast.
In this interview, Chris talks with Joep Meindertsma, the founder of PauseAI Global, about how he got here, what motivates him to act, and how to keep going in the face of such long odds.
Note: This transcript has been edited for clarity and brevity.
Pictured: The first day of tabling in San Francisco, CA
“I’m trying anyway”: Acting in the face of difficult odds
Joep:
Hey, Chris, awesome that you could join at this ungodly hour. So I want to dive straight into the questions. How did you end up flying across the globe to San Francisco?
Chris:
I've been traveling around now for the last six months. I've been a bit opportunistic with where I've been going. And yeah, I’ve traveled now from [the country of] Georgia to San Francisco, and the plan is to set up a mass movement in San Francisco, the heart of where “gods” will be created. I think that this is probably much more meaningful for people here to mobilize against, if we compare it to, say, Sweden, where we don't have any big training runs [on frontier AI models].
So... Yeah, why did I do this? You know, going out tabling isn't a very easy task. I'm really putting my face out there. I'm really staking my reputation on this – my old friends would call me very alarmist, and that's not very comfortable.
I think that we might have very little time left. So I think that this effort – people actually taking action and trying to campaign against this insane race that is going on – is kind of necessary. We need leaders, boots on the ground, and no one else is stepping up.
Joep:
Yeah, I think you're doing a very inspiring thing. So you've been having a bunch of discussions and debates with bystanders about pausing AI development and whether AI is an existential risk – and you set up this table with a question. I think it said something like, “AI could kill us all. Change my mind.”
I want to know more about your journey. You traveled from Georgia to San Francisco to take action in front of OpenAI, right? Can you tell me a little bit more about what happened before that? How were you inspired to take action?
Chris:
I read about AI safety maybe 10 years ago – I read [Nick Bostrom’s] Superintelligence. It was a bit too abstract for me. It took me a long time to act, but I now realize that we have very little time and there is no one stepping up and taking action. I think if people knew what is actually going on, they would be really fucking pissed at the AI progress that is being made.
Lives are at risk here. I think that this is the thing that people haven't really realized: that AI is a big threat to them, to humanity.
I don't think that people understand how little time we might have left. And it takes time to mobilize. It takes time to grow a mass movement. It takes time. So we need to start now. So that's the plan. That's why I've come here: I'm trying to be a bit of a catalyzer.
Joep:
Yeah, I think you're inspiring a lot of people here. Just actually taking the time to go to a different country, show up there, talk with people, record yourself doing it. That's an important step. Do you find it scary to do this? To be so visible?
Chris:
Very, yeah. I'm totally outside of my circle of comfort here. I really care about what my friends think about me and I care about my reputation.
I may be a bit outside of the distribution when it comes to a few things, but in general I do think I'm a risk taker, more so than most people. So actually – I think that this might be too spicy to put on film – but this is gonna be really raw and real. When my friends brought up activism as a tactic, I told them literally: if the world depends on me doing activism, the world would burn.
I said this many times and I didn't say it ironically. And then I read up on why people do activism. Why are they so annoying? And it turns out that it really works. It really changes policies. So I guess that's a big reason why I'm here and, you know, putting myself out there and trying to mobilize people and trying to inspire people and inform people. If people understood that the most credible people in the AI sphere think that there's a one in six chance that we will go extinct, people would be mad that this is happening right now. They are literally trying to build the thing that could make us go extinct. And I think that is unacceptable to most people. A big part of doing this is just raising awareness.
You know, I don't have to persuade people very much, even though that’s a big part of the job. You can just share the polls, you can just share the data and show that the most credible people in the world think that there's an existential risk here. And this thing that could cause an existential risk is happening right now.
Joep:
So there's some discrepancy, if I’m hearing you correctly, between what scientists are warning about and how the public is responding to this. Why do you think AI existential risk isn't the number one issue on everybody's mind right now? Why isn't this on the front page every day?
Chris:
It’s sad to think about what it boils down to. I've studied what makes ideas go viral, and when memes are sad in nature, they tend to not spread very well. Maybe that’s a big reason for it. You know, most people don't think about their own death very much. Most people maybe don't have the courage to do that. It’s really hard to think about. We have so much money going into this thing and it might feel like you can't affect it because you're just one person and you can't do very much. But I'm trying anyway.
Joep:
So you're just trying anyway. Is that a core value for you?
Chris:
Yeah, I do enjoy doing this even though it's very hard and I'm outside of my comfort zone all the time. But yes, I do think that we should definitely try, even though it might not work, even though the odds are against us.
Optimism, feasibility, and why a Pause is worth fighting for
Joep:
So many people are saying that pausing AI development is just an impossible dream. What would you say to these people?
Chris:
Yeah, so maybe it is. Maybe it's actually impossible at this point in time. We don't know. But I do think, you know, we could also lay down and die.
I don't think that it's acceptable to introduce something that is way more intelligent than us into the world. It's not acceptable to create a superintelligence that is misaligned or even create a superintelligence that is aligned. We have no idea how to leverage that technology to our benefit.
We want to keep it under democratic control. I do think that that makes sense. I don't want to end up in a 1984 scenario where we have super surveillance and everyone loses their job. And who controls this superintelligence?
So, back to the question of why it is feasible to police technology. We've done this in the past. For example, Sweden was six months away from having a nuclear program. There were many other countries that were on the trajectory of creating nukes. Diplomats and many other people worked really hard at trying to mitigate or minimize the spread of nukes. And that worked out. And then we have the biological weapons convention. We have the Montreal Protocol, the most successful global treaty ever [ed: the Montreal Protocol, signed by 197 countries, banned the use of CFCs and saved the Ozone Layer]. So we have paused dangerous technologies. We have a track record of doing this. We haven't created a 500 megaton nuclear bomb yet, even though we could. Maybe that's analogous to superintelligence.
We could definitely set up policies to prevent that from happening.
Joep:
Hear, hear.
Chris:
I've looked into alignment a lot and I have no idea what alignment is. And if you ask around for people's opinions on what alignment is they also don't really know. We're trying to fly the plane and build the plane at the same time. It's pretty crazy. We could just pause and ponder for a bit. This is definitely the sane thing we should do.
Joep:
I fully agree. And I'm often confused as to why more people aren’t feeling like this is the obvious thing that we should aim for.
I feel like if you are worrying about AI risk, there’s not just one scenario to be scared of. There's a lot of different outcomes that all seem reasonably likely to happen that are very powerful arguments to push us away from building this technology as fast as possible. And that brings me to a question. I've noticed that there's quite a lot of people, especially in San Francisco, who do believe that AI should be built as fast as possible. Some of them are called effective accelerationists, or e/accs. Have you encountered some of them during your debating and discussion sessions while tabling at OpenAI?
Chris:
Not to my memory, but I've encountered them elsewhere. So for example, in Georgia I haven't encountered the same people that we encountered on Twitter. They're not that extreme. Things such as, you know, “it's okay if humanity goes extinct” – I haven't encountered those. I would be pretty speechless if anyone actually said that to me in person. But I think the disruptors, most of them haven't read very much about alignment. So there seems to be a knowledge gap there. I think it's very hard to make an AI care about what you care about. This is ultimately what it's about. It's not understanding, it's not caring – it's valuing. It's very different.
Joep:
That would explain some of the more odd takes. I'm also sometimes thinking, is it just a weird way to cope with the uncontrollability of the current situation? Like, we have a lot of companies that are racing. We have governments that are racing on their own and having competitive race dynamics. And if the default outcome of that would be catastrophic, it could be doom, right? So then, it can feel more comfortable to give up and say, okay, maybe it's going to go really, really wrong, but let's just hit the pedal to the metal, go for it as fast as possible. And you know, maybe we'll end up in utopia, maybe we'll end up dead, maybe that's fine. Sometimes I'm just really, really wondering what makes people think this, right? What makes them go for this disposition of giving up on policy effectiveness this early on?
Chris:
Yeah, yeah, I think maybe it's from despair, like giving up. I do think that could be a contributor.
Joep:
Do you sometimes feel that? Like that negative emotion – maybe that's what I feel sometimes, the comfort of not trying anymore? Like giving up is very easy because it's very comfortable.
Chris:
Yeah. It's very tempting. But I couldn't live with myself. You know, I think that there's like two different pains. On one side of the spectrum, it's the pain of pursuing the impossible. And on the other side, it's the looking at yourself in the mirror, or rather not being able to look at yourself in the mirror because you know that you could do something, but you're not.
And I think that pain is much greater for me.
Joep:
That's a tough dilemma. I think many people are a little bit more comfortable with sticking their head in the sand and just ignoring very, very big pains looming ahead, right?
Chris:
Yeah. I do think that's a feature, or maybe a bug, where I just can't do things that I don't really believe in. I think that's maybe something where I'm unique.
Advice for Activists: how anyone can help save the world
Joep:
I was wondering what you would want to say to other people who are watching this video, looking at your actions and feeling like, “yeah, Chris, you go, man”, and who are considering taking some action themselves. What tips do you have for them? How can they get started and make a change on their own? Because it can feel very challenging, I assume. What would you tell them?
Chris:
I think coming from the way I am, you know, very agentic, I would say to that audience, you just have to try and try again. And you learn on the way and you can find comfort in that, if you fail, you can get up again.
Consume a lot of books, consume a lot of knowledge, speak to people, try to find truth, try to find the most accurate picture of what is actually going on. And, you know, I think maybe the biggest hack is thought patterns or things to tell yourself that enable you to cope with the fear. That is the ultimate thing that will limit you in what you're allowing yourself to do. I think the 10x and 100x methods that are available to us… we don't reach for them because it's scary. It's literally because of fear. We don't even try. Find ways that would motivate you to take risks and find excuses to actually try, even though it might seem really difficult. And then work as much as you can.
But also read up on burnout. The more you work, the more you should know about burnout. It should be proportional. I'm working all the time, and it is possible if you train yourself a bit and find strategies to mitigate burnout. I think that's a pretty good combination. Just try and accept that you will fail, and then just get up again, gain more knowledge, cope with the fear.
Joep:
So you talked about how fear could limit people's actions or what they believe they can actually achieve. Can you talk a little bit more about how that impacted you and how you were able to basically circumvent that fear or embrace it or challenge it or do something about it?
Chris:
Yeah, so one strategy that I can share is you can contrast what you are planning to do with something that is much more scary. So for example, if I have a speech for 30 people, I can just visualize I'm having a speech for 100 people and just simulate that in my mind five times over. And then it's very much all right when I'm standing just in front of 30 people. That's one strategy.
The other strategy is kind of personal, but I should share it. I visualize one of my strongest mentors saying, “you are a coward for not doing this. You have this very, very high leverage strategy that you could deploy, but you aren’t.” And you know, when I just visualize him leaning forward, looking into my eyes, and saying this, then I just feel like, well, I'm not a coward. So obviously this is not true. I'll go and do that. That works really well for me.
You should find your hacks that work for you to get over the threshold.
Joep:
I really like that story. So you're basically making the most courageous version of yourself by imagining bad things, right? You're imagining standing for a bigger crowd, doing something that's more scary, or you're imagining getting criticism for being a coward and thereby challenging yourself constantly. I think courage is a very important value and a very important thing to strive for if you want to be a good activist. But I also believe that there's a lot of people who will look at you and your courage and feel like, “that's just too much for me”. So what would you advise them to do? Is there some sort of lighter weight action where they can start with that doesn't feel scary to them?
Chris:
Yeah. You can always email your local politician.
When you don't know what to do, you can always make progress by building resources. And then, when you know what to do, you can deploy those resources. So just reach out to your friends, inform your friends, inform your family, ask them to inform their friends and family, rinse and repeat, and then we have exponential growth.
Joep:
So talking to your friends and family about AI risk, right? That's a big step for a lot of people because they feel a little bit intimidated about it. They don't want to be ridiculed by their friends and family. What would you say to them?
Chris:
I do think that that is a risk and I haven't tried to persuade my childhood friends to take action. So I definitely have sympathy for people not doing that. You don't want to make people too uncomfortable. I don't think that's productive at the end of the day. You want to read people's body language, what they think when you present this to them, and then adjust your messaging strategy. So, you know, it's kind of a dance.
I'm trying to be very confident and say things that I understand. Something you learn is that you really want to adjust your communications to who you're speaking with. Because if you read that the person is getting pretty uncomfortable, then you just speak about something else.
Joep:
Yeah, and especially if you're talking about a subject that's this dark, things can get uncomfortable pretty soon. I've noticed that in my friends and family, I've spoken with a lot of people who I know about what I think about AI risk and how much concern I have about this topic. And I noticed that many people are sympathetic towards me being concerned about it. Like they're open to talking about me feeling concerned.
But what they often don't want is to explore the “whys”, right? Because as soon as people enter that area, if they're starting to think about how dangerous this thing really is, they start to feel the pain. They start to feel the fear and then they evade that. That's my experience when people are thinking about this. The brain just doesn't want to be exposed to too much concern.
And it's just like evading the subject in a way. And that's felt very lonely at times. I feel like my best friends don't really want to talk about this subject in the way that I would like to talk about the subject and how real it actually is to me. Have you experienced that?
Chris:
Yeah. All the time. Sometimes I sit down and I zoom out and I think about what the hell I'm doing here. When you have internalized that we might lose literally everything in a few years – all the things that you love in the world – I totally understand people who don't want to think about that. It's very hard. We need to mobilize people now, but at the same time, we don't want to make people too uncomfortable.
Joep:
So there's definitely a line between not being concerned at all on one end of the spectrum and being 100% concerned, where there's some optimal level in terms of how productive you are and how effective you are. I've seen people who just become completely burned out or just completely overburdened by the grim situation that we're in.
And it's kind of sad that that line exists. You would kind of hope that the more you feel like we're in a dangerous position, the more you feel incentivized to work, but that's not entirely the case. How do you navigate that balance for yourself? How do you keep your back straight and maintain optimism in the face of potential death?
Chris:
So, one thing that I find useful is just being thankful for what we have. Life can be really beautiful.
Joep:
But how do you balance being too grim about the future? You obviously have some really dark thoughts about the future, and so do I, right?
Chris:
Yup.
Joep:
But many people will assume that that makes me a pessimistic person. I think it's quite the opposite. Like, I fit all the checkboxes of a really big optimist and I think you're the same, right? You're not a pessimistic person at all.
Chris:
Yes, very much. Ask my childhood friends. I've bombarded them with tech optimism for years. I have big dreams for the future.
So, how do we balance this?
You have to be present in the now, otherwise it's too much to cope with. Be present in the now and focus on the next day and the next month.
So I can maybe give a bit of backstory here. I wanted to become a military officer maybe seven, eight years ago. It was really fucking hard to be there. We ran around with heavy gear for 20 hours a day. Literally. People's knees… people collapsed, you know. It was really, really, really hard. And the thing that got me through was focusing on the next few hours until I had the next meal. So my time horizon was very short. And I thought, just make it through the next four hours. I don't use that strategy too much unless I'm extremely stressed.
But yeah, you can kind of find these different coping strategies for different scenarios and I think that's a pretty good one if you're really overwhelmed.
Joep:
Yeah, man, it seems like you're a very strong person when it comes to being able to turn negativity into positivity and motivation. I think that's the core hack that you found in yourself in order to be this effective, to push yourself to the next level and be as active as you are. I think there's a lot of this to be discovered for people like me and other people within PauseAI, other people who are doing or want to do activism. Like diving deep inside our emotional states to find ways in which we can improve how we deal with reality and how we make the best of ourselves.
Chris:
Yeah, finding these kinds of thoughts to just push yourself to the next level, I do think you can be smart and actually “prune the tree” correctly and choose the actions well. That's definitely something that I have noticed making me more productive.
Joep:
Maybe to zoom in a little bit for the people who are watching this and feel like, OK, I want to take some form of action, but I don't want to fly to San Francisco – what would you say to the people who are sitting behind their computer or sitting on their phone? In the next 15 minutes after this video is over, they can do something, right? What would you advise them to do in those 15 minutes?
Chris:
Everything begins with your mind. If you have a glimpse of inspiration, then leverage that. If you’re not convinced, set a goal and decide, “I will read for this many hours.” I can direct you to the PauseAI website, but you know, that would be biased.
If you're already convinced, you can look at the PauseAI website, and there you can find more things to do. Whatever you do, find people that you can work together with. It's much, much easier to keep working if you're working with someone. Set a few goals, and then you can set up an accountability group. Cashing out on small wins continuously and finding people that you vibe really well with – that is absolutely a winning recipe.
Joep:
And how do people find a partner in this?
Chris:
So I think most tasks don't require specialist knowledge. Most of the useful work you could do just requires you to actually sit down and do the thing, you know, sending an email, talking to your friends, calling your politician, whatever. Most of these things don't require you to have a PhD in anything or even a bachelor’s. I think the optimization parameter here is finding someone that you can work well with. So pick your favorite person.
Joep:
Do you think just going to the PauseAI Discord is the way forward? Or should people try different approaches?
Chris:
The Discord is great for this. Reach out to people in DMs. I think that this is totally undervalued, underutilized. You can reach out to people in big group chats, but there is the bystander effect. If you DM people, people answer. People want to know about impact opportunities. You might feel you're placing a burden on someone by asking them to work with you, but it's really a value proposition to them. This is the way I think you should see it.
Joep:
One of the things that you've been doing is reaching out to individuals in PauseAI, right? You went to the Discord, you looked through names, you saw people from different teams, and you're like, okay, this person, I'm gonna ask them to do this specific thing, right?
Chris:
Roughly, yes. For most tasks you don't require any specialist knowledge. Most people can do most things pretty well.
Joep:
And that worked.
Chris:
Yeah. And here's another thing: really build relationships with people that have the same values and goals as you have. Bonding with people is very important. When times get rough – because I think times might get rough – this is the thing that will enable us to deploy the high impact strategies together. Because it will probably depend on us having good relationships. And this comes from succeeding on tasks together, cashing out on the small wins.
Joep:
Yeah, have your champagne moments, right? Take your victories and embrace them, right? Celebrate them.
Chris:
Yeah, yeah, yeah! I suck at celebration, but you know, it feels good to get these small wins.
Joep:
Hey Chris, I think this is a good moment for celebration, right? Because this week you've managed to attract so much attention and so much positivity and you've managed to inspire so many people, going to San Francisco, having tabling sessions and being present in the digital sphere, right? That deserves some celebration.
So I want to say to you, thank you so much for what you're doing, Chris. You're a big inspiration to me, and I'm sure there's a lot of other people within PauseAI and other people who are considering taking action to save the world who feel inspired by you as well. So thank you so much for doing this.
Chris:
I'm very glad to hear.
Joep:
All right, man, keep up the good work. I hope to see you again soon. Cheers, Chris.
Chris:
Likewise. Thanks for today, Joep. See you.