C: Tell me a little bit about your college experience. How did you get interested in this work originally?
E: Good question. It kind of started for me in high school, actually. I took my first philosophy class when I was 16 or 17, and I loved it. One of the things we read in that class was an article by Peter Singer, who was a key figure in starting effective altruism, or at least in inspiring it in the first place. The article was called Famine, Affluence, and Morality. And that was kind of my first introduction to the ideas of effective giving, which is kind of how EA started originally, before it became about all this other longtermist stuff.
And I just really liked the idea of, you know, thinking about how and why and where we donate, and what the moral obligation is, whether it’s supererogatory or something that should actually be a moral obligation for people who live in a more affluent country and can survive and also afford to give back a decent amount of our money in a consistent way.
So that was kind of the first introduction, and then I forgot about it for a little bit. Then, when I was in college, I had a friend, who I knew because we lived in halls together, who got really involved with the effective altruism movement on campus. He really encouraged me to come along.
C: Can you remind me where you went to college?
E: Yes, I went to St Andrews in Scotland. I was also a philosophy major, so I feel like I was a usual target for that kind of thing. EA is very popular with the philosophy majors, and also with medical students and biology majors for some reason. So my friend encouraged me to come along to an EA meeting, and they had this mentorship program where you get paired with another person to do some reading over the summer, and then you have calls where you talk about it.
And I became really good friends with the guy that I was paired with for that. So yeah, that’s kind of a long-winded version of how I got involved in the first place. And then I went on to start the One for the World Club, which is how I ended up working there after college.
C: Oh you actually started it?
E: Yeah, well, I started the chapter at college.
C: Oh, got it. Can you tell me a little bit more about that organization and what attracted you to it back in college?
E: Yes. Well, One for the World is kind of interesting, because it’s very connected to EA culturally, but I wouldn’t say it’s a core organization of the movement; it was kind of on the outskirts. What One for the World does is run a bunch of chapters at universities in the US, the UK, Canada and Australia, and the goal is to convince students to take a pledge that once they graduate and start having an income, they’ll donate one percent or more – it usually was one percent – of their income to effective nonprofits chosen by a charity evaluator called GiveWell. GiveWell is one of the key original organizations of effective altruism.
So, yeah, that’s what One for the World does. And I got involved with them my senior year at St Andrews, because the friend I’d been paired with for that mentorship scheme through the EA club basically called me one day over the summer and asked if I wanted to start a chapter with him, and I thought it sounded really cool, so I did.
I was totally diehard about this club, willing to do whatever it took, because I really believed in the idea behind it: first of all, we were all students who were going to graduate from a good university, and a graduate like that is probably in a much better position to make an income after college than most people. And I felt really strongly that there should be some commitment, some obligation, to giving back, and to giving back in a way that was effective and based on the needs of the world, rather than on whatever charitable intervention felt closest to home or interested us.
So I was super involved in that for a while, and then a couple months after I graduated, One for the World ended up hiring me. I worked there for almost two years, basically coaching and doing community organizing work with the chapter leaders who were running all of the chapters at other universities. I was getting on calls with them all the time, trying to develop their leadership skills, and doing all that kind of work.
Interestingly, some of the chapter leaders were very interested or involved in EA, but a lot of them didn’t even really know what it was until they started working with us. So we were kind of their first introduction to it. And I think something kind of interesting that happened strategically at One for the World while I worked there was that we started off being much more connected to EA, wanting to use that network as a resource for both development and advertising, on the EA Forum and things like that. And then at some point there was a conscious decision to shift our communication so that we weren’t mentioning EA or outwardly advertising that we were connected to the movement. I think that had a lot to do with the scandals going on in the movement at the time.
C: I see. And what was the scandal? Was it Sam Bankman-Fried stuff or other stuff?
E: Yeah, the SBF stuff, but there were also a lot of weird sexual harassment allegations. Basically, I think a lot of men in particular in EA had taken up polyamory, you know, they were kind of avid advocates of polyamory, they thought it was the greatest thing ever, and that’s still going on in EA. And I think what ended up happening was, at a lot of these big conferences like EA Global, where people working in EA get together and there are lectures and you have a bunch of one-on-ones, some people were going around trying to convince people to have threesomes with them, making women kind of uncomfortable in that way.
C: Wow, I can totally imagine that would be gross. Was there a gender imbalance?
E: Yes, definitely. I think even still, EA is at least 80% straight white men, I want to say. And I think they are trying to get a little bit better about that, because they’re starting to realize it’s an issue. But even at St Andrews, there were not a ton of women in the club; most of the people on the leadership team were men.
One for the World wasn’t like that. By the time I started working there, there was one guy and everyone else was a woman. But yeah, I think we were different from lots of other EA orgs in many ways. And I think the way women were involved in EA, and still are, is that they’re mostly working on short-term, specific issues like global health and development, while the longtermism stuff is much more dominated by men.
C: Interesting. So, I mean, it sounds like you at One for the World separated yourselves from EA a little bit. But did you end up thinking it was still too embedded anyway, or what made you get disillusioned?
E: I think I lost a little bit of hope. You know, when I first got involved with EA, at least with the global health and development stuff, I really felt not only that it was a good idea in theory, but that it was also actually effective in practice.
And one thing I did start to feel, not that it’s not effective, was that EA is a very theory-based movement, full of philosophers and intellectuals who love to grapple with these interesting moral and philosophical questions.
But I started to feel like it was very, what’s the word, self-indulgent. There are a lot of people in EA who just wanted a legitimate reason or excuse to sit around and talk about these big questions, framed in a way that makes it feel like a real job, like they’re doing something good in the world, instead of just sitting in a room talking about philosophy.
C: So it was like a hobby that they wanted to have taken seriously.
E: Yeah, I think so. And they wanted to feel like they’re actually doing something good in the world, something that other people are going to take seriously. So yeah, that was a big realization for me, that it wasn’t really all I had cracked it up to be.
Also, there are a lot of interesting behavioral norms in the EA community that I disagreed with. For example, there’s this big emphasis on maximizing your time and on how you spend your marginal extra hour, whether it should be doing this or doing that. And I felt sometimes that that was kind of toxic behavior. There were higher-ups in the EA movement who would hire assistants, young students or recent graduates who really just wanted to work in EA and were willing to take any job to be part of the movement, and they would have them picking up their laundry, because they’d say, my marginal hour is better spent working on this AI issue than on doing my own laundry.
C: So it sounds like the EA work was not just an excuse to sit around, but an excuse not to do laundry.
E: Pretty much. There’s something so strange about people taking themselves and their time so seriously that they think about things like replaceability: is it better to have someone else do this thing, because my time is better spent doing that? Should I pursue this particular job if there’s someone else who could do it just as well or better than I could, in which case I should do something else? That really intense emphasis on how you use your time, at that level, is kind of weird and unhealthy.
C: Yeah, it sounds like very economics talk, very much competitive advantage, as in you want to optimize your competitive advantage versus optimizing your quality of life or your happiness or how much love you have.
E: Exactly. And yeah, I think there’s this intense emphasis on rationality and logic, which appealed to me at first, because I’m a very logical person and I was a philosophy student. I mean, in some ways a lot of the EA thinking is great. For example, when you’re thinking about where to donate, I think it’s great to put the emphasis on logic and on research, right? Let’s actually give money to the organizations that are cost-effective, where the money is going to make the most difference, instead of just the ones with the biggest branding budget, the ones you’ve heard of. In that way, I think the emphasis on logic is great. But then in a lot of ways they take it way too far, to the point where you’re losing things like emotion and empathy and passion.
C: Yeah. I call it spreadsheet thinking, myself. And I was introduced to it at a hedge fund. That’s exactly how you think through which project to work on: what’s the expected profit? And they do a lot of calculations. It seems scientific. It’s really not. But it makes people feel like they’re doing something logical. So I’m familiar with it. But going back to your earlier point, that you were worried One for the World was not effective. At the same time, if it was getting people to give money and then GiveWell was figuring out where to give it, that sounds like effectiveness. So was that not actually happening? Was it not actually collecting money, or was GiveWell doing a bad calculation? Or what was going on?
E: I do think One for the World is effective, and I think GiveWell is still doing really great work. It’s actually one of the only organizations in EA that has resisted this whole pull towards longtermism and is still doing that work, which I think is really great.
One for the World is effective to some extent. You know, we were bringing in money, and all of it was going directly to the nonprofits that GiveWell recommended. I think the struggle sometimes was that the model didn’t work as well as we thought it would. In theory it’s this great idea: you get all these young people to take the pledge while they’re still in school, and then they go on to have these careers, and some of them go into corporate law and end up making a ton of money. And if you get them to buy into this philosophy of donating consistently and regularly early on, it can have a really great impact on the future.
But unfortunately, sometimes students would make this commitment while they were in school, and then, a year or two out, they’re not making the kind of money they thought they would, or they didn’t actually have that much of a philosophical or emotional connection to the pledge. So the money starts coming out of their bank account, and they’re like, what is this? I’m canceling.
C: I see, yeah.
E: It was this weird catch-22. There were some people who trained students to run the EA groups at different schools. They were very effective at finding and recruiting a small number of students and getting them so bought into the idea of EA that every single one of those kids would go on to restructure their whole career path around it. But the downside was that they weren’t doing any actual tangible work to help people now, like donating or convincing people to take a giving pledge. The other side of that coin is that what we were doing was very tangible, research-backed work, which is why I liked it. But because we were just focused on trying to get people to take the pledge, maybe there wasn’t enough social, cultural, philosophical buy-in to the movement as a whole, and therefore some people would end up canceling later on.
C: Yeah. Got it. Do you think it would be fixable, if you could start over again and be in charge of how it works?
E: About how One for the World works?
C: Yeah, or do you think there’s something fundamentally problematic with the model?
E: I know that one thing OFTW has been doing more of is focusing on corporate giving. They’ve done some of that in the past, where they go into companies like Microsoft and Bridgewater and Bain. And I think that’s definitely an effective route, not only because those employees have a lot more money, so 1% means something very different than it does for an English major who just graduated, but also because they’re making a commitment to start giving now rather than taking a future-dated pledge.
C: Yeah. That makes sense. Can I ask you to comment on a theory I have about effective altruism?
E: Sure.
C: And this is in part because I’ve been heavily involved in thinking about AI policy with people in Washington, and a lot of them have told me that, especially recently, there have been embedded EA lobbyists, people offered up to work for free, because they’re essentially being paid by EA, to write up bills. I think of it as a kind of intentional effort by Silicon Valley to distract AI policymaking away from short-term problems. And I will just add that the folks I’ve talked to who are interested in EA, or maybe even part of EA, I don’t think they’re doing that intentionally. I kind of feel like there’s a way that young people are being taken advantage of, because these are young people, early 20s, maybe mid-20s typically. They’re the true believers, and they go to Washington and they want to do good, but I feel that the actual goal of the overall movement is to not do anything. Does that make sense to you?
E: Yeah, it totally does. I think that longtermism is just an excuse for a bunch of guys to legitimize the fun philosophical questions they think are really interesting, by framing them as thinking through a catastrophic event that would have an insane effect on the world and the future, and is therefore the only thing we should focus on, when actually it’s just really fun to think about. So I totally see that that’s part of what’s going on. And my experience, in a really scary way, is that there’s been a total shift in the last maybe three or four years away from tangible short-term issues, whether it’s AI or global health and development or animal advocacy, where we’re actually trying to make policy changes and support interventions in the here and now, helping human beings and animals that live on this earth now.
So there’s been a real shift away from that and towards talking about, you know, people who are going to be living 100, 200, 300 years from now, or even thousands of years from now. It’s this probability game: we have no way of knowing what life is going to be like that far out, but if there’s even a slim chance that AI takes over the world and tries to kill everyone, then working on AI safety is the most important thing anyone could spend their time doing. And for some reason EA thinks that means everyone should quit their job and work full-time on AI safety and alignment, and that logic doesn’t make a ton of sense to me.
C: By the way what is alignment in this context? I’ve heard that phrase but I don’t really understand it.
E: So I don’t really know very much about AI safety stuff at all, but alignment is working on making sure that what the humans who are creating and writing the code for an AI want, and what the AI wants, are the same thing. Does that make sense?
C: No. Because AI doesn’t want things.
E: Right, but maybe it will in the future. I guess that’s the goal.
C: I see. So does it have a premise that AI is going to have desires?
E: Yeah, that it might be sentient in the future. So I think the general idea with alignment is just like making sure that, you know, AI does exactly what we want it to do, and nothing more, and nothing less.
C: Ah, okay. Well that makes more sense to me.
E: But I’m really not the right person to ask.
C: Okay. My last question is a subpart of the same question: is it a deliberate attempt to get us to think about stupid things? Or is there just too much money in this, in part because of the success of the 1% giving thing, and that somehow causes downstream events, things like taking over philosophy departments?
I spoke to a philosophy class a few months ago, and I brought it up with a philosopher who was trained, I believe, in the UK. And he was just like, oh my god, the way those fuckers are taking over philosophy departments! And there’s no pushback from the universities, because the universities are so underfunded that they cannot say no. He made it out to be a real takeover attempt, or not even just an attempt.
E: Yeah, well, I think that EA is something that is growing and growing, and it kind of wants everyone to join. They’re very intentional about that, very, very strategic about how to get as many people involved as possible. And it is kind of scary when you look at it. Part of my work at One for the World was helping students think about how they could engage more students on their campus and get them to show up to events and clubs. So I would do a lot of research into what the Center for Effective Altruism was advising their student leaders to do to get people to come to events and to join their mentorship program. And it is kind of terrifying how intensely strategic they were, when we’re talking about 19-year-old students trying to figure out the best way to trick other 19-year-old students into coming to this event, and then joining the mentorship program, and blah, blah, blah. It’s like being brainwashed. And they’re sometimes very effective at it.
C: I mean, okay, like you just reminded me of my actual last question, which is, you said brainwashed. When we first met, you said cult. Can you riff on that a bit? What do you think is actually happening overall?
E: Yeah. And also, I want to be fair and say, you know, I don’t think this represents everyone in EA. I have definitely met a good chunk of people who got into this in the first place for the right reasons, and who are still there for the right reasons.
But I think there’s a really scary intentionality to getting people involved in this movement, not just involved, but really hooked. Because the idea of EA is that it’s also a lifestyle. It’s not just a cultural movement or a way into the job market; it’s about how you think about the world, how you spend your life, where you donate, what you do with your career, even how you eat, whether you’re vegan, how you source your food, all of those things. In order to involve people in a movement that’s so broad, that spans every aspect of your life and how you live it, you need to really get that philosophical buy-in. You need to sell people on it and get them to commit for life.
And so I think it starts at the university level. People start there, and they get completely indoctrinated into it. And there’s a very intense intentionality behind that, a strategy behind how they’re convincing 19-year-olds to change their career path and what they want to do with their lives and how they think and all of it, which is…
C: Just to clarify, because I don’t really know the answer to this: what career should they take? Are they told to go make as much money as possible so that their 1% goes further? What is the advice?
E: So what you just said is called earning to give. That’s actually not at all what they advise people to do anymore. That’s kind of how it was when EA started, in 2014 or whenever it was. Now they’re totally advising people against that. Instead, at this point I feel like it’s mostly encouraging people to go into AI. When I was involved with the EA group at St Andrews, there was this website called 80,000 Hours, which is, you know, another one of these big kingpins of the EA movement, and their role is to give career advice to anyone who wants to be an EA, about how to have a career that helps people in some way. If you look at their website, there’s actually a really slim selection of careers that you could go into.
They’re definitely encouraging some people to go into politics and represent EA values there. And they’re encouraging a lot of people to become researchers in AI and AI safety. And then there’s some stuff like pandemic preparedness, animal advocacy, global health and development. But basically there are just a few cause areas, and then a few types of roles within each of those cause areas.
And they’re trying to get everyone to go in one of those two directions. So that’s it.
C: Wow, that is really narrow.
E: I think that was a conflict for me. Another issue I have with EA is that, in order to be effective, it’s important to do high-impact work, but I also think that in doing that you have to be happy and have a fulfilled life, you know? Find some meaning in your life, which also means things like building community and making art. And in this philosophical framework of impactfulness and effectiveness, there was no room for love or community or empathy or creativity. It was just: these are the four or five areas where you can be the most impactful with your career, and if you’re not working in one of those, you’re not being impactful, or you aren’t making the most impact you could.
C: It’s a weird cult in the sense of, like, why AI? But when you put it together like you did, with here’s how you should eat, here’s what you work on, here’s how you should think, here’s what you shouldn’t think about, and then there’s that intense recruitment, and it’s not just, try this out, it’s a lifelong commitment. It really adds up to something kind of spooky.
E: It does, yeah. And again, I think there are incredible people involved with EA, and I do think it’s possible to pick out the parts of EA that are really good and helpful and smart without buying into the other parts. But I think that’s easier for adults who came into it later in life, and harder for people who entered the movement as students and were kind of brainwashed from the get-go; it’s harder for them to zoom out and see it that way.
I’m still donating 1% of my income, and I think it’s a really good thing to do. But I have no interest in going to the EA Global conferences or writing on the EA forum or any of that.
C: Are you keeping in touch with the other people who have left the company you worked at?
E: Yeah, actually, one of them was going to come stay with me in the city, which was nice, and we’re very close. I still have really good relationships with everyone I worked with, actually. And again, I think part of that comes down to the fact that we were EA-adjacent rather than a core part of the movement.
So I think that was helpful, because as we started to see all of these controversies happen, and watched EA turn in a very different direction from why we all first got involved, we were able to distance ourselves from it as an organization.
C: Yeah, sounds like it. Wow, well, is there anything that you want to add to anything that we’ve been talking about?
E: I could talk about this for many, many hours. I think I covered most of it.
C: I really appreciate talking to you. It’s very, very interesting. Thank you.