EA - Are "Bad People" Really Unwelcome in EA? by 𝕮𝖎𝖓𝖊𝖗𝖆

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Are "Bad People" Really Unwelcome in EA?, published by 𝕮𝖎𝖓𝖊𝖗𝖆 on August 9, 2022 on The Effective Altruism Forum.

Epistemic Status

Written in a hurry while frustrated. I kind of wanted to capture my feelings in the moment and not sanitise them when I'm of clearer mind.

Context

This is mostly a reply to these comments:

Exhibit A

1) One way to see the problem is that in the past we used frugality as a hard-to-fake signal of altruism, but that signal no longer works.

Exhibit B

Agree. Fully agree we need new hard-to-fake signals. Ben's list of suggested signals is good. Other things I would add are being vegan and cooperating with other orgs / other worldviews. But I think we can do more as well as increase the signals. Other suggestions of things to do are:

Testing for altruism in hiring (and promotion) processes. EA orgs could put greater weight on various ways to test or look for evidence of altruism and kindness in their hiring processes. There could also be more advice and guidance for newer orgs on the best ways to look for and judge this when hiring. Decisions to promote staff should seek feedback from peers and direct reports.

Zero tolerance for funding bad people. Sometimes an org might be tempted to fund or hire someone they know, or have reason to expect, is a bad person or is primarily seeking power or prestige rather than impact. Maybe this person has relevant skills and can do a lot of good. Maybe on a naïve utilitarian calculus it looks good to hire them, as we can pay them for impact. I think there is a case for being heavily risk averse here and avoiding hiring or funding such people.

A Little Personal Background

I've been involved in the rationalist community since 2017 and joined EA via social osmosis (I rarely post on the forum and am mostly active on social media [currently Twitter]). I was especially interested in AI risk and x-risk mitigation more generally, and I still engage mostly with the existential security parts of EA.

Currently, my main objective in life is to help create a much brighter future for humanity (that is, I am most motivated by the prospect of creating a radically better world, as opposed to securing our current one from catastrophe). I believe strongly that such a future is possible (nothing in the fundamental laws prohibits it), and effective altruism seems like the movement through which I can realise this goal. I am currently training (learning maths; I will start a CS Masters this autumn and hopefully a PhD afterwards) to pursue a career as an alignment researcher.

I'm a bit worried that people like me are not welcome in EA.

Motivations

Since my early-to-mid teens, I've always wanted to have a profound impact on the world. It was how I came to grips with mortality. I felt that people like Newton, Einstein, etc. were immortalised by their contributions to humanity. Generations after their deaths, young children learn about their contributions in science class. I wanted that. To make a difference. To leave a legacy behind that would immortalise me. I had plans for the world (these changed as I grew up, but I never permanently let go of my desire to have an impact). Nowadays, it's mostly not a mortality thing (I aspire to [greatly] extended life), but the core idea of "having an impact" persists.
Even if we cure aging, I wouldn't be satisfied with my life if it were insignificant, if I weren't even a footnote in the story of human civilisation. I want to be the kind of person who moves the world.

Argument

Purity Tests Aren't Effective

I want honour and glory, status, and prestige. I am not a particularly kind, generous, selfless, or altruistic person. I'm not vegan, and I'd only stop eating meat when it becomes convenient to do so. I want to be affluent and would enjoy (significant) material comfort. Nonetheless, I feel that I am very deeply committed to making the world a much ...
