EA - We should say more than “x-risk is high” by OllieBase

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We should say more than “x-risk is high”, published by OllieBase on December 16, 2022 on The Effective Altruism Forum.

This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished!

Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.

Epistemic status: outlining a take that I think is maybe 50% likely to be right. Also on my blog.

Some people have recently argued that, in order to persuade people to work on high-priority issues such as AI safety and biosecurity, effective altruists only need to point to how high existential risk (x-risk) is, and don’t need to make the case for longtermism or broader EA principles. E.g.:

Neel Nanda argues that if you believe the key claims of "there is a >=1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime", this is enough to justify the core action-relevant points of EA.

AISafetyIsNotLongtermist argues that the chance of the author dying prematurely because of AI x-risk is sufficiently high (~41%, conditional on their death in the subsequent 30 years) that the pitch for reducing this risk need not appeal to longtermism.

The generalised argument, which I’ll call “x-risk is high”, is fairly simple:

1) X-risk this century is, or could very plausibly be, very high (>10%).
2) X-risk is high enough that it matters to people alive today - e.g. it could result in their premature death.
3) The above is sufficient to motivate people to take high-priority paths to reduce x-risk. We don’t need to emphasise anything else, including the philosophical case for the importance of the long-run future.

I think this argument holds up. However, I think that outlining the case for longtermism (and EA principles more broadly) is better for building a community of people who will reliably choose the highest-priority actions and paths to do the most good, and that this is better for the world and for keeping x-risk low in the long run. Here are three counterpoints to only using “x-risk is high”:

Our situation could change

Trivially, if we successfully reduce x-risk or, after further examination, determine that overall x-risk is much lower than we thought, “x-risk is high” loses its force. If top talent, policymakers or funders convinced by “x-risk is high” learn that x-risk this century is actually much lower, they might move away from these issues. This would be bad because any non-negligible amount of x-risk is still unsustainably high from a longtermist perspective.

Our priorities could change

In the early 2010s, the EA movement was much more focused on funding effective global health charities. What if, at that time, EAs had decided to stop explaining the core principles of EA, and instead made the following argument, “effective charities are effective”?

1) Effective charities are, or could very plausibly be, very effective.
2) Effective charities are effective enough that donating to them is a clear and enormous opportunity to do good.
3) The above is sufficient to motivate people to take high-priority paths, like earning to give. We don’t need to emphasise anything else, including the case for effective altruism.

This argument is probably different in important respects to “x-risk is high”, but it illustrates how the EA movement could have “locked in” its approach to doing good if it had made this argument. If we had started using “effective charities are effective” instead of explaining the core principles of EA, it might have taken a lot longer for the EA movement to identify x-risks as a top priority.

Our pri...
