EA - Parfit + Singer + Aliens = ? by Maxwell Tabarrok
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Parfit + Singer + Aliens = ?, published by Maxwell Tabarrok on October 12, 2022 on The Effective Altruism Forum.

Summary: If you
- Have a wide moral circle that includes nonhuman animals, and
- Have a low or zero moral discount rate,
then the discovery of alien life should radically change your views on existential risk.

A grad student plops down in front of their computer, where their code has been running for the past few hours. After several months of waiting, she has finally secured time on the James Webb Space Telescope to observe a freshly discovered exoplanet: Proxima Centauri D. “That’s strange.” Her eyes dart over the results. Proxima D’s short 5-day orbit means that she can get observations of both sides of the tidally locked planet. But the brightness of each side doesn’t vary nearly as much as it should. The dark side is gleaming with light.

[Image: Northern Italy at night]

The Argument

This argument requires a few assumptions:
1. Strong evidence of intelligent alien life on a nearby planet
2. Future moral value is not inherently less important than present moral value
3. Many types of beings contain moral value, including nonhuman animals and aliens

I will call people who have a Singer-style wide moral circle and a Parfit-style concern for the long-term future “Longtermist EAs.” Given these assumptions, let’s examine the basic argument given by Longtermist EAs for why existential risks should be a primary concern:
1. Start with assumption 2: the lives of future human beings are not inherently less important than the lives of present ones. Should Cleopatra have eaten an ice cream if doing so caused a million deaths today?
2. Humanity may last for a very long time, or may be able to greatly increase the amount of moral value it sustains, or both.
3. Therefore, the vast majority of moral value in the universe lies along these possible future paths where humanity does manage to last for a long time and support a lot of moral value.
4. Existential risks make it less likely or impossible to end up on these paths, so they are extremely costly and important to avoid.

But now introduce assumptions 1 and 3, and the argument falls apart. The link between the second and third points is broken when we discover another morally valuable species which also has a chance to settle the galaxy. Discovering aliens nearby means that there are likely billions of planetary civilizations in our galaxy. If, like Singer, you believe that alien life is morally valuable, then humanity’s future is unimportant to the sum total of moral value in the universe. If we are destroyed by an existential catastrophe, another civilization will fill the vacuum. If humanity did manage to preserve itself and expand, most of the gains would be zero-sum: won at the expense of another civilization that might have taken our place. Most of the arguments for caring about human existential risk implicitly assume a morally empty universe if we do not survive. But if we discover alien life nearby, this assumption is probably wrong, and humanity’s value-over-replacement goes way down. Holding future and alien life to be morally valuable means that, on the discovery of alien life, humanity’s future becomes a vanishingly small part of the morally valuable universe. In this situation, Longtermism ceases to be action-relevant.
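To make the “value-over-replacement” point concrete, here is a minimal back-of-the-envelope sketch. It is not from the original post: the function name and every probability and civilization count below are made-up assumptions for illustration only. It compares how much expected far-future value humanity’s survival adds in an otherwise empty universe versus a universe with billions of planetary civilizations that could settle the galaxy in our place.

```python
# Toy model (illustrative assumptions only): the far future realizes its full
# moral value if at least one morally valuable civilization survives to settle
# the galaxy. We compare humanity's marginal contribution in an empty universe
# vs. a crowded one.

def expected_future_value(p_human_survival: float,
                          n_other_civilizations: int,
                          p_other_survival: float,
                          value_if_settled: float = 1.0) -> float:
    """Expected far-future value, assuming it is realized unless every
    civilization (human and alien) fails to make it."""
    p_no_one_makes_it = (1 - p_human_survival) * (1 - p_other_survival) ** n_other_civilizations
    return value_if_settled * (1 - p_no_one_makes_it)

# Morally empty universe: humanity's survival is nearly all that matters.
alone = expected_future_value(0.5, n_other_civilizations=0, p_other_survival=0.0)
alone_if_we_fail = expected_future_value(0.0, n_other_civilizations=0, p_other_survival=0.0)

# Billions of planetary civilizations, each with a tiny chance of making it.
crowded = expected_future_value(0.5, n_other_civilizations=10**9, p_other_survival=1e-6)
crowded_if_we_fail = expected_future_value(0.0, n_other_civilizations=10**9, p_other_survival=1e-6)

print(f"Value of human survival, empty universe:   {alone - alone_if_we_fail:.6f}")
print(f"Value of human survival, crowded universe: {crowded - crowded_if_we_fail:.6f}")
```

Under these made-up numbers, the marginal value of human survival is large when the universe is otherwise empty and collapses toward zero once there are enough other civilizations likely to fill the vacuum, which is the sense in which humanity’s value-over-replacement falls.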
It might be true that certain paths into the far future contain the vast majority of moral value, but if there are lots of morally valuable aliens out there, the universe is just as likely to end up on one of these paths whether humans are around or not, so Longtermism doesn’t help us decide what to do. We must either impartially hope that humans get to be the ones tiling the universe or go back to considering the nearer-term effects of our actions as more important. Consider Parfit’s classic thought experiment: Option A: Peace O...
