EA - You should consider launching an AI startup by Joshc

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You should consider launching an AI startup, published by Joshc on December 8, 2022 on The Effective Altruism Forum.

Edit: I've noticed I've been getting a lot of downvotes. If you want to reduce the probability that people go this route (myself included), it would be helpful if you voiced your disagreements. Also, opinions are my own.

The prevailing view in this community is that we are talent constrained, not resource constrained. Earning to give has gone out of fashion. Now, community builders mostly push people towards AI research or strategy.

Maybe this is a mistake. Or, if it made sense previously, perhaps we should update in light of recent events.

We could use a lot more money

AI Safety researchers will need a lot of money for compute later...

Given how expensive AI research is becoming, research orgs like Redwood Research and the Center for AI Safety will need a lot of funding in the future. Where is that money going to come from? How are they going to fine-tune the latest models when OpenAI is spending billions? If we want a favorable safety-capabilities balance, there needs to be a favorable safety-capabilities funding balance!

Money can be converted to talent

Currently, the primary recruitment strategy appears to be persuading people to take X-risk arguments seriously. But not many people change their careers because of abstract arguments. Influencing incentives seems more promising to me: first for directly moving talent, but also for building safety culture. The more appealing it is to work on AI Safety, the more willing people will be to engage with the arguments. Research incentives can be shifted in a variety of ways using money:

- Fund AI Safety industry labs. If there are more well-paying AI safety jobs, people will seek to fill them.
- Build a massive safety compute cluster. Academics feel locked out: they can barely fine-tune GPT-2-sized models while industry models crush every NLP benchmark. Offering compute for safety research is a golden opportunity to shift incentives. Academia is a wellspring of ML talent; if safety dominates there, it will trickle into industry.
- Competitions. The jury is still out on whether you can use ungodly amounts of money to motivate talent, but the fact that you can always raise the bounties makes competitions scalable.
- Grants. A common way to grow a research field is to simply fund the research you would like to see more of (i.e., run RfPs). Apparently, these standard interventions tend to work. Accountability mechanisms have to be solid, but government orgs like DARPA appear to be good at this.
- Fellowships. Many professors could take on more graduate students if they had funding for them. For the small cost of $100,000 per year (one-third of that in Canada), you can essentially 'hire' a graduate student to produce AI safety research. Graduate students are often more capable than industry research engineers and even come with free mentorship (a professor!).

In addition to building new safety labs, we can start converting existing ones and giving them industry-level resources.

'Talent' might not even be an important input in the future (relative to resources)

The techniques in Holden's "How might we align transformative AI if it's developed very soon?" are generally more resource-dependent than talent-dependent.
This is consistent with a general trend in ML research towards engineering and datasets, and away from small numbers of people having creative insights. The most effective ways to select against deception could involve extensive red-teaming (e.g. 'honey pots'), or carefully constructing datasets / environments that don't produce instrumental self-preservation tendencies early in the training process. The most effective ways to guard against distribution shift could be to test AIs in a wide variety of scenarios...
