EA - AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now, by Greg Colbourn

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now, published by Greg Colbourn on May 2, 2023 on The Effective Altruism Forum.

Content note: discussion of a near-term, potentially hopeless life-and-death situation that affects everyone.

TL;DR: AGI is basically here. Alignment is nowhere near ready. We may only have a matter of months to get a lid on this (strictly enforced global limits on compute and data) in order to stand a strong chance of survival. This post is unapologetically alarmist because the situation is highly alarming. Please help. Fill out this form to get involved. Here is a list of practical steps you can take.

We are in a new era of acute risk from AGI

Artificial General Intelligence (AGI) is now in its ascendancy. GPT-4 is already ~human-level at language and showing sparks of AGI. Large multimodal models – a single AI handling text, images, audio, video, VR/games, and robotic manipulation – will arrive very soon (from Google DeepMind) and will be ~human-level at many things: physical as well as mental tasks, blue-collar as well as white-collar jobs. It's looking highly likely that the current paradigm of AI architecture (foundation models) basically just scales all the way to AGI. These things are "General Cognition Engines".

All that stops them from being even more powerful is spending on compute. Google and Microsoft are worth $1-2T each, and $10B can buy ~100x the compute used for GPT-4 (public estimates put GPT-4's training compute cost on the order of $100M, so $10B is roughly a hundred times that). Think about this: it means we are already well into hardware overhang territory.

Here is a warning written two months ago by people working at the applied AI Alignment lab Conjecture: "we are now in the end-game for AGI, and we (humans) are losing". Things are now worse. It's looking like GPT-4 will be used to meaningfully speed up AI research, finding more efficient architectures and therefore reducing the cost of training more sophisticated models.

And then there is the reckless fervour of plugin development, making proto-AGI systems more capable and agent-like, to contend with. In short succession after GPT-4, OpenAI announced the ChatGPT plugin store, and there has been great enthusiasm for AutoGPT. Adding planners to LLMs (known as LLM+P) seems like a good recipe for turning them into agents. One way of looking at this is that the planners and plugins act as the System 2 to the underlying System 1 of the general cognitive engine (the LLM); a minimal sketch of this loop is included below. And here we have agentic AGI. There may not be any secret sauce left.

Given the scaling of capabilities observed across the progression from GPT-2 to GPT-3 to GPT-3.5 to GPT-4, the next generation of AI could well end up superhuman. I think most people here are aware of the dangers: we have no idea how to reliably control superhuman AI or make it value-aligned (enough to prevent catastrophic outcomes from its existence). The expected outcome from the advent of AGI is doom. This is in large part because AI Alignment research has been completely outpaced by AI capabilities research and is now years behind where it needs to be. To allow Alignment time to catch up, we need a global moratorium on AGI, now.

A short argument for uncontrollable superintelligent AI happening soon (without urgent regulation of big AI)

This is a recipe for humans driving themselves extinct, and it appears to be playing out along the mainline of future timelines. Either:

- GPT-4 + curious (but ultimately reckless) academics -> more efficient AI -> next-generation foundation model AI (which I'll call NextAI for short); or
- Google DeepMind just builds NextAI (they are probably training it already).

Then: NextAI + planners + AutoGPT + plugins + further algorithmic advancements + gung-ho humans (e/acc etc.) = NextAI2 in short order. Weeks, even. Access to compute for training is not a bottleneck because that cyborg syste...
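To make the "planner as System 2 over an LLM as System 1" framing above concrete, here is a minimal, purely illustrative Python sketch of such an agent loop. Everything in it is a stand-in assumption rather than any real system: call_llm is a stub in place of a real model API, and the two "plugins" are toy functions; no actual library or product API is depicted.

```python
# Purely illustrative sketch (not any real system): a planner loop (System 2)
# driving a stubbed LLM (System 1) and dispatching each step to a "plugin".

from typing import Callable, Dict, List


def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call (System 1: one-shot cognition)."""
    # A real agent would send `prompt` to a model here; we return a canned plan.
    return "1. search: current AI governance proposals\n2. finish: summarise findings"


# Toy "plugins" the planner can invoke (analogous to tools in a plugin store).
TOOLS: Dict[str, Callable[[str], str]] = {
    "search": lambda arg: f"(stub search results for '{arg}')",
    "finish": lambda arg: f"(final answer based on: {arg})",
}


def plan(goal: str) -> List[str]:
    """System 2: ask the LLM to decompose a goal into 'tool: argument' steps."""
    raw = call_llm(f"Break this goal into numbered 'tool: argument' steps: {goal}")
    return [line.split(". ", 1)[1] for line in raw.splitlines() if ". " in line]


def run_agent(goal: str) -> None:
    """The outer loop that turns a passive LLM into an agent: plan, then act."""
    for step in plan(goal):
        tool, _, argument = step.partition(": ")
        action = TOOLS.get(tool, lambda a: f"(no such tool: {tool})")
        print(f"{step} -> {action(argument)}")


run_agent("research AI risk and summarise it")
```

The design point the sketch tries to convey is how little glue is needed: a loop that asks the model for a plan and dispatches each step to a tool already behaves like an agent, which is why the post argues there "may not be any secret sauce left".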
