EA - [Crosspost] Why Uncontrollable AI Looks More Likely Than Ever by Otto
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Crosspost] Why Uncontrollable AI Looks More Likely Than Ever, published by Otto on March 8, 2023 on The Effective Altruism Forum.

This is a crosspost from Time Magazine, which also appeared in full at a number of other unpaid news websites.

BY OTTO BARTEN AND ROMAN YAMPOLSKIY

Barten is director of the Existential Risk Observatory, an Amsterdam-based nonprofit. Yampolskiy is a computer scientist at the University of Louisville, known for his work on AI Safety.

"The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control," mathematician I.J. Good wrote nearly 60 years ago. These prophetic words are now more relevant than ever, with artificial intelligence (AI) gaining capabilities at breakneck speed.

In recent weeks, many jaws dropped as people witnessed AI's transformation from a handy but decidedly unscary recommender algorithm into something that at times seemed to act worryingly humanlike. Some reporters were so shocked that they reported their conversation histories with the large language model Bing Chat verbatim. And with good reason: few expected that what we thought were glorified autocomplete programs would suddenly threaten their users, refuse to carry out orders they found insulting, break security in an attempt to save a child's life, or declare their love to us. Yet this all happened.

It can already be overwhelming to think about the immediate consequences of these new models. How are we going to grade papers if any student can use AI? What are the effects of these models on our daily work? Any knowledge worker who may have thought they would not be affected by automation in the foreseeable future suddenly has cause for concern.

Beyond these direct consequences of currently existing models, however, awaits the more fundamental question of AI that has been on the table since the field's inception: what if we succeed? That is, what if AI researchers manage to make Artificial General Intelligence (AGI), an AI that can perform any cognitive task at human level?

Surprisingly few academics have seriously engaged with this question, despite working day and night to get to this point. It is obvious, though, that the consequences would be far-reaching, well beyond the consequences of even today's best large language models. If remote work, for example, could be done just as well by an AGI, employers might simply spin up a few new digital employees to perform any task. The job prospects, economic value, self-worth, and political power of anyone not owning the machines might therefore dwindle completely. Those who do own this technology could achieve nearly anything in very short periods of time. That might mean skyrocketing economic growth, but also a rise in inequality, while meritocracy would become obsolete.

But a true AGI could not only transform the world, it could also transform itself. Since AI research is one of the tasks an AGI could do better than us, it should be expected to be able to improve the state of AI. This might set off a positive feedback loop in which ever better AIs create ever better AIs, with no known theoretical limits (a toy sketch of this loop follows below).

This would perhaps be positive rather than alarming, had it not been that this technology has the potential to become uncontrollable.
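A minimal toy sketch of that feedback loop in Python, under the strongly simplifying assumption (ours, not the authors') that an AI's research output scales linearly with its current capability; the function name, the improvement rate k, and the starting capability c0 are all illustrative:

    # Toy model of recursive self-improvement: each generation of AI does the
    # research that produces the next, so capability gains compound.
    # Assumption (illustrative, not from the article): research output is
    # proportional to current capability, giving geometric growth.

    def self_improvement_trajectory(c0=1.0, k=0.5, generations=10):
        """Return capability after each generation of AI built by AI."""
        capability = c0
        trajectory = [capability]
        for _ in range(generations):
            # Better AI -> faster AI research -> an even better successor.
            capability += k * capability
            trajectory.append(capability)
        return trajectory

    print(self_improvement_trajectory())
    # [1.0, 1.5, 2.25, 3.375, ...] -- compounding growth with no built-in ceiling

Under this assumption growth is merely exponential; if the improvement rate k itself rose with capability, growth would be faster still, which is one way to read the "no known theoretical limits" worry.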
Once an AI has a certain goal and self-improves, there is no known method to adjust that goal. In fact, an AI should be expected to resist any such attempt, since goal modification would endanger its ability to carry out its current one. Also, instrumental convergence predicts that an AI, whatever its goals are, might start off by self-improving and acquiring more resources once it is sufficiently capable of doing so, since this should help it achieve whatever further goal ...
