We don’t know whether artificial intelligence will ever attain the level of artificial general intelligence, with the ability to accomplish virtually any goal, including learning. That is, as the title of Max Tegmark’s illuminating book (Knopf, 2017) puts it, we don’t know whether we will ever reach Life 3.0. Even so, as we enter the age of AI, it is important to think about what sort of future we want so “we can find shared goals to plan and work for.” “If a technologically superior AI-fueled civilization arrives because we built it, … we humans have great influence over the outcome—influence that we exerted when we created the AI.”
Tegmark posits three stages of life: biological evolution (Life 1.0), cultural evolution (Life 2.0), and technological evolution (Life 3.0). In Life 1.0, bacteria being a good example, both hardware and software are evolved rather than designed. In Life 2.0, our current stage as human beings, the hardware (DNA) is evolved but the software is largely designed, through learning. Life 3.0 will design both its hardware and its software. “In other words, Life 3.0 is the master of its own destiny, finally fully free from evolutionary shackles.”
Tegmark is a professor of physics at MIT and president of the Future of Life Institute, which advocates for beneficial AI and AI-safety research. Both efforts involve a heavy dose of ethical debate and decision making. For instance, should we build autonomous weapons, which select and engage targets without human intervention? In 2015 the author and a colleague wrote an open letter arguing against autonomous weapons; it was signed by over 3,000 AI and robotics researchers and 17,000 others.
Life 3.0 engages the reader in a wide range of future scenarios, from those where superintelligence peacefully coexists with humans (even if, in one scenario, as a zookeeper) to those where humanity goes extinct and is replaced by AIs (or by nothing, if we self-destruct). Tegmark admits that “there’s absolutely no consensus on which, if any, of these scenarios are desirable, and all involve objectionable elements. This makes it all the more important to continue and deepen the conversation around our future goals, so that we don’t inadvertently drift or steer in an unfortunate direction.”