AI pessimists and AI optimists
There has been a lot of debate about the potential kill-all-humans dangers of Artificial Intelligence. Sparked by Nick Bostrom’s book Superintelligence, which provides a detailed and concerning account of these risks, the likes of Elon Musk, Stephen Hawking, Steve Wozniak and many AI researchers have voiced their concerns. For an easily accessible overview, I recommend Wait But Why’s two-parter.
Is it all doom ahead? Ben Goertzel, who is actively working on creating a super-intelligence with his OpenCog project, argues otherwise. His response is a long, well-written read, and worth the time to understand the AI optimists’ side of the debate.
The main arguments that stood out to me were:
- A super-intelligence will likely continually re-evaluate and re-adjust its goals, so the initial goal is not nearly as critical, or as likely to cause doom, as Bostrom makes it appear. Goertzel also rejects the claim that intelligence and goals are orthogonal: he argues that it seems highly unlikely for a super-intelligent system to adopt and stick with a “stupid” goal such as filling the universe with paper clips.
- Utility maximisation is overly simplistic and unlikely to work well for a super-intelligence. Consider how little human goals and behaviours align with utility theory, even in purely economic terms, and how poorly Utilitarianism works as an ethical framework for deciding what’s right.
- Creating a super-intelligence could be less a case of us-versus-them and more a convergence of the two: humans building the super-intelligence in a way that benefits them, to the point where it is no longer clearly distinguishable what is human and what is super-intelligence.
The first two are reasonable points, and the third showcases the potential benefits (which Bostrom also acknowledges but doesn’t focus on).
I find Goertzel’s view particularly interesting because he engages with Bostrom’s arguments less philosophically and more practically. Goertzel is actively working on his own approach to a super-intelligence and is optimistic that it is just a decade away. As such, he argues for an open approach, which is incompatible with Bostrom’s recommendation that work on creating a super-intelligence should not happen in the open, should be regulated, and should ideally be done by a small, isolated group of selected scientists.