In this episode, Gabe and Dan have the opportunity to chat with Daniel Dewey about artificial intelligence (AI) and intelligence explosion. Daniel is a researcher at the Future of Humanity Institute, researching artificial intelligence, reinforcement learning, and how machines could have values.
Check out Daniel’s TEDx talk: The long-term future of AI (and what we can do about it)
Some links to what we talked about:
- Asimov’s three laws of robotics
- Learning algorithms
- The power of intelligence
- AI already powers much of the world around you. See: self-driving cars, stock trading (they don’t always do well), medical diagnosis, and plenty of other industries
- Intelligence explosion
- The simulation argument; physicists trying to detect the simulation
- Humans don’t care about ants, and the ants get by just fine until humans want to build something on top of their nest, or destroy their universe by turning it into art.
- Bad predictions about AI, by Stuart Armstrong
- Steve Omohundro explains how easily an AI with mundane goals can go wrong in his TEDx talk
- So AI might turn out poorly for humans… How can we make a friendly AI?
- AI might also turn out unimaginably good; we just have to be smart about it.
- The Future of Humanity Institute, the Machine Intelligence Research Institute, and the Centre for the Study of Existential Risk
The book we mentioned in the episode:
- Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
Links to other people whose cool research came up in the discussion:
- Eliezer Yudkowsky
- Robin Hanson
- Eric Drexler and exploratory engineering
- Nick Bostrom
- Steve Omohundro
Thanks for listening!