Nick Bostrom calls artificial intelligence the modern-day Doomsday invention. Luminaries such as Bill Gates, Elon Musk, and Stephen Hawking agree with him. Why? Because the possibility of creating thinking machines raises a host of ethical issues.
For example, we’re designing software that works like us, that learns like us, and that we can even fall just a little bit in love with. We’re making computer programs that can kill like us, too.
In short, we’re creating the future in our image.
Artificial intelligence carries the promise of humanity at its best: the possibility of a better world. But what about humanity's destructiveness?
From automating the law, to sex bots, to predicting what will happen when we tinker with our own DNA, the possibilities for artificial intelligence are as diverse and contradictory as human nature itself.
On this episode of The Big Bang podcast we discuss some of these issues with our guest Ivan Rodriguez:
- Why do we want machines that think for themselves?
- What happens when our computers get smarter than we are?
- What do we fear about AI?
- Will machines eliminate us?
- What are the potential dangers of AI, and how might we address them?
- What is an ideal future between humans and AI?
- How might we ensure that we do things that are both legal and ethical?
- Why should we trust our future robots?
- How will machines know what we value?
- Are we driving ourselves to be more inhuman?
What are your thoughts on artificial intelligence?
Let us know on Twitter @jorgebarba, @gatopan and @adrianpedrin.
Like our weekly podcast? Subscribe to our YouTube channel or blog newsletter.
The Big Bang is a weekly podcast. Tune in every Tuesday for more discussions on what’s possible.
Intro audio is by Arturo Arriaga, outro audio is Candyland by Guy J.