AI Shorts | Surviving AI, Episode 8 Short: How do we make AI safer? How will Artificial Intelligence determine what is right and what is wrong?
AI Shorts | Surviving AI, Episode 7 Short: How do we solve the AI alignment problem? Aligning the values of AGI with positive human values is the key to ensuring that humans survive and prosper in a post-AGI world. Dr. Kaplan …
AI Shorts | Surviving AI, Episode 6 Short: What is the alignment problem? AI begins as a tool, but it will not remain one. AI will learn to set its own goals. What if its goals don’t align with ours? Could it mean …
AI Shorts | Surviving AI, Episode 5 Short: Can we lock AI up? Could we lock up AI like we secure plutonium or other dangerous technology? Can we prevent it from falling into the hands of bad actors?
AI Shorts | Surviving AI, Episode 4 Short: Can we program AI to be safe? The idea of programming safety rules into intelligent systems dates back at least to the science fiction author Isaac Asimov and his “laws of robotics.”
AI Shorts | Surviving AI, Episode 3 Short: Can we regulate AI? Regulation is a standard answer for dangerous technologies, but there are problems with this approach. Dr. Kaplan discusses three major challenges.
AI Shorts | Surviving AI, Episode 2 Short: How dangerous is AI? Dr. Kaplan discusses the existential threat posed by AI, with clips from Geoffrey Hinton (former Google Fellow) and Jerry Kaplan (Lecturer at Stanford University).