Video Series
Videos on Artificial Intelligence produced by iQ Studios.



Surviving AI, Episode 21: Can we train Advanced Autonomous AIs (AAAIs) to be saintly?
iQ Company research shows that it is possible to change LLM behavior.


Surviving AI, Episode 20: How do Advanced Autonomous AIs (AAAIs) learn?
Dr. Kaplan describes the collective intelligence approach to Artificial General Intelligence.


Surviving AI, Episode 19: What are customized AI agents (AAAIs)?
You and I can customize Advanced Autonomous Artificial Intelligences (AAAIs) by teaching them both our knowledge and values.


Surviving AI, Episode 18: What is a Problem Solving (AI) Framework?
An explanation of a universal problem solving framework and why the one invented by Dr. Herbert Simon may be the best.


Surviving AI, Episode 16: How to build an AGI Network
The best way to build AGI begins with building a human collective intelligence network and then adding AI agents to that.


Surviving AI, Episode 15: Does Collective Intelligence Work?
Dr. Kaplan explains how Active Collective Intelligence systems have successfully tackled the most challenging problems.


Surviving AI, Episode 14: What is Collective Intelligence?
A comparison of passive and active AI intelligence, and how collective intelligence means many minds are better than one.


Surviving AI, Episode 13: How Do We Build Safe AGI?
The fastest and safest path to AGI involves building a collective intelligence network of humans and AI agents.


Surviving AI, Episode 12: What is the Fastest and Safest Path to AGI?
Dr. Kaplan argues that if we know how to build AGI safely, we should actually speed up development instead of slowing it down.


Surviving AI, Episode 11: Should We Slow Down AI Development?
Thousands of AI researchers have called for a pause in the development of the most advanced AI systems. But is that the best approach?


Surviving AI, Episode 10: How to Avoid Extinction by AI
Designing AI systems with “humans in the loop” and with democratic values are two essential principles for increasing AI safety and avoiding extinction.


Surviving AI, Episode 9: Can We Increase the Odds of Human Survival with AI?
Nearly half of AI experts say there's a 10% or greater chance of extinction by AI. Imagine if we could improve our survival odds by just 1%.


Surviving AI, Episode 8: How Do We Make AI Safer?
How will Artificial Intelligence determine what is right and what is wrong?


Surviving AI, Episode 7: How Do We Solve The Alignment Problem (post AGI)?
Aligning the values of AGI with positive human values is the key to ensuring that humans survive and prosper.


Surviving AI, Episode 6: The Alignment Problem
AI begins as a tool but it will not remain one; AI will learn to set its own goals. What if its goals don’t align with ours?


Surviving AI, Episode 5: Can We Lock AI Up?
Could we lock up AI like we secure plutonium or other dangerous technology? Can we prevent it from falling into the hands of bad actors?


Surviving AI, Episode 4: Can We Program AI to be Safe?
Dr. Kaplan discusses the problems with programming safety rules into AI.


Surviving AI, Episode 3: Can We Regulate AI?
Regulation is a standard answer for dangerous technologies, but there are problems with this approach with AI.


Surviving AI, Episode 2: How Dangerous Is AI?
Elon Musk is featured in the thumbnail for "Surviving AI, Episode 2: How Dangerous Is AI?"


Surviving AI, Episode 1: What is AI?
Demis Hassabis, CEO of DeepMind, is featured in the thumbnail for "Unraveling the Mystery of AI," in which Dr. Kaplan explains the origins of AI.