Video Series
Videos on Artificial Intelligence produced by iQ Studios.



How to Create AGI and Not Die
IFoRE / Sigma Xi Conference 2023: Dr. Craig Kaplan provides an overview of AGI and explains the safety challenges of current AI approaches.


Surviving AI, Episode 26 Short: Now is the time to make AI safe!
A short window exists during which humans can influence the development of AGI and SuperIntelligence.


Surviving AI, Episode 25 Short: Summary of AI safety
Narrow AI is rapidly developing into AGI, which will inevitably lead to SuperIntelligence much smarter and more powerful than humans.


Surviving AI, Episode 20 Short: How do AAAIs learn?
Constitutional AI is a tuning approach whereby researchers first train one AI on what is right or wrong based on a “constitution” written by humans.


Surviving AI, Episode 19 Short: What are customized AI agents (AAAIs)?
AAAIs are customized by interacting with them and teaching them both our knowledge and our values.


Surviving AI, Episode 18 Short: What is a problem solving (AI) framework?
Customizable AI agents (AAAIs) and the universal problem-solving framework are two key technical elements of the fastest and safest path to AGI.


Surviving AI, Episode 15 Short: Does Collective Intelligence work?
Online collective intelligence of humans and AI can achieve AGI safely.


Surviving AI, Episode 13 Short: How do we build safe AGI?
Dr. Kaplan argues that the fastest and safest path to AGI involves building a collective intelligence network of humans and AI agents.


Surviving AI, Episode 12 Short: What is the fastest and safest path to AGI?
AGI is potentially a “winner-take-all” scenario where the first AGI will likely dominate slower approaches.


Surviving AI, Episode 9 Short: Can we increase the odds of human survival with AI?
Nearly half of AI experts surveyed say there is a 10% or greater chance of human extinction from AI.


Surviving AI, Episode 8 Short: How do we make AI safer?
How will Artificial Intelligence determine what is right and what is wrong?


Surviving AI, Episode 5 Short: Can we lock AI up?
Could we lock up AI like we secure plutonium or other dangerous technology? Can we prevent it from falling into the hands of bad actors?


Surviving AI, Episode 3 Short: Can we regulate AI?
Regulation is a standard answer for dangerous technologies, but there are problems with this approach. Dr. Kaplan discusses three major challenges.


How to Create AGI and Not Die: IFoRE / Sigma Xi Conference Presentation 2023
A presentation by Dr. Craig A. Kaplan at the IFoRE / Sigma Xi Conference on 11/10/23.


Surviving AI, Episode 26: Now is the time to make AI safe!
There is a relatively short window during which humans can influence the development of AGI and SuperIntelligence.


Surviving AI, Episode 25: Summary of AI Safety
A recap of the main points of the Surviving AI series.


Surviving AI, Episode 24: More detail on how to build safe AGI
An overview of multiple pending patents describing in detail how to build safe and ethical Artificial General Intelligence (AGI).


Surviving AI, Episode 23: What makes the AGI network safe?
Three ways to increase the safety of the AGI network are explained.


Surviving AI, Episode 22: How does the AGI network learn?
There is more than one way to teach an AI.


Surviving AI, Episode 21: Can we train Advanced Autonomous AIs (AAAIs) to be saintly?
iQ Company research shows that it is possible to change LLM behavior.