
Videos: AI Safety, AGI, and SuperIntelligence


View featured podcasts, presentations, and series to gain an understanding of iQ Company’s novel approach to safer and more profitable SuperIntelligence.

London Futurists: 
Safe SuperIntelligence via a Community of AIs and Humans


David Wood and Calum Chace speak with Craig A. Kaplan about how coordinated communities of humans and AI systems may enable safer paths toward SuperIntelligence. The discussion focuses on democratic oversight, collective intelligence, and long-term alignment.

AIM 2025 SuperIntelligence Keynote:
What's your p(doom)?


Craig A. Kaplan presents a keynote on Safe SuperIntelligence, examining limitations of current AI safety approaches and outlining an alternative architecture grounded in collective intelligence and human-centered design.

AI Safety:
AI Ethics, AGI, and Superintelligence


A 26-episode series hosted by Craig A. Kaplan exploring how advanced intelligent systems are designed, governed, and guided over time. Each episode runs approximately three minutes, and the series builds from foundational concepts to deeper questions of alignment, safety, and long-term risk, drawing on insights from researchers and practitioners across industry and academia.
