Videos: AI Safety, AGI, and SuperIntelligence

Watch featured podcasts, presentations, and series to understand iQ Company’s novel approach to safer, more profitable SuperIntelligence.

The CTO Compass
AGENTS.md Won't Save You: Design AI Systems You Can Actually Control

Host Mark Wormgoor and Dr. Craig A. Kaplan discuss why the shift from AI copilots to autonomous agents demands a fundamental rethinking of enterprise architecture, including the limits of guardrails, the alignment problem, and the risks of monolithic black-box models. They offer tech leaders a practical framework for building safer, more resilient systems through democratic multi-agent design before autonomy outpaces human oversight.

AIM 2025 SuperIntelligence Keynote:
What's your p(doom)?

Craig A. Kaplan presents a keynote on Safe SuperIntelligence, examining limitations of current AI safety approaches and outlining an alternative architecture grounded in collective intelligence and human-centered design.

AI Safety:

AI Ethics, AGI, and Superintelligence

A 26-episode series hosted by Craig A. Kaplan exploring how advanced intelligent systems are designed, governed, and guided over time. Each episode runs approximately three minutes, building from foundational concepts to deeper questions of alignment, safety, and long-term risk, and drawing on insights from researchers and practitioners across industry and academia.
