AI Safety

The Surviving AI series provides an overview of developments in Artificial Intelligence and features many respected AI researchers and leaders in the field.

Surviving AI, Episode 1: What is AI? Learn about the origins of AI, Narrow AI, and how AI is developing into AGI and ultimately into Superintelligence.

Surviving AI, Episode 2: How dangerous is AI? The existential threat posed by AI.

Surviving AI, Episode 3: Can we regulate AI? Three major challenges to regulating AI.

Surviving AI, Episode 4: Can we program AI to be safe? The problems with programming safety rules into intelligent systems.

Surviving AI, Episode 5: Can we lock AI up? The challenges of trying to contain the spread of advanced AI.

Surviving AI, Episode 6: What is the alignment problem? AI begins as a tool, but it will not remain one once it learns to set its own goals.

Surviving AI, Episode 7: How do we solve the AI alignment problem? Two things that are needed to solve this problem.

Surviving AI, Episode 8: How do we make AI safer? How will AI determine what is right and what is wrong?

Surviving AI, Episode 9: Can we increase the odds of human survival with AI? How to improve humanity's survival odds.

Surviving AI, Episode 10: How to avoid extinction by AI. Designing AI systems with humans in the loop is essential.

Surviving AI, Episode 11: Should we slow down AI development? The fastest path to AGI is the safest path.

Surviving AI, Episode 12: What is the fastest and safest path to AGI? Some characteristics of the safest and fastest path to AGI.

Surviving AI, Episode 13: How do we build safe AGI? The blueprint involves a collective intelligence network of humans and AI agents.

Surviving AI, Episode 14: What is Collective Intelligence? The difference between active and passive collective intelligence.

Surviving AI, Episode 15: Does Collective Intelligence work? The online collective intelligence of humans and AI can achieve AGI safely.

Surviving AI, Episode 16: How to Build an AGI Network. AGI begins with building a human collective intelligence network and adding AI agents.

Surviving AI, Episode 17: What is a human CI network? Building knowledgeable, ethical AGI requires as many human brains as possible.

Surviving AI, Episode 18: What is a Problem-Solving Framework? The framework invented by Dr. Herbert A. Simon may be the best.

Surviving AI, Episode 19: What are customized AI agents? Customized AAAIs can be taught with human knowledge and values.

Surviving AI, Episode 20: How do AAAIs learn? Common approaches include training, tuning, and prompting.

Surviving AI, Episode 21: Can we train AAAIs to be saintly? It is possible to change an LLM’s behavior to reflect positive human values.

Surviving AI, Episode 22: How does the AGI network learn? There is more than one way to teach an AI.

Surviving AI, Episode 23: What makes the AGI network safe? Three ways to increase the safety of AGI.

Surviving AI, Episode 24: More detail on how to build safe AGI. Pending patents on how to build safe and ethical AGI.

Surviving AI, Episode 25: Summary of AI Safety. Recapping the main points of the Surviving AI series.

Surviving AI, Episode 26: Now is the time to make AI safe! There is a relatively short window to influence AGI development.