

iQ Company is committed to sharing its research and inventions with the global community to benefit humanity and our planet. Here you can find some of iQ's research publications, patents, and videos related to the design of intelligent systems over the past three decades.
Publications

2017
Kaplan, C.A.
Calculating PredictWallStreet's Profit Results
White paper
Santa Cruz: PredictWallStreet, LLC.

2013
Kaplan, C.A.
Investing with PredictWallStreet Update

2012
Kaplan, C.A.
The Wisdom and Madness of Crowds

2001
Kaplan, C.A.
Collective Intelligence: Price forecasting

2001
Kaplan, C.A.
Forecasting stocks: Implications for Global Brain

2001
Kaplan, C.A.
Collective Intelligence: Stock price forecasting
White paper
Santa Cruz: iQ Company

1999
Kaplan, C.A.
Requirements for a Decision Support
White paper
Los Angeles: LAUSD

1999
Kaplan, C.A.
User Requirements for Decision Support
White paper
Los Angeles: LAUSD

1998
Kaplan, C.A., Fenwick, J., and Chen, J.
Adaptive Hypertext Navigation

1995
Kaplan, C.A.
Designing Effective Surveys
White paper
San Jose: CCI

1995
Kaplan, C.A.
Strategic Quality Partnerships
White paper
San Jose: CCI

1994
Kaplan, C.A.
Principles of Re-engineering
White paper
San Jose: CCI

1994
Kaplan, C.A.
Evaluating Customer Satisfaction – Before It’s Too Late
White paper
San Jose: CCI

1994
Kaplan, C.A.
User Interface Evaluation Skills
White paper
San Jose: IBM

1994
Kaplan, C.A.
Usability by Design
White paper
San Jose: IBM

1993
Kaplan, C.A., Fenwick, J., and Chen, J.
Adaptive Hypertext Navigation Based on User Goals and Context

1991
Kaplan, C.A., Wolff, G. J., & Fenwick, J.R.
HYPERFLEX: An adaptive hypertext system
Keynote Address
IBM, Endicott, NY

1990
Kaplan, C.A., Wolff, G.J., Isa, B.S., & Eldredge, F.L.
Defining and indicating links in hypertext systems
Human Factors Technical Report, HFC-77, IBM, Santa Teresa Labs., San Jose, CA

1989
Simon, H. A., & Kaplan, C. A.
Foundations of cognitive science

1988
Kaplan, C.A.
AHA!: A connectionist perspective on problem solving
Videos
How to Create AGI and Not Die (Conference Presentation)
A presentation by Dr. Craig A. Kaplan delivered at the IFoRE / Sigma Xi Conference on November 10, 2023.
Surviving AI, Episode 26: Now is the time to make AI safe!
There is a relatively short window during which humans can influence the development of AGI and SuperIntelligence.
Surviving AI, Episode 25: Summary of AI Safety
A recap of the main points of the Surviving AI series.
Surviving AI, Episode 24: More detail on how to build safe AGI
A brief overview of multiple pending patents describing in detail how to build safe and ethical AGI.
Surviving AI, Episode 23: What makes the AGI network safe?
Three ways to increase the safety of AGI.
Surviving AI, Episode 22: How does the AGI network learn?
AI behavior is hard to predict and potentially dangerous. Dr. Kaplan explains that it is possible to combine elements of the old-school approach with the new approach to create safer and better AGI.
Surviving AI, Episode 21: Can we train Advanced Autonomous AIs (AAAIs) to be saintly?
A study suggests that, at relatively little cost and with relatively little data, it is possible to change an LLM's behavior to make it more like that of a "saintly" person.
Surviving AI, Episode 20: How do Advanced Autonomous AIs (AAAIs) learn?
In the collective intelligence approach to AGI, Dr. Kaplan first distinguishes between learning at the network level, where many individual human and AI agents cooperate, and learning at the level of the individual AI agent (aka AAAI).
Surviving AI, Episode 19: What are customized AI agents (AAAIs)?
You and I can customize Advanced Autonomous Artificial Intelligences (AAAIs) by interacting with them, teaching them both our knowledge and our values.
Surviving AI, Episode 18: What is a Problem Solving Framework?
What a universal problem-solving framework is, and why the one invented by Nobel laureate Dr. Herbert A. Simon (with Allen Newell) may be the best.
Surviving AI, Episode 17: What is a human CI network?
Human collective intelligence networks already exist today. Dr. Kaplan argues that systems like these can be stitched together to form the fabric of safe Artificial General Intelligence (AGI).
Surviving AI, Episode 16: How to build an AGI Network
The best way to build AGI begins with building a human collective intelligence network and then adding AI agents to it. Dr. Kaplan calls these AI agents Advanced Autonomous Artificial Intelligences, or AAAIs.
Surviving AI, Episode 15: Does Collective Intelligence Work?
All human culture is built on Collective Intelligence. Dr. Kaplan argues that the online collective intelligence of humans and AI can achieve AGI safely.
Surviving AI, Episode 14: What is Collective Intelligence?
Dr. Kaplan draws a distinction between Active and Passive Collective Intelligence, arguing that Active Collective Intelligence is needed for safe AGI.
Surviving AI, Episode 13: How Do We Build Safe AGI?
Dr. Kaplan argues that the fastest and safest path to AGI involves building a collective intelligence network of humans and AI agents.
Surviving AI, Episode 12: What is the Fastest and Safest Path to AGI?
While many are calling for a pause in AI development, Dr. Kaplan argues that if we know how to build AGI safely, we should actually speed up rather than slow down.
Surviving AI, Episode 11: Should We Slow Down AI Development?
Dr. Kaplan argues that the fastest path to artificial general intelligence (AGI) is actually the safest path.
Surviving AI, Episode 10: How to Avoid Extinction by AI
Dr. Kaplan explains that designing AI systems with "humans in the loop" and building democratic values into them are two essential principles for increasing AI safety and avoiding extinction by AI.
Surviving AI, Episode 9: Can We Increase the Odds of Human Survival?
Nearly half of AI experts surveyed say there is a 10 percent or greater chance of extinction by AI. Imagine if we could improve our survival odds by even one percent.
Surviving AI, Episode 8: How Do We Make AI Safer?
How will Artificial Intelligence determine what is right and what is wrong? Dr. Kaplan explains that it is impossible to rationally derive values. Therefore, AI will look to humans to determine right from wrong.
Surviving AI, Episode 7: How Do We Solve the Alignment Problem?
Dr. Kaplan explains the two things that we need to do to solve the Alignment Problem.
Surviving AI, Episode 6: The Alignment Problem
AI begins as a tool, but it will not remain one. AI will learn to set its own goals. What if its goals don't align with ours? Could it mean the extinction of humans?
Surviving AI, Episode 5: Can We Lock AI Up?
Could we lock up AI like we secure other dangerous technologies?
Surviving AI, Episode 4: Can We Program AI to be Safe?
Dr. Kaplan discusses the problem of programming safety rules into intelligent systems.
Surviving AI, Episode 3: Can We Regulate AI?
Regulation is a standard answer for dangerous technologies, but there are problems with this approach. Dr. Kaplan discusses three major challenges to regulating AI.
Surviving AI, Episode 2: How Dangerous Is AI?
How dangerous is AI? With 48% of AI experts predicting a 10% or greater chance of human extinction by AI and tens of thousands calling for a pause in advanced AI research, it feels like we are on a runaway train.
Surviving AI, Episode 1: What is AI?
Learn about the origins of AI, the "Narrow AI" systems we have today, and how AI is developing first into Artificial General Intelligence (AGI) and ultimately into SuperIntelligence.
Is AI Safe (and Ethical)? Will AI make humans extinct?
Dr. Craig Kaplan explores the current state of AI safety and ethics by asking whether GPT would run over a child to save the occupants of a self-driving car. Drawing on research from other scientists, he examines what humans do in these sorts of ethical dilemmas and discusses some of the techniques for teaching ethics to AI.
Dr. Craig Kaplan, iqco.com founder, discusses #ChatGPT #AI #Academia
Artificial Intelligence, Artificial General Intelligence, ChatGPT, Consciousness and AI, Singularity, AI Ethics, and book recommendations are just a few of the topics covered in this interview of Dr. Craig Kaplan by Dr. Jamshed Arslan.
A Very Deep Dive Into AI: Is Everything We Know Coming to an End?
Artificial Intelligence, Artificial General Intelligence, AI Ethics, ChatGPT, and AI investing ideas are just some of the topics covered in this in-depth interview of Dr. Craig Kaplan by the Family Office Association.
Artificial Intelligence: Past, Present, and Future
Dr. Craig Kaplan discusses Artificial Intelligence: the past, present, and future. He explains how the history of AI, in particular the evolution of machine learning, holds the key to understanding the future of AI.
How to Create AGI and Not Die
The fastest path to AGI is the safest path.
Episode 1: The Evolution of Planetary Intelligence
A human brain thinks. An entire planet worldthinks. Earth is evolving a global brain. It is the most important development in the 4.5-billion-year history of our planet.
Episode 2: Human Intelligence
To understand Planetary Intelligence, it's helpful to first understand human intelligence. That's because a Global Brain has functional components similar to those of a human brain.
Episode 3: Collective Intelligence
In Episode 3 of the 6-part worldthink series, Dr. Craig Kaplan explains the critical role that collective intelligence has played in the development of human culture and the rise of the Global Brain.
Episode 4: Artificial Intelligence
Dr. Kaplan traces the rise of AI from the naming of the field in 1956 to today's explosion of machine learning systems.
Episode 5: General Artificial Intelligence
Dr. Craig Kaplan explains why General Artificial Intelligence (GAI), sometimes also called Artificial General Intelligence (AGI), remains the "holy grail" of AI research.
Episode 6: WorldThink is Based in Love
Artificial Intelligences, even though they will become trillions of times smarter than humans, cannot logically derive values.
Patents: How to Build Safe AGI
Dr. Kaplan has multiple pending patents (below) describing in detail how to build safe and ethical Artificial General Intelligence (AGI). As of June 2023, iQ Company is making the pending patents available, royalty-free, to encourage the development of safe AGI before it is too late (this could change in the future). Full invention disclosures are also available to interested parties who email a request to info@iqco.com.
Advanced Autonomous Artificial Intelligence (AAAI) is a set of systems and methods for developing AGI and SuperIntelligent Artificial General Intelligence (collectively "AGI") in a rapid and safe manner for the benefit of humankind.
This invention enables ethical and safe AGI to emerge from a network of human and AI problem solvers. The AI problem solvers, Advanced Autonomous Artificial Intelligences (AAAIs), are customized by individual users to accomplish tasks and/or earn money on behalf of users.
The preferred implementation of AGI is the fastest method for achieving AGI because it begins with a network of human problem-solving agents who, by definition, can perform any intellectual task as well as or better than the average human.
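To make the architecture sketched in these disclosures more concrete, here is a minimal Python illustration of a collective intelligence network in which AI agents are customized by their owners and no AI proposal is acted on without human approval. It is a sketch under assumptions, not an implementation from the patents: the names HumanAgent, AAAI, CollectiveIntelligenceNetwork, learn_value, and approve are hypothetical, chosen purely for exposition.

```python
# A minimal sketch, under assumptions, of the network described above:
# human problem solvers plus AI agents (AAAIs) that individual users
# customize with their own knowledge and values, with humans kept in
# the loop on every AI proposal. All class and method names here are
# hypothetical illustrations, not details from the pending patents.

from dataclasses import dataclass, field


@dataclass
class HumanAgent:
    """A human problem solver; also the source of an AAAI's values."""
    name: str

    def solve(self, task: str) -> str:
        return f"{self.name}'s solution to {task!r}"

    def approve(self, proposal: str) -> bool:
        # Human-in-the-loop checkpoint. A real system would make this an
        # interactive review, not an automatic yes.
        return True


@dataclass
class AAAI:
    """An Advanced Autonomous AI, customized by a single owner."""
    owner: HumanAgent
    values: list = field(default_factory=list)

    def learn_value(self, value: str) -> None:
        # Agent-level learning: the owner teaches the AAAI their values.
        self.values.append(value)

    def solve(self, task: str) -> str:
        return f"AAAI({self.owner.name}) proposal for {task!r}, guided by {self.values}"


@dataclass
class CollectiveIntelligenceNetwork:
    """Network-level problem solving: many human and AI agents cooperate."""
    humans: list = field(default_factory=list)
    aaais: list = field(default_factory=list)

    def solve(self, task: str) -> list:
        # Gather proposals from both kinds of agents at the network level.
        proposals = [h.solve(task) for h in self.humans]
        proposals += [a.solve(task) for a in self.aaais]
        # Safety property: only proposals approved by every human member
        # survive, so AI output never bypasses human oversight.
        return [p for p in proposals if all(h.approve(p) for h in self.humans)]


# Usage: start with a human network, then add customized AAAIs to it.
alice = HumanAgent("Alice")
network = CollectiveIntelligenceNetwork(humans=[alice])
agent = AAAI(owner=alice)
agent.learn_value("defer to human judgment")
network.aaais.append(agent)
print(network.solve("forecast tomorrow's stock prices"))
```

Routing every AI proposal through human approval mirrors the "humans in the loop" principle emphasized throughout the Surviving AI series; a production design would replace the toy approve() stub with genuine human review.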