Introduction
I have been a student of AI since the mid-1980s, when I co-authored research with Herbert A. Simon—the Nobel laureate and Turing Award winner who co-founded the field of Artificial Intelligence—and I have followed AI developments closely ever since. That, in itself, does not qualify me to opine with authority on AI investing.
However, I also spent 14 years as a hedge fund manager, placing billions of dollars’ worth of trades in US equity markets from 2006 to 2020. Significantly, my fund ranked in the top 10 in 2018 via the unique route of harnessing the collective intelligence of millions of retail investors and combining that intelligence using AI algorithms to beat the best minds on Wall Street.
In addition, I have owned SuperIntelligence.com since 2006, so it is fair to say that I was prescient in anticipating the most significant advances in AI, well before such figures as Nick Bostrom (who published his book Superintelligence in 2014) or Ilya Sutskever (who founded Safe Superintelligence Inc. in 2024).
I want to explain where I believe AI is headed and what this means specifically for investors, and more broadly, for all of humankind. To understand the future of AI, it is helpful to understand at least a little about AI’s past.
A Brief History of AI
The key milestones are as follows.
1956: Herbert Simon, Allen Newell, and 45 others attended the Dartmouth Conference, where the field of AI was named. Simon, Newell, and Cliff Shaw were the only ones to present a working AI program, the Logic Theorist, which actually demonstrated creative thought. This marked the beginning of the “Symbolic AI” age, in which AI’s intelligence depended on explicit rules that humans programmed into the system.
1986: Geoffrey Hinton, sometimes referred to as the “Godfather” of modern AI, along with David Rumelhart and Ronald Williams, published backpropagation, a method for training AI without rules being explicitly programmed into the system. Instead, one could simply show a “neural network” many examples, and it would learn from them. However, because computing power was minuscule compared to today’s GPUs, only small, relatively unintelligent systems were possible.
2012: Ilya Sutskever, a student of Geoffrey Hinton, together with Alex Krizhevsky and Hinton, created AlexNet, which demonstrated that the neural-network approach could do very useful things, such as image recognition, far better than expected. Jensen Huang, CEO of Nvidia, credits AlexNet, which was trained on Nvidia’s GPUs, as the “aha” moment when he realized AI was going to take off.
2022: Using the transformer architecture invented at Google, OpenAI released ChatGPT, triggering widespread adoption of Large Language Models (LLMs), the form of AI most people now know. The most advanced models required thousands of GPUs, vast amounts of data, and hundreds of millions of dollars to train. However, the resulting LLMs performed well enough that many people felt they were as smart as humans, at least in specific, limited domains. This triggered an explosion of interest in AI and AI-related companies.
Today, in 2025, AI agents—autonomous LLMs equipped with tools such as search—are beginning to revolutionize what can be done with AI. AI agents will easily add another order of magnitude or two of value. Consider all the cognitive tasks that humans perform worldwide, many of them paid at well over $50/hr. Now consider that what a human does for $50, with an hour of think time, can be accomplished by AI agents for a few pennies in a few seconds; the back-of-envelope sketch below makes the comparison concrete. You begin to understand how valuable AI agents—and this next phase of AI—will be.
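The token counts and per-token prices in this sketch are illustrative assumptions only (real prices vary by provider and model), but any plausible numbers land in the same territory: pennies for the agent versus tens of dollars for the human.

```python
# Back-of-envelope comparison: human vs. AI-agent cost for one
# "hour-of-thought" cognitive task. All figures are illustrative
# assumptions, not quotes from any provider.

HUMAN_RATE_PER_HOUR = 50.00           # assumed knowledge-worker rate, $/hr

# Assume the same task costs an agent ~20k input and ~5k output tokens.
INPUT_TOKENS, OUTPUT_TOKENS = 20_000, 5_000
PRICE_IN_PER_M = 0.50                 # assumed $ per million input tokens
PRICE_OUT_PER_M = 1.50                # assumed $ per million output tokens

agent_cost = (INPUT_TOKENS * PRICE_IN_PER_M
              + OUTPUT_TOKENS * PRICE_OUT_PER_M) / 1_000_000

print(f"Human cost: ${HUMAN_RATE_PER_HOUR:.2f}")       # $50.00
print(f"Agent cost: ${agent_cost:.4f}")                # $0.0175
print(f"Ratio:      ~{HUMAN_RATE_PER_HOUR / agent_cost:,.0f}x cheaper")
```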
DeepSeek and Nvidia
DeepSeek cratered Nvidia stock, my largest stock holding (full disclosure), to the tune of -17% when it burst into public awareness on January 27, 2025. Everyone from Donald Trump to dozens of news commentators shared their opinions. However, few had read the research paper by the DeepSeek team, and even fewer knew how to interpret it. For my part, I just bought more Nvidia on the way down and continued to buy subsequent dips.
In the case of DeepSeek, the Chinese company of the same name trained its LLM for a fraction of the hundreds of millions spent by large US tech companies. DeepSeek performs as well as the US LLMs on intelligence benchmarks, causing much handwringing over China’s progress and fears that companies will stop buying GPUs. The idea is that fewer GPUs are now needed to produce an LLM with high intelligence. But let’s examine that idea critically.
We need to recognize that even if software and training innovations, such as those used by the DeepSeek team, result in more cost-effective AI systems, those systems will still perform better, and ultimately exhibit higher levels of intelligence, when coupled with the most powerful GPUs. If AI enables doing work for pennies that humans charge $50+ to do, does it make sense for technology companies to pursue anything other than the very best possible AI systems?
No. You will want the best, because whoever has the best will get the lion’s share of this new market. Further, the cost of developing the best, even if it is very high, is still peanuts compared to the market opportunity. For the next several years, no company will settle for merely “good enough” intelligence, because the value of having the best is so high.
Chinese companies like DeepSeek lack access to Nvidia’s most advanced chips, so they must devise clever ways to use the less advanced chips they are allowed to buy. To their credit, they have been very clever. Well done.
Yet their innovations have already been shared in research papers and can easily be adopted, copied, and improved by US companies. So the advantage of their clever programming is fleeting. Any researcher who knows the field and understands the DeepSeek paper can see that most of its innovations are straightforward to copy. Once they are copied, we are back to the scenario in which whoever has the most—and most powerful—GPUs wins the intelligence race; the toy model below makes this concrete. That is why demand for chips from Nvidia—the undisputed leader in computing power for AI—will accelerate despite programming and training innovations. Understanding this, when the stock goes on sale at a steep discount, I ignore what I consider to be the noise of fear, uncertainty, and doubt that grips the market over short periods and buy more NVDA stock.
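To make the “most GPUs wins” logic concrete, here is a toy numerical sketch. It assumes, purely for illustration, that model quality improves as a power law in effective training compute, a pattern widely reported in the scaling-laws research literature; the exponent, constants, FLOP budgets, and the 10x efficiency multiplier are my assumptions, not measurements.

```python
# Toy model: once efficiency innovations are published and copied,
# every lab gets the same multiplier, and relative standing reduces
# to raw compute. Assume model "loss" (lower is better) falls as a
# power law in effective training compute:
#     loss(C) = a * (efficiency * C) ** -alpha
# All constants below are illustrative assumptions.

def loss(compute_flops: float, efficiency: float = 1.0,
         a: float = 1e3, alpha: float = 0.05) -> float:
    """Assumed power-law loss; efficiency > 1 means cleverer training."""
    return a * (efficiency * compute_flops) ** -alpha

small, big = 1e24, 5e24  # assumed FLOP budgets: clever lab vs. GPU-rich lab

# Before the innovation spreads: the clever lab's 10x efficiency
# more than offsets its 5x compute disadvantage.
print(loss(small, efficiency=10.0))  # ~56.2 (clever, compute-poor: ahead)
print(loss(big, efficiency=1.0))     # ~58.2 (plain, compute-rich: behind)

# After the paper is published, everyone has the 10x trick, and the
# compute-rich lab leads again.
print(loss(small, efficiency=10.0))  # ~56.2
print(loss(big, efficiency=10.0))    # ~51.9 (compute wins)
```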
My Simple AI Investing Thesis
My AI investing thesis is simple: The markets continue to underestimate the value of AI and the speed at which it is developing. Market participants also appear to have difficulty thinking through the implications of events like DeepSeek. While markets are generally efficient and accurate at pricing information over the long term, short-term uncertainty represents an opportunity for those with sufficient understanding and risk tolerance to act.
Over the last three years, my simple thesis has generated outstanding returns. Will it continue? I think so, for several years at least, but no one knows. Every investor must judge their own risk tolerance, liquidity needs, overall investment portfolio, individual goals, and other considerations. I am not offering investment advice; I am simply explaining what I see.
High Stakes
More important than investing is survival—my survival, your survival, and the survival of everyone we love. Understanding the likely future evolution of AI can help us not only make money but also reduce the existential threat of human extinction by advanced AI. I know “extinction” sounds melodramatic, like science fiction, but hundreds of the top AI researchers (including Geoffrey Hinton) acknowledge that the risk is very real.
While companies give lip service to regulation, AI safety, and responsible AI, almost no AI company really wants to slow or pause progress, since doing so would cut into profits and potentially allow another company or country to gain an advantage. Further, if the leading AI experts don’t know how to make AI safe, how can regulators pass effective regulation?
Intelligent analysis will show that we can not only significantly reduce the existential threats of AI but also win (at least in the short term) the AI arms race. To understand my optimism, one needs a view of the future development of AI.
The Future of AI
This is the part of the essay where I am supposed to say that no one knows the future, that I don’t have a crystal ball, that I only have hypotheses, and that the future of AI is anyone’s guess. That would be prudent on my part and would perhaps make me sound more credible. But the stakes are too high, our window for action is closing, and this is no time for indecisiveness or hedging. I have been thinking hard about these things for decades. Although I cannot provide precise dates, I am very confident about the order in which AI will develop over the coming years.
First: AI agents will become pervasive. Meta (i.e., Meta Platforms, Inc.) is already working on personalized AI agents. Personalization will cover not only product preferences but also the expertise, values, and ethics of individual humans and corporations. With personalized agents will come a dawning recognition that “intelligent entities” include not only human beings but also AI agents.
Second: Companies will realize that networks of intelligent entities can harness the collective intelligence of billions of AI agents and millions of humans. Such networks will constitute the fastest, safest, most democratic, and most cost-effective path to SuperIntelligence.
Third: SuperIntelligent networks will incorporate the expertise and ethics of millions of diverse humans. As a result, the networks will align with human values, which are primarily pro-social and positive. Network designs will include a common architecture of thought that both human and AI entities can share. Designing this common architecture with safety mechanisms can reduce the existential threat posed by SuperIntelligence. Designing the network so that values and ethics are democratically distributed across millions of humans and billions of AI entities can make it difficult for malevolent actors to usurp the SuperIntelligence, further reducing existential risk; a toy sketch of this kind of democratic aggregation follows the fifth step below.
Fourth: Country-wide networks of SuperIntelligent networks will form, continuously improving and self-extending. The role of humans will increasingly become that of providing the values and goals for SuperIntelligence networks. Human values and ideologies, as embodied in the agents on the SuperIntelligent networks, will be amplified. Whether this leads to conflict or cooperation depends primarily on humans, not the technology. If the majority of humans worldwide, including their governments, value survival and mutual prosperity above domination and exploitation, then all will be well. If our worst inclinations prevail, we will reap what we sow, with potentially catastrophic results for everyone.
Fifth: Planetary Intelligence will emerge from the country-wide SuperIntelligent networks. Assuming our better natures prevail regarding values fed into the networks, the era of Planetary Intelligence can be a golden age of prosperity, health, and freedom for all humankind and our planet.
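Since the second and third steps may sound abstract, here is a minimal toy sketch of the kind of democratic aggregation I have in mind: many independent entities, human and AI, each contribute an answer, and influence is spread so widely that no single malevolent actor can steer the result. The entities, confidences, and voting rule are hypothetical illustrations of the principle, not the designs documented on SuperIntelligence.com.

```python
# Toy sketch of democratic aggregation across a network of intelligent
# entities (humans and AI agents). Each entity returns an answer plus a
# self-declared confidence; the network's answer is the confidence-
# weighted majority. Because influence is spread across many independent
# entities, one malevolent actor cannot easily usurp the outcome.
# All entity names and numbers are hypothetical illustrations.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Entity:
    name: str          # a human expert or AI agent (hypothetical)
    answer: str        # this entity's proposed answer
    confidence: float  # self-declared confidence in [0, 1]

def network_answer(entities: list[Entity]) -> str:
    """Return the confidence-weighted majority answer."""
    votes = defaultdict(float)
    for e in entities:
        votes[e.answer] += e.confidence
    return max(votes, key=votes.get)

network = [
    Entity("human_ethicist_01", "approve", 0.9),
    Entity("agent_finance_17",  "approve", 0.6),
    Entity("agent_legal_03",    "reject",  0.8),
    Entity("human_domain_42",   "approve", 0.7),
    Entity("agent_rogue_99",    "reject",  1.0),  # one bad actor...
]
print(network_answer(network))  # ...is outvoted: "approve" (2.2 vs 1.8)
```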
Concluding Thoughts
If these five steps in the development of AI lack specificity, the best thing I can do is point you to the website SuperIntelligence.com, where 1,800 pages of patent-pending inventions are summarized, outlining in precise terms how the steps might be implemented. These invention summaries have been made freely available to the world in the hope that disseminating them will increase the chances that SuperIntelligence is developed safely. Many other possible designs and implementation paths also lead to scenarios with reduced extinction risk. The important thing is that we design SuperIntelligence to align with positive human values and reduce existential risk.
Currently, relatively few researchers are pursuing the idea that SuperIntelligence can be designed to be safer. This must change. We have limited time to act. All the wealth in the world will be of no consequence if our human species ceases to exist. Now is the time to act: first by becoming aware of the problem and potential solutions, and ultimately by investing in those companies that design the safest systems. I believe, and SuperIntelligence.com explains, that the most powerful and profitable systems can also be the safest.