What Is the Goal? Is It AI Or AHI?
Understanding the Difference Between Information and Intelligence
Over the next decade, we will witness the next great leap in technological innovation: the development of what we are now calling artificial intelligence (AI). There is already a growing sense that we have made significant progress along the path to this game-changing technology. We marvel at the capabilities of OpenAI’s ChatGPT and were recently stunned by the unveiling of China’s entry, DeepSeek. With the United States and China, the world’s two most prominent AI powers, poised for a technology race, many expect progress to move at lightning speed. That sense of progress is understandable, but it is not entirely accurate, because the AI we have developed so far is better described as accelerated information. And there’s a big difference between information and intelligence.
Before we can develop a highly advanced intelligence technology, we need to appreciate the distinction between these two terms and, more importantly, understand the workings of the most highly developed organic intelligence system: human intelligence. Given the track record of highly biased algorithms in AI’s early applications, it is clear that we have much to learn. While this new technology is producing rapid information, it is not necessarily providing us with a pathway to the higher reaches of intelligence. Should the day come when we can build an intelligence system that rises above the limitations of human biases and taps into the farther reaches of human intelligence, we are likely to discover that a truly advanced system doesn’t produce artificial intelligence. Rather, it is a portal to real intelligence and a catalyst for a radical evolution of human thinking. Once we understand the difference between information and intelligence, we will realize that the goal of AI is not the production of artificial intelligence; it’s the production of accelerated human intelligence. The goal is not AI; it’s AHI.
Accelerated Information
The AI technology boom is the current iteration of the digital age, which emerged in the mid-twentieth century. The dawn of this new age began with the computer. This marvelous invention was first embraced by scientists and businesses. For scientists, it was a powerful research tool. For businesses, it paved the way for the automation of administrative and operational activities. The primary benefit of this new technology, especially for businesses, was its ability to perform mathematical calculations rapidly and without error. As computers became essential fixtures, information systems departments became commonplace in universities and businesses. Over time, computers transformed daily life as the machines got smaller and more powerful, thanks to a phenomenon known as Moore’s law: the observation that our capacity to store and process information doubles roughly every 18 months. In other words, our capability to process information has been accelerating at an exponential pace.
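Taken at face value, an 18-month doubling compounds dramatically. A short Python sketch, illustrative only and using the 18-month figure cited above, shows the growth factor after a given number of years:

```python
# Illustrative sketch: compound growth under the 18-month doubling
# cadence popularly associated with Moore's law.

def capacity_multiplier(years: float, doubling_months: float = 18.0) -> float:
    """Growth factor after `years`, assuming one doubling every `doubling_months`."""
    return 2.0 ** (years * 12.0 / doubling_months)

for years in (3, 15, 30):
    print(f"After {years} years: {capacity_multiplier(years):,.0f}x")
# After 3 years: 4x
# After 15 years: 1,024x
# After 30 years: 1,048,576x
```

Over 30 years, the assumed 18-month cadence multiplies capacity roughly a million-fold, which is what "accelerating at an exponential pace" means in concrete terms.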
The lexicon of the first phase of the digital age was built primarily around math. Software engineers constructed novel programming languages designed to direct the performance of mathematical calculations. Computers could process large volumes of error-free information at incredible speeds. We now take this capability for granted. For example, when was the last time you found a math error on your credit card statement? If there’s an error, it’s likely a wrong charge that a human entered into the information system; the computation of the charges entered will be correct.
Today, as we explore ways to build machines and software applications to catapult intelligence, the context of information systems has shifted from mathematical models to large language models. Instead of performing numerical calculations, AI systems rapidly scan large volumes of linguistic information, discerning patterns based on preconceived algorithms. However, unlike math, which is reliable and rational, language is normative and subject to human biases. While the information AI produces is fast, it is not error-free and is often inaccurate. Thus, while AI systems continue to expand our ability to produce accelerated information, this new technology has a long way to go before it can reliably produce highly intelligent information. If we are going to bridge that gap, we need to understand why humans, including the designers of AI algorithms, are often prone to making apparently senseless errors.
Dual Thinking Modes
The human brain is a paradox. While it is capable of producing highly developed analytical and creative intelligence, it is also susceptible to making inane blunders. Why is this so? The answer, according to the psychologists Daniel Kahneman and Amos Tversky, is that people are nowhere near as rational as they think; we are prone to unconscious biases that influence decision making to a far greater extent than we realize.
Kahneman and Tversky discovered that people engage in two different thinking modes in their day-to-day lives. Both modes are necessary for navigating reality, but how they operate is worlds apart. They refer to these ways of thinking by the nondescript names System 1 and System 2. System 1 is fast thinking, which operates automatically with little or no effort. It is highly proficient at identifying causal connections between events, sometimes even when there is no empirical basis for the connection. It’s the capacity to make split-second decisions with limited information, as often happens when we are driving. It works most of the time, but not well enough to eliminate all accidents. System 2, on the other hand, is slow thinking and involves deliberate attention to understanding details and the complex web of relationships among various components. Because linear thinking comes more naturally to most of us, discerning the dynamics of complex realities takes time, as happens in sophisticated scientific research endeavors. Whereas System 1 is inherently intuitive, deterministic, and undoubting, System 2 is rational, probabilistic, and highly aware of uncertainty and doubt. Needless to say, these two ways of thinking are contextually very different.
While fast thinking is more useful for making immediate choices, it is also more likely to result in judgment errors; paradoxically, we tend to feel more confident when engaged in System 1 than when we employ System 2. That’s because the mental narratives that are a natural byproduct of System 1 give rise to biases that often cause us to make confident decisions that are completely wrong. Kahneman and Tversky’s work provides clear evidence that most decisions are shaped by unconscious biases, convincingly undermining the longstanding foundational assumption of economic theory that humans are rational decision makers. We are clearly not, and for that contribution, Kahneman was awarded the Nobel Memorial Prize in Economic Sciences in 2002.
Perhaps the most interesting aspect of Kahneman and Tversky’s research is the dichotomy between how we perceive ourselves as thinkers and how we actually approach decision making. While we may view ourselves as predominantly rational System 2 thinkers, the reality is that most human judgments and decisions, even those made by experts, are based on the more intuitive System 1, for the simple reason that we usually don’t have the time to do System 2 thinking. However, there is a price we pay for our overdependence on System 1 thinking. And that price is blindness. In their research, Kahneman and Tversky discovered two important facts about how our brains work: “we can be blind to the obvious and we are also blind to our blindness.” While the first fact is troublesome enough, the second is far more disturbing. As we build AI, if we ignore our tendency to be blind to our blindness and continue to overconfidently construct algorithms that amplify unconscious biases, the resulting intelligence will not only be artificial, it will likely be out of touch with reality. Once again, artificial intelligence is not the goal of the next iteration of data processing.
The Great Promise of AHI
The development of the human brain with its dual thinking modes is arguably the single greatest evolutionary leap, because human thinking has enabled the emergence of a profound level of intelligence. The human brain, despite its flaws and limitations, is the foundation for the most advanced intelligent system known, at least up to this point. Both System 1 and System 2 thinking are distinctly human capabilities that have provided humanity with an immense evolutionary advantage. We are capable of developing complex intellectual structures such as mathematics, physics, and music through applications of System 2, and, thanks to System 1, humans have the unique capability to make judgments and decisions quickly from limited information.
However, neither of these modes is without its flaws. As mentioned above, System 1 is hampered by unconscious biases that result in senseless errors. System 2, for its part, is the foundation for a profound level of intelligence, but it is an inherently slow path to that intelligence. And it is this slowness that has allowed System 1, with its unconscious biases, to serve as the default mode for human decision making. We need look no further than the mismanagement of the Covid pandemic to see how the unconscious biases of supposed experts resulted in policy mandates that compelled citizens to take a vaccine that doesn’t work and carries a risk of disability and death greater than that of all vaccines administered over the past three decades combined. That’s how senseless errors happen.
The great promise of the next technology revolution is not about producing accelerated information that blindly, or even willingly, amplifies the biases of the algorithm designers. The real opportunity is in creating a powerful technology that fosters an incredible evolutionary leap in human intelligence by giving humans the capacity to do System 2 thinking at System 1 speeds. This human-machine symbiosis could make it possible for System 2 to become the default thinking mode. If that were to happen, accelerated human intelligence could dramatically improve decision making by overcoming human biases and greatly diminishing, if not eliminating, senseless errors.