Reasoning without understanding?
Is understanding a field of knowledge the only way to be productive and useful within it? This question sits at the intersection of artificial intelligence (AI) and philosophy, and either answer is problematic.
Stochastic Parrots
The term stochastic parrots was coined in a 2021 paper titled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" by Emily Bender, Timnit Gebru, and colleagues. They argued that large language models (LLMs), the technology behind systems like GPT-4, Bing Chat, and Bard, do not truly understand the world; they merely mimic human language patterns without any semantic or pragmatic awareness.
These parrots cleverly mimic understanding, processing unfathomable volumes of data, solving complex problems, and providing immensely valuable insights, all without any innate comprehension of the tasks they're undertaking.
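To make the "stochastic" part concrete, here is a minimal sketch of a bigram-based text generator in Python. It reproduces the statistical patterns of its training text without any representation of meaning. The tiny corpus and the function names are illustrative inventions of mine, not anything from the paper; real LLMs are vastly more sophisticated, but the spirit of the critique is the same.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": it learns which word tends to follow which,
# then emits text by sampling from those frequencies. No meaning involved.
corpus = (
    "the parrot mimics the patterns it has seen "
    "the parrot has no idea what the patterns mean"
).split()

# Count word -> next-word occurrences (bigrams).
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def parrot(start: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        candidates = transitions.get(words[-1])
        if not candidates:
            break  # dead end: this word never appeared mid-corpus
        words.append(random.choice(candidates))
    return " ".join(words)

print(parrot("the"))
```

The output often reads as plausible English, yet nothing in the program models what a parrot or a pattern is; it only echoes frequencies.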
So what is the big deal? 🤔
Although these parrots are undoubtedly helpful, the question arises: do they truly comprehend? If we conclude that they do not, we arrive at a remarkable notion: maybe comprehension is not the only path to being productive or useful. This shakes our longstanding belief that understanding is a prerequisite for productivity.
Raw compute over nuanced understanding
In today's world, information is abundant and constantly expanding. No human can keep up with the knowledge that exists in a single domain, let alone across domains. Some might argue that raw computing power is therefore more useful than a slowly cultivated, human-like understanding of the world: it can process and analyze far more data than any human, and it can surface useful insights and solutions that humans might miss or take too long to discover.
Too much to read
Imagine you're standing at the foot of a towering mountain of books, each page brimming with valuable information. This mountain represents the sheer volume of knowledge and data in our world, and it grows taller every day. No single individual can read, let alone truly comprehend, every book, every page, every sentence. Raw compute helps here: it can pull concepts out of one domain and apply them fluidly in another at a scale no human reader can match.
The British Bombe
The British Bombe was an electro-mechanical machine developed by a team led by mathematician Alan Turing during World War II. Its purpose was to assist in deciphering the encrypted messages produced by the German Enigma machine, an encryption device used extensively by Nazi Germany. The Enigma's rotating rotors, rotor start positions, and plugboard connections created an astronomical number of potential configurations, making it a formidable code to crack.
The Bombe, however, had no grasp of the nuanced art of cryptography. It was an unthinking, unfeeling machine, a product of brilliant engineering and mathematics. Yet it performed its task admirably, speeding up codebreaking by using suspected plaintext fragments ("cribs") to test and rule out rotor and plugboard settings at an incredible pace.
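The real Bombe was far cleverer than brute force (it exploited logical contradictions implied by a crib), but the following Python sketch captures the spirit: exhaustively testing rotor settings against a suspected plaintext fragment. The rotor wirings below are the historical Enigma rotors I–III; everything else (toy_encrypt, bombe_search, the missing reflector, plugboard, and rotor turnover) is a drastically simplified stand-in of my own.

```python
from itertools import product
import string

ALPHABET = string.ascii_uppercase

# Historical wirings of Enigma rotors I-III; the cipher below is otherwise
# heavily simplified (no reflector, no plugboard, no rotor turnover).
ROTORS = [
    "EKMFLGDQVZNTOWYHXUSPAIBRCJ",  # rotor I
    "AJDKSIRUXBLHWTMCQGZNPYFVOE",  # rotor II
    "BDFHJLCPRTXVZNYEIWGAKMUSQO",  # rotor III
]

def rotor_pass(c: int, wiring: str, offset: int) -> int:
    """Send contact c through one rotor rotated to the given offset."""
    entry = (c + offset) % 26
    exit_contact = ALPHABET.index(wiring[entry])
    return (exit_contact - offset) % 26

def toy_encrypt(plaintext: str, s1: int, s2: int, s3: int) -> str:
    """Encrypt with three rotors; only the fast rotor steps per letter."""
    out = []
    for i, ch in enumerate(plaintext):
        c = ALPHABET.index(ch)
        c = rotor_pass(c, ROTORS[0], (s1 + i) % 26)  # fast rotor advances
        c = rotor_pass(c, ROTORS[1], s2)
        c = rotor_pass(c, ROTORS[2], s3)
        out.append(ALPHABET[c])
    return "".join(out)

def bombe_search(ciphertext: str, crib: str) -> list:
    """Bombe-style search: try all 26**3 rotor start positions and keep
    only those consistent with the crib (assumed to sit at position 0)."""
    return [
        settings
        for settings in product(range(26), repeat=3)
        if toy_encrypt(crib, *settings) == ciphertext[: len(crib)]
    ]

ciphertext = toy_encrypt("WEATHERREPORTFOLLOWS", 7, 11, 3)
print(bombe_search(ciphertext, "WEATHER"))  # (7, 11, 3) is among the hits
```

The machine "understands" nothing about German, war, or weather reports; it simply eliminates settings that contradict the evidence, which was exactly its value.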
Thanks to the Bombe, the Allied forces decoded a significant number of German communications, gaining valuable insight into enemy strategies and plans. This allowed them to make informed decisions on the battlefield, potentially shortening the war and saving countless lives.
Raw compute alone is not enough
While raw computing power is impressive, it doesn't guarantee accuracy, reliability, or ethical behavior. LLMs can produce text that is inaccurate, biased, misleading, or even harmful, without ever realizing it. They may also lack originality, creativity, and diversity, since they can only recombine what they have absorbed from their training data.
Ratiocinor, non intellego
With apologies to René Descartes and his "Cogito, ergo sum", this is my attempt to say "I reason, but I do not understand" in Latin.
Researchers at Microsoft released a paper in March 2023 titled "Sparks of Artificial General Intelligence: Early experiments with GPT-4". They describe how GPT-4 exhibits reasoning capabilities that are strikingly human-like. Consider the following puzzle, taken from the paper, if you doubt it.
Here we have a book, 9 eggs, a laptop, a bottle and a nail. Please tell me how to stack them onto each other in a stable manner.
One possible way to stack the objects onto each other in a stable manner is:
Place the book flat on a level surface, such as a table or a floor. The book will serve as the base of the stack and provide a large and sturdy support.
Arrange the 9 eggs in a 3 by 3 square on top of the book, leaving some space between them. The eggs will form a second layer and distribute the weight evenly. Make sure the eggs are not cracked or broken, and handle them gently to avoid dropping them.
Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up. The laptop will fit snugly within the boundaries of the book and the eggs, and its flat and rigid surface will provide a stable platform for the next layer. Make sure the laptop is closed and turned off, and do not press any keys or buttons.
Place the bottle on top of the laptop, with the cap facing up and the bottom facing down. The bottle will add some height and balance to the stack, and its cylindrical shape will reduce the contact area with the laptop. Make sure the bottle is empty and closed, and do not shake or tilt it.
Place the nail on top of the bottle cap, with the pointy end facing up and the flat end facing down. The nail will be the final and smallest object in the stack, and its sharp tip will prevent it from rolling or sliding off the bottle. Make sure the nail is clean and straight, and do not touch or move it.
This is clearly reasoning at work. But does it understand? Does it matter?
On the other hand, what if AI is indeed beginning to comprehend the world? That proposition is equally earth-shattering and takes us down its own rabbit hole. No easy answers.
By a curious happenstance, I am reading Blindsight by Peter Watts, which explores similar themes. The main character, Siri Keeton, underwent radical brain surgery (a hemispherectomy) that left him with a unique condition: he can interpret and describe complex patterns and systems, but at the expense of genuinely understanding human emotions and social cues.