The Impossible Language
Why LLMs Are Neither Natural Nor Sustainable
Søren Kierkegaard, the existentialist philosopher, once said that there are two ways to be fooled: “One is to believe what isn’t true. The other is to refuse to believe what is true.” Humans have straddled the two throughout history, across the many conceptualized phenomena that have appeared in the world. Many of us even end up basking in the bliss of ignorance, setting intelligence aside because it is easier to be lazy than to think critically, always accepting things that are irrational, illogical, and far from the objective truth.
Einstein once said, “The theorist’s method involves his using as his foundation general principles from which he can deduce conclusions, [and] his work thus falls into two parts. He must first discover his principles and then draw the conclusions which follow from them.”
Einstein is saying that we must not put the cart before the horse, putting out nice theories and then setting about to prove them. We must always see and accept reality, not talk around it or bend it toward our nice stories.
So we shouldn’t avoid reality. Principles are always the driving force behind new discoveries, whether we know it or not; avoiding them is foolish. The important question to ask first is how one comes up with new principles.
Einstein was very clear: “…there is no method capable of being learned and systematically applied so that it leads to a new [principle]. The scientist [must] worm these principles out of nature by perceiving, in comprehensive complexes of empirical facts, certain general features which permit of precise formulation.”
I’m no Einstein, that’s for sure, but I would guess he would deliver the smack-down to those computer scientists today who are trying to invent an alternative reality and circumvent nature: those crafting nice stories to fool us, as Kierkegaard warned, about how narrow artificial intelligence can reach and surpass human general intelligence.
Facts or statements about things can’t be isolated to serve one’s nice theory; yet theorists are often highly selective about which “facts” they use to tell their story. When it comes to certain claims in AI, they are only implicit and constructed theoretically. We must recognize those scientists who fall in love with their theories and see beauty where none exists.
Einstein said it best in a 1933 lecture at Oxford: “[The discovery of principles] are free inventions of the human intellect, which cannot be justified either by the nature of that intellect or in any other fashion a priori.”
I believe Einstein was saying that, to get around scientific problems, one often tries to defy the rules of nature. Consider that this is exactly what is being done in certain corners of AI today.
The father of modern linguistics, MIT Professor Noam Chomsky, is even more direct. Chomsky says that “we don’t know the world and don’t understand nature and its restraints. Too often we accept things that can’t ever logically be true.” And Yuval Noah Harari, the author of best-sellers like Sapiens and 21 Lessons for the 21st Century, says that humans are “susceptible to nice stories” as opposed to critical thinking.
So what of those professing and hyping artificial intelligence (AI) and its ‘natural language models’ that are supposed to be able to communicate like humans? Chomsky calls this pure science fiction. He goes on to say that language is a “creative infinite act” which can be constructed in real time with no restraints, and that the genesis of language is based on human experience, through our existence as organic matter. No mechanical or artificial system can replicate human biology and experience in nature. End of story!
In an interview with Machine Learning Street Talk, Chomsky was asked a big question about ChatGPT and generative AI: it is “receiving massive investments, and continues to be hyped beyond belief despite very strong theoretical arguments for the futility of learning language from data alone. Further, the computational complexity of language is on a scale that would eclipse any earthly data. And that human cognition extrapolates from common knowledge to understand text…and how we can ascertain background knowledge which is never actually communicated in the text.”
The interviewer adds that François Chollet calls Large Language Models (LLMs) make-believe AI, and thus a road to nowhere. Gary Marcus even called them a parlour trick. (Recently, too, Gary Marcus wrote that information pollution has reached new heights: AI is making shit up, and that made-up stuff is trending on X.)
Chomsky answers that “LLMs have not achieved anything in this domain…achieved zero. It’s a theory of anything goes…that includes all the laws of nature, the ones we know and the ones we do not know yet. With a supercomputer, it can look at 45 terabytes of data and find some superficial regularities, and it can imitate. ChatGPT has done nothing!”
He continues, “It’s a made-up language that violates every principle of language, so there is no point in even looking at its deficiencies because it does nothing! And all this is doing is wasting a lot of energy in California.”
He does say that ChatGPT, and GenAI in general, has some good applications and engineering, and can be helpful as a tool.
For example, Chomsky says, “I like voice transcription because I like to use it. I like bulldozers too…it’s a lot easier than cleaning the snow by hand. But it’s not a contribution to science. So if you want to use all the energy in California to improve live transcription…then okay.”
“ChatGPT-4 is coming along and is supposed to have a trillion parameters, but it will be exactly the same; it will use even more energy and will still achieve nothing, for the same reasons. You can fool New York Times reporters ecstatic about ChatGPT-4, but you shouldn’t be able to fool scientists.” Or any critically thinking person, I would add.
So when it comes to all the hype around LLMs, people are generally lazy and rarely skeptical when being fed good stories. Critical thinking escapes them, and they choose to be wilfully ignorant. As Friedrich Nietzsche put it: “Sometimes people don’t want to hear the truth because they don’t want their illusions destroyed.”
Marcus Garvey once said, “Intelligence rules the world and ignorance carries the burden, so we must get as far away from ignorance as possible.” This is the primary focus of this article.
Firstly, let’s set the foundation for our thinking and analysis. We can use a couple of the ancients for that: Plato and Aristotle. Plato-like thinking is abstract, a fascination with ideal things. Platonic thinking will spend time trying to analyze and perfect the imperfect circle, for example. Aristotle-like thinking, on the other hand, understands that circles have good utility in the universe. The wheel works just fine, and we should not waste time trying to explain or perfect what can’t ever be perfected. Aristotle would tell Plato that the imperfect wheel continues to serve humanity very well, so let’s not waste our intellect on the superfluous.
Aristotle’s way of thinking says that the pursuit of knowledge shouldn’t be about the pursuit of abstract perfection, or the very pursuit itself. Instead, our knowledge acquisition is best utilized when it’s harnessed for our ingenuity, in creating the tools that can serve humanity best.
LLMs are the pursuit of the abstract. Plato-prescribed. Which, as Chomsky says, does nothing for humanity.
Natural limitations exist in our metaphysical time and space; they are the very nature of our existence in the universe, and their laws apply to everyone. Physics gives us good ideas and theories about the natural world, but in the end, it’s the application of mathematics in physics that proves what is true and what is not. Math explains the universe! Make that the underpinning of your thinking processes. If you ground your thinking in the basic principles of mathematics and logic, you can’t ever be fooled by those who profess LLMs to be natural language.
Large Language Models (LLMs) are supposed to interpret ‘natural’ human language through special types of deep learning models, interpreting natural language communications and providing accurate, human-like outputs. These models are said to be trained naturally (but that is only an illusion) on billions of pieces of data, designed to draw relationships between words in different contexts. But an LLM is only a predictor of the next word. Processing vast amounts of information is what allows these models to regurgitate the combination of words most likely to appear next to each other.
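To make that “predictor of the next word” claim concrete, here is a minimal sketch in Python. It uses a toy bigram count rather than a deep network (a deliberate simplification; the corpus text and the predict_next helper are hypothetical, for illustration only), but the final step is the same idea: emit whichever word the statistics say is most likely to come next.

```python
# Minimal sketch: next-word prediction from co-occurrence counts (a bigram model).
# Real LLMs use deep neural networks trained on vast corpora, but the output
# step is analogous: pick the statistically likeliest continuation.
from collections import Counter, defaultdict

corpus = "the wheel serves humanity well and the wheel still turns".split()

# Count how often each word follows each other word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently observed after `word`."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> "wheel", the likeliest continuation in this toy corpus
```

Notice that no understanding appears anywhere in the sketch: the program has simply memorized which words tend to follow which. That is the article’s point writ small.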
It’s not an autonomous, intelligent system, able to think and decide like we do. Instead, as Emily Bender and colleagues emphasize, generative AI is a mimic of human action, parroting back our words and images. It doesn’t think, it guesses — and often quite badly in what is termed AI hallucination.
— Kean Birch is director of the Institute for Technoscience & Society at York University.
Neural networks are a theoretical method in artificial intelligence that attempts to teach computers to process data in a way inspired by the human brain. This is part of deep learning theory, using interconnected nodes, or neurons, in a layered structure said to resemble the human brain. However, as Chomsky told us earlier, machines can’t replicate human organic matter, so neural networks are not a reality.
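To picture what “interconnected nodes in a layered structure” actually amounts to, here is a toy two-layer network sketched with NumPy. The layer sizes and random weights are arbitrary, illustrative assumptions; the point is that each “neuron” is just a weighted sum passed through a simple function, a loose mathematical analogy to biology rather than a replica of it.

```python
# Toy feed-forward network: layers of "neurons" are just matrix multiplications
# followed by a simple nonlinearity. Sizes and weights here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))  # weights connecting 4 inputs to 8 hidden nodes
W2 = rng.normal(size=(8, 2))  # weights connecting 8 hidden nodes to 2 outputs

def forward(x: np.ndarray) -> np.ndarray:
    """One forward pass through both layers."""
    hidden = np.maximum(0, x @ W1)  # ReLU: a hidden node "fires" only if its weighted sum is positive
    return hidden @ W2              # linear output layer

print(forward(np.ones(4)))  # two output numbers from the tiny network
```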
(Note: if you take time to read the 18th-century philosopher Immanuel Kant, it may enlighten you on how intelligence and reason happen. Understanding limits is a central tenet of understanding intelligence and reasoning, according to Kant.)
We saw the so-called Godfather of AI, Geoffrey Hinton, flame out with his neural network theory and subsequently leave Google. Don’t believe the false narrative he put out about leaving because AI is going too far and poses a risk to humanity. The reality is that he left because his grand neural network theory doesn’t work: organic biological matter and a mechanical construct can’t be compared. Hinton got caught up in Plato-like fascination, trying to perfect the circle.
Again, a read of Kant’s Critique of Pure Reason shows the impossibility of one sort of metaphysics being the foundation for another. Intelligence includes experience-based knowledge and the five senses, which is impossible for a machine.
Humans do not perceive things as they are; rather, we are constantly inventing our world and correcting our mistakes by the microsecond. We experience the world and the self with, through, and because of our bodies. Accordingly, intelligence is not only in the mind; it is also in our physical body. Reactions are based on our senses: what we see, feel, and hear. This is common sense, contrary to the neural network concept that frames intelligence as something existing only in the mind. Communication and understanding are therefore connected to both our subconscious and conscious existence, and both are critical to accurately understanding what is being communicated.
Human understanding is holistic and commonsensical; about ‘machine comprehension’ we know nothing. Therefore, deep learning and LLMs are built on a fallacy!
“Everything must be taken into account. If the fact will not fit the theory — let the theory go.” — Agatha Christie
The LLM construct is an impossible language, says Noam Chomsky. It is synthetic and made up, not natural, and it violates the rules of nature, grammar, culture, experience, and symbolism. Chomsky points out that past experiences in time and space, emotion and state of mind, environment, and basic human interaction and relationships all contribute to how language is communicated and interpreted. All of these are linked back to general natural intelligence. Chomsky exposes the myths and hype around AI: what it professes it can do versus what it is.
He says that calling this impossible language natural is intellectually dishonest and dangerous for humanity.
Chomsky goes on to point out that LLMs are essentially a form of pure mathematics, which can be elegantly constructed. It is the only way ChatGPT can perform: by creating and adhering to its own rules. In other words, pure mathematics.
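A small worked example of that “pure mathematics”: at its core, an LLM turns arbitrary numeric scores over candidate words into a probability distribution via the softmax function. The scores below are made-up placeholders; the math is internally consistent by construction, which is precisely the point about a system that adheres only to its own rules.

```python
# Softmax: the internally consistent mathematics at an LLM's output layer.
# It converts any list of scores into probabilities that sum to 1.
import numpy as np

scores = np.array([2.0, 1.0, 0.1])             # hypothetical scores for 3 candidate words
probs = np.exp(scores) / np.exp(scores).sum()  # softmax
print(probs, probs.sum())                      # about [0.66 0.24 0.10], summing to 1.0
```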
Where thoughts come from and how they’re generated and formed are critical to how they are translated into words, and subsequently interpreted. This is a uniquely human process, more sophisticated and complex than any programmed machine language can achieve.
Therefore, regardless of all the shaky pontification out there, you can’t get around nature and science: LLMs are not natural at all and, again, operate as an impossible language! Nevertheless, there is still enough in language models that can be taken and properly applied for effective value creation, for organizations, individuals, and humanity.
So the question is not whether AI is becoming more intelligent, as Hinton thinks it is, but how LLMs are being developed. Are they being developed in the best interest of humanity, or in the best interests of Big Tech corporations and their billionaires? Are these massive investments in LLMs creating a stranglehold on the world via their impossible language? Or can we develop the technology in a more focused, Aristotle-like way?
That would mean more useful tools that enhance human productivity and serve humanity best at the same time. There is no reason why it can’t happen. It just depends on who has more influence over it.
Two streams of intelligence are ongoing: first, authentic human or natural general intelligence; second, narrow (LLM) machine intelligence. This is what it comes down to in the end. However, it doesn’t have to be that way. There is a more productive, useful, and focused middle ground of language models coming, one in the best interest of humanity too.