A Classic Con
AGI is NOT a solved problem
I’ve recently been observing intelligent people in the artificial intelligence space, including computer scientists, cognitive scientists, and psychologists, tripping over the bogus claims of OpenAI’s CEO and snake oil salesman Sam Altman that AGI is a solved problem. But we must chill, look beyond the hype, and see the evolving classic con.
For those who may not be familiar with artificial general intelligence (AGI), it is a field of theoretical AI research that attempts to create software with human-like intelligence and the ability to self-teach; this hypothetical machine intelligence would possess and surpass human intelligence. But this is a theory, a constructed delusion, a crafted narrative Altman uses to continue to raise money, chasing something he knows isn’t there. Never-ending capital is necessary to sustain a business model that now even its largest shareholder, Microsoft, says won’t be profitable for at least 15 years, if at all.
Altman: “We are now confident we know how to build AGI as we have traditionally understood it.” As usual, his words are without substantiation and say nothing. They serve to deflect and delay. Slick, and implying that AGI is happening and that OpenAI has everything under control. A usual con tactic to distract. If you’ve been following the industry, you’ll understand that nothing has fundamentally changed with ChatGPT; the usual hype continues.
Altman and his cast of Tech Bro characters should join the WWE hype team! Stop telling us that you are “close,” that “AGI is a solved problem,” and give it to us already. I, for one, won’t be holding my breath, and it’s incredible how reasonably intelligent people continue to fall for this classic con.
OpenAI/ChatGPT/LLMs have hit the wall, and this was always in the cards, because building commonsense into machines was always science fiction. But seriously, it goes against the laws of nature, which always prevail. The logic involved in LLMs is illogical and disingenuous because it attempts to circumvent, avoid, and fool people with the appearance of intelligence. What machines have is enormous memory, which they deploy spectacularly, and that is their true value.
Authentic human intelligence is basically commonsense, and LLMs fail when it comes to commonsense; they can’t close the gap between the real world and the software world. As Noam Chomsky puts it, the LLM is the Impossible Language! It understands only itself and its own internal systems, machine language, but can’t translate to the external world. It doesn’t know what it’s talking about and, therefore, doesn’t know meaning.
So Altman and AGI are a continued con, distracting us. In 2024 he said, “I don’t care if we burn $50 billion we are building AGI and it’s going to be worth it.” But it’s easy to say that when it’s other people’s money you’re burning.
Writer Sharon Goldman covered this in a VentureBeat article titled “Sam Altman wants up to $7 trillion for AI chips. The natural resources required would be ‘mind-boggling’.” The article came after the Wall Street Journal reported that he wants to raise $7 trillion for a “wildly ambitious” tech project to boost the world’s chip capacity. But at what cost? Fortune reported in September 2023 that AI tools fueled a 34% spike in Microsoft’s water consumption; Meta’s Llama 2 model reportedly guzzled twice as much water as Llama 1; and a 2023 study found that training OpenAI’s GPT-3 consumed 700,000 litres of water. OpenAI has never disclosed its numbers, but it will surely cost the earth. There is already pressure from Generative AI industry players to keep coal plants running. I fail to see how going backward in the name of ‘technological advancement’ advances humanity.
The reality is that Generative AI has been mostly promises rather than measurable delivery. And no argument can be made that we should tear up the planet in a dystopian fashion and take an unprecedented and enormous risk on something so unproven.
Many of the large investors in OpenAI, including Microsoft, are way too far down the road to turn back now (though Microsoft does have the enormous capacity to write down the loss). Altman has effectively built a Multi-Level Marketing (MLM) type scheme that has been elevated to the status of too big to fail. Everyone has gotten caught up in the hype, and no one wants to be the first to pull the chute. As usual, the room is filled with cowards!
AGI is a constructed narrative to sell you more stuff and dreams of riches. It has created an industry of groupthink minions developing Everything-AI and, of course, feeds OpenAI’s/ChatGPT’s business model; and just like with MLM, only the guys at the top get rich.
Look at his comment below and notice the classic kicking of the can down the road. Then, the shift of the narrative from AGI to “Superintelligence,” the new shiny object intended to blind us from the truth.
It’s also no coincidence that Altman is using this new language: “Superintelligence” because in 2024, former OpenAI co-founder Ilya Sutskever, with other former OpenAI Tech Bros, launched Safe Superintelligence Inc., raising $1 billion essentially overnight. Altman now has competition raising money so he’s cleverly hedging through new narratives — the same house of mirrors.
But there is nothing super in Superintelligence, and there is certainly nothing safe about it either, given the lack of AI safety that exists in the industry right now.
Intelligent scientists continue to point out that AGI is not real, but truth and facts don’t matter in this upside-down world. What matters most is power! Who has it, and who has the money to speak the loudest, sucking up all the investment oxygen in the room.
One MIT professor said that AGI is not close to happening and posted several links to show how “far we are from robust intelligence.” Another pointed to the problem of distribution shift, well documented in Apple’s 2024 reasoning paper. AI can’t reason: LLMs just pick the next word, generalizing well to similar items but struggling with reliability in unfamiliar regimes; even minor variations sometimes break them. A recent paper on Putnam math problems showed a similar failure under distribution shift, reporting a roughly 30% performance drop by o1 on math problems with minor variations, e.g., to variable names. Same old story, a different set of problems.
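The Putnam-style robustness test is easy to picture: the perturbation changes only surface features, such as variable names, while the underlying problem (and its answer) stays exactly the same. A minimal sketch of such a perturbation in Python (illustrative only, with made-up problem text and no actual model call):

```python
import re

def perturb_variables(problem: str, mapping: dict) -> str:
    """Rename variables in a math problem while preserving its logic.

    All renames are applied simultaneously via one regex, so swapping
    names (e.g. n -> k, f -> g) cannot chain into each other.
    """
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, mapping)) + r")\b")
    return pattern.sub(lambda m: mapping[m.group(1)], problem)

original = "Let n be a positive integer and let f(n) = n^2 + n. Find f(3)."
variant = perturb_variables(original, {"n": "k", "f": "g"})
print(variant)
# -> "Let k be a positive integer and let g(k) = k^2 + k. Find g(3)."
```

A robust reasoner should give the same answer to both versions; the roughly 30% drop reported above means models often do not.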
Applying the commonsense of applied intelligence | ai
So the evidence is there, but the majority of the population is too lazy to read, willful ignorance is easier, and they seemingly enjoy being stupefied and would rather blend in with the just-as-ignorant herd.
AI/LLMs have no commonsense. Commonsense reasoning is still dodgy, as it has long been, as Ernie Davis and Gary Marcus recently reviewed; even on certain benchmarks, results may be fragile or difficult to replicate. Commonsense is the hallmark of intelligence; without it, there can be no intelligence. And commonsense tells us that a machine can’t ever be human. There is that little variable called biology to get around.
Additionally, Immanuel Kant explained intelligence centuries ago, but most people don’t read history, so they are easily fooled. Perhaps neuroscientist Anil Seth explained authentic intelligence best: intelligence requires both mind and body, and we are intelligent because of our bodies. Authentic intelligence, therefore, has a major emotional component. But these low-EQ Tech Bro software engineers don’t have it in them and don’t want to recognize anything other than their one-dimensional approach.
This happens when you attack problem-solving singularly, mathematically, without the guidance of philosophy; when you take a mechanical approach and entrench yourself in groupthink to protect yourself from criticism and from approaches you may not be familiar with and are not willing to learn. When you then throw greed and big money into the equation, to the exclusion of others, a lot of nonsense happens, and the authentic pursuit of human innovation and creativity suffers.
As Bertrand Russell wrote, looking for quick and easy approaches to difficult problems “has many advantages; they are the same as the advantages of theft over honest toil.”
What we are missing, therefore, is the understanding that math is not enough. Philosophy is required to make sense of the math and to bring purpose and reason holistically. Mathematics and philosophy, therefore, are symbiotic; they go hand-in-hand, and math is not optimally useful without first establishing principles. Principles, based on a given philosophy, help to ascertain the thinking behind concepts and ideas.
This is why LLMs hallucinate and why hallucination will not go away under an Everything-LLM mindset. We require more focused language models, for practicality and to grasp the meaning of linguistic communications in both text and speech.
The meaning of words and expressions is determined by the context in which they are used; context is everything, i.e., commonsense. Focused language models look for inferences based on the focused templates being applied; they play an inferential role.
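One way to picture template-driven inference (a hypothetical sketch of the general idea, not 6ai’s actual implementation; all template names and rules here are invented for illustration):

```python
import re

# Instead of predicting the next token, match an utterance against a
# focused template and draw an explicit, checkable inference from it.
TEMPLATES = [
    # (pattern, inference licensed when the pattern matches)
    (re.compile(r"(\w+) is the capital of (\w+)"),
     lambda m: f"{m.group(1)} is a city in {m.group(2)}"),
    (re.compile(r"(\w+) is older than (\w+)"),
     lambda m: f"{m.group(2)} is younger than {m.group(1)}"),
]

def infer(statement: str) -> list:
    """Return the explicit inferences licensed by the matched templates."""
    return [rule(m) for pattern, rule in TEMPLATES
            if (m := pattern.search(statement))]

print(infer("Paris is the capital of France"))
# -> ['Paris is a city in France']
```

The point of the sketch is that every conclusion is traceable to a specific template, so the inference can be audited rather than taken on statistical faith.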
Humans create understanding by assigning normative meanings to words and concepts; these commitments to meaning can cover a wide range of claims, from personal preferences to assertions about facts. Facts are, therefore, statements about what we perceive a priori or through empirical observation, but these facts don’t exist on their own. They are always contextualized to convey meaning.
Humans have established these normative dimensions for reasoning and meaning, and no language model would be optimally useful to humanity if it doesn’t strive to capture authentic commonsensical meaning. LLMs lack this basic applied intelligence functionality, and they apply brute force instead, with abstraction to coerce and fool people through mathematical elegance. LLMs are not intelligent.
“Pragmatism offers a conception of reason that is practical rather than intellectual, expressed in intelligent doings rather than abstract sayings. Flexibility and adaptability are its hallmarks, rather than mastery of unchanging universal principles. It is the reason of Odysseus rather than of Plato.”
― Robert B. Brandom
Accordingly, applied mathematics must find logical coherence and become empirically warrantable to be useful in conveying meaning in the real world. We just can’t violate norms and say things work when they don’t, that’s called lying. We too often fall in love with nice stories and theories instead of thinking for ourselves through the applied intelligence process of rational human logic.
We can’t achieve great things if our creativity and innovation are reduced to if this, then that. Where logic and truth “collapse like a cheap lawn chair under the weight of beauty, absurdity, and mystery — beyond the zeros and ones lies the essence of what it means to be human, a domain immune to quantification,” says Murat Durmus.
It is a continuing mistake of highly intelligent people to believe that the universe can be explained by mathematics alone; that would be the end of intelligence! We need philosophy! Big time! Because, according to Einstein, principles are always the driving force behind intelligence and new discoveries. So we must move away from abstraction, the noise and nonsense, and apply our applied intelligence practically, underpinned by our commonsense.
Mathematics’ overriding function is to make things precise: to inform and clarify those principles and to explain the characteristics being utilized in problem-solving for maximum utility. So the most important question to ask now is whether we are using the right language models, approaches, and the most optimal thinking processes and systems. LLMs are not the holy grail. Get over it and move on! An incredible amount of time and resources is being wasted, and that doesn’t serve humanity best.
Einstein said:
“There is no method capable of being learned and systematically applied so that it leads to a new [perfect method]. The scientist has to worm these principles out of nature by perceiving in comprehensive complexes of empirical facts certain general features which permit of precise formulation.”
In other words, we must activate our applied intelligence capacity and capabilities practically and abandon the brute force approach.
6ai is redefining how strategy is crafted in the age of AI!
We need specific and applicable principles to discover what we don’t know, to innovate and create. The 6ai Technologies 6-step approach to applied intelligence (ai) is based on Einstein’s use of principles of discovery, from which we can deduce conclusions. Einstein said that his work falls into two parts: first, find your principles; then follow them to draw conclusions, which, of course, can’t effectively be achieved without the proper strategic interweaving of mathematics and philosophy. Newton said something similar, as did Aristotle before them both. So we feel strongly that our foundation is laid in strong intellectual bedrock!
6ai is a human-centric approach. The 6-step path to ai provides a simplified framework for dealing with complexity and for the critical thinking needed to test and develop our new ideas. It’s a higher-dimensional level of useful information retrieval with augmenting and amplifying tools, sharpening ideas that can change the world.
The 6ai proprietary process utilizes Generative AI and Agentic AI purposefully in parts of our software, but our product is not an Agent. It is an enabling, amplifying tool that empowers strategy development for human progress, designed practically and responsibly to augment human intelligence capacity, capabilities, and ingenuity; not to replace humans but to enhance their worth.
To truly maximize the productive value of language models, a more focused and higher-dimensional paradigm of information retrieval and augmentation is required. Mathematics and philosophy are integrated for maximum utility: to sustainably maximize human potential and to support the continued knowledge building that can bolster one’s unique value proposition to one’s organization, the marketplace, and oneself.
It is not intelligent or business savvy to jump on new, unproven trends; it is smarter to identify an enduring business problem: strategy development for winning! Strategy is agnostic, and everyone requires a strategy to succeed, regardless of domain, in business or just in life. Our business model, therefore, has universal appeal, which creates new demand.
6ai is about making things more accessible, aided by quickly evolving digital technologies, and doing it better, faster, and at a fraction of the cost of hiring ridiculously expensive, low-value-for-money consultants and advisors.
This defines the applied intelligence | ai process, and the process is the IP! New thinking, and new focused language models, to get things done! Highly useful and hallucination-free.