The Plateauing AI Hype
Reality Always Comes Home To Roost
I wrote an article in June 2023 titled "The Problem With AI". It argued that AI has become more about hype and fascination than about producing substance for humanity. The problem with AI, as I see it, is the nature of the pursuit itself: the chase for perfection and for the impossible, AGI. OpenAI's hubris in pursuing abstract perfection makes the pursuit itself the main thing, not technology as a tool that can serve humanity best.
They have it the other way around…humanity serving technology.
The history of civilization tells us that today's automation technologies are simply the latest step in a long evolution, and that technology is best utilized when harnessed to human ingenuity in creating tools that serve humanity's interests.
“A thousand years of history and contemporary evidence make one thing clear: progress depends on the choices we make about technology. New ways of organizing production and communication can either serve the narrow interests of an elite or become the foundation for widespread prosperity.”
—from Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, by MIT professors Daron Acemoglu and Simon Johnson.
What has been happening in AI today is a path filled with hype and fascination, a labyrinth: minds consumed by a pre-generated abstract solution in search of a problem. It is not purposeful, empirical, tangible, practical, or commonsensical about how AI can help humanity flourish. Just a lot of big promises. How has the driverless-car world worked out, for example? Roughly $100 billion spent and still counting, yet the industry seems to be drifting further and further from the notion that driverless cars were ever a good idea.
At the heart of these AI tools are large language models (LLMs), which are supposed to interpret natural human language through specialized deep learning models. An LLM interprets natural-language input and produces accurate, human-like output, or so the story goes. Except it doesn't, not even close!
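To make that claim concrete, here is a minimal sketch of what an LLM actually does under the hood: it scores possible next tokens by statistical likelihood and emits the most probable continuation, with no grounding in perception or experience. The sketch assumes the open-source Hugging Face transformers library and the small GPT-2 model as a stand-in; it is an illustration, not a description of any particular commercial product.

```python
# A minimal sketch of next-token prediction, the core of what an LLM does.
# Assumes: pip install transformers torch  (GPT-2 used here as a small stand-in)
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits: one score per vocabulary token, at every position in the prompt
    logits = model(**inputs).logits

# The "answer" is simply the highest-scoring next token, chosen from
# statistical patterns in the training text, not from understanding.
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode(next_token_id))  # likely " Paris", but not guaranteed
```

However fluent the continuation looks, the mechanism is the same: pattern completion over training text, which is the gap between "human-like output" and human understanding that the rest of this piece is about.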
Humans do not perceive things as they are; rather, we are constantly inventing our world and correcting our mistakes by the microsecond. We experience the world and the self with, through, and because of our bodies. Accordingly, intelligence is not only in the mind; it is also in our physical being, in reactions based on our senses, in what we see, feel, and hear. This is common sense, and it runs contrary to the neural-network concept that treats intelligence as something residing only in the mind. Communication and understanding are therefore connected to our subconscious and conscious existence, and both are critical to accurately processing what someone is saying.
So human understanding is holistic and commonsensical, and "machine comprehension" is not; it is a program, unable to function on its own. Neural networks, deep learning, and LLMs are, in effect, theories built on a fallacy.
As Agatha Christie put it: "Everything must be taken into account. If the fact will not fit the theory — let the theory go."
AI hype and fascination have plateaued. The launch of ChatGPT back in 2022 put the world on a magic-carpet hype ride, and everything was predicted to happen within ten years: driverless cars (a prediction many, including Elon Musk, had already made ten years earlier), AI replacing humans in fully automated workplaces, AI curing cancer, poverty, and anything else you could think of. But things have petered out, reality has come back into play as it always does, and many are now questioning whether AI is just full of hype.
Nevertheless, this is a normal part of market and human behaviour in its relationship with new technology. It always happens. Investors initially go crazy and would trip their own grandmothers to invest in the new shiny object, relying on hype, fascination, and wild predictions rather than rational thought and critical thinking. Money and visions of grandeur blind the normally rational; many become suckers and jump onto the hype train, fuelling bogus valuation models that serve clever insiders' wealth objectives. Pump and dump, no less. So in the end the house, the casino, a.k.a. the stock market, always wins.
As time passes and the predictions look less and less likely to come true, the fog of hype begins to clear and disappointment sets in. The bright light of reality shrinks lofty dreams. Markets then begin to accept reality and form a more realistic view of the technology's potential.
Take OpenAI's ChatGPT as the best example: people begin to see all the blemishes and flaws. The shine fades and, as kids do with toys, they put it aside. Infinite loops of hallucinations are just not fun anymore; someone could lose an eye.
Hallucinations continue to be a vulnerability, a big problem for OpenAI, an Achilles heel with no cure in sight. Additionally, the strange images recently produced by Google's Gemini tool leave many shaking their heads at all the claims of revolutionary technology coming out of Big Tech. Revolutions can be messy, I guess, and the majority of them do fail.
Further, generative AI today is better known for producing deepfake porn, online scams, and attacks on democratic processes than it is for benefiting humanity.
There is no doubt, at least from where I stand, that artificial intelligence will have a meaningful impact on our global economy and society, and that it has the potential to be transformative. However, positive outcomes will come down to the choices we make and how we apply AI to our lives. Will we do so purposefully and responsibly, or will we allow Big Tech to run right over us and become the arbiter of our lives?
AI is also proving extraordinarily expensive, so basic market forces of supply and demand will ultimately determine its viability, regardless of the hype Big Tech and its propaganda agents throw at us.
Large language models are what GenAI effectively is: systems trained on vast troves of data scraped from the internet, built to fool us into believing that this is what intelligence is. But don't believe the hype!
Further, the likes of OpenAI's ChatGPT have shown no regard for copyright law, taking content and intellectual property from creators without compensation, which is effectively how they fuel their models. But many entities and content creators are now beginning to fight back and sue, as the big-budget New York Times is doing.
As the lawsuits pile up, we are looking at hundreds of billions, even trillions, in damages, including future compensation and ongoing licensing fees for content. So when data is no longer free and must be paid for, we can come back and look at the viability of OpenAI's business model and its $86 billion valuation.
Just as important and alarming are the huge environmental costs associated with AI. The vast data centres powering AI consume massive amounts of energy and water; some studies put their share at about 4 percent of global greenhouse gas emissions. Regulators are beginning to look more closely at this; again, the free ride is ending.
Just as all empires fall, so too do all bubbles burst; in recent memory, the dot-com bubble of the late 1990s and early 2000s, and the financial bubble of 2008. With the higher interest rates of the past year, the free-money era is drying up, and it may no longer be possible to simply say "GenAI" and get several million in start-up funding, or raise ridiculous capital for a new tech venture fund. A real business idea will once again have to be articulated as a reason to invest.
So if you are an investor, it might be prudent to ignore the hype and go back to basics: assessing AI companies and technology models on genuine, verified market fit, factoring in costs, revenues, and likely profitability. Additionally, if you are for humanity, it will be important to assess each company's environmental impact and its AI safety policies and safeguards.
We are therefore at a crucial inflection point for AI and humanity. There is much to consider and reconsider. The hype and abstract fascination are coming to an end, so the future winners will be those who can apply their intelligence effectively toward making good investment decisions.
The bottom line is that we don't know, and can't yet see clearly, what the future impacts of artificial intelligence on our world will be; the jury is still out. Anyone who professes to predict that future is a hubristic fool, and you should avoid taking advice from them. Nevertheless, the AI technology we do have today, what we have to work with, is incredibly useful. How we ultimately choose to use it, and to innovate and create with it (purposefully and responsibly?), will determine AI's best use for our humanity.