The AI Illusion

Fascination, Hype & The Myths of Venture Capital

Perry C. Douglas
9 min read · Jul 27, 2023

In a previous article I mused about Plato and Aristotle having a conversation about intellect. Plato felt that the best use of human intelligence was the pursuit of intellectual perfection through abstract academic challenges. Aristotle, on the other hand, believed in pragmatism: the useful application of knowledge is the best use of the mind. Intelligence serves humanity best when it is applied and harnessed to our ingenuity to innovate and create tools. Getting caught up in the fascination of abstraction, Aristotle would say, is a useless academic endeavour that does not serve humanity. He also felt that such fascinations served only the motivations and pleasures of the elites, whose main purpose was to control society through self-interested applications of intelligence.

Today, these elite few are the big-tech corporations and billionaires who represent the new tech aristocracy (the aristocracy), along with their court of acolytes: start-up tech investors and VCs, academia, scientists, government, journalists, and wilfully ignorant social media influencers. This aristocracy-led, top-down system essentially runs the world as the de facto authority and arbiter on all matters of technology and society, steering the global economy and acting as if only it knows what's best for humanity.

AI is all that anyone can talk about these days, despite the fact that the vast majority don't know what AI is. Yet AI has not lived up to the hype, which has been building since the late 1950s. It has achieved little of what its proponents claimed it would over those many years: plenty of glowing announcements and "things to come," but measurably it has not replaced the human workforce in any meaningful way. Radiologists, for example, were the greatly endangered species first targeted by AI for extinction; today, demand for radiologists is the highest it's been in years. AI has no cognitive abilities. It doesn't know what humans are, nor does it understand how humans perceive the universe through nature, or what nature itself is.

Consequently, AI can't interpret natural human language; it can only deal with its impossible language (see the article titled The Problem With AI). What AI is doing is transposing the information it receives into its machine-language representation and deriving meaning based on how it has been programmed.

AI runs on what are called Large Language Models (LLMs), computing outputs across multiple layers of data. Factually, there is no such thing as deep learning, because there is no cognitive intelligence happening; just data-processing computations.

So, in reality, there is no learning happening with LLMs, deep or otherwise. By definition, learning is gaining knowledge of, or skill in, something by study, experience, or being taught. AI neither studies nor has experiences that can generate conscious thought.

Therefore, the objective truth about AI is that it's built on LLMs, which inherently reflect the biases of those writing the code: mainly white males from California, detached from the broader world and the problems and challenges it faces. This makes AI's perspective on the universe narrow, one-dimensional, and not diverse, serving not the interests of the whole world but only those in control of it… the aristocracy.

LLMs are not intelligence of their own origin; they are trained on data sets, limited to the corpus of information they have to compute. AI is not magic, but it does create the illusion of human-like intelligence through the impression of having general intelligence.
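The point that a model's outputs are bounded by its training corpus can be illustrated with a toy sketch (my own hypothetical example, not anyone's production code): a bigram model that "predicts" the next word purely from counted co-occurrences. Real LLMs are vastly larger, but the principle of statistical next-token prediction, with no understanding involved, is the same.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then predict the most frequent successor. There is no comprehension here,
# only tallying; the model cannot say anything its corpus never showed it.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the most common word following `word` in the corpus, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # "cat" follows "the" twice, more than any rival
print(predict_next("fish"))  # None: "fish" never precedes anything in the corpus
```

Ask it about any word outside its corpus and it has literally nothing to say, which is the sense in which such models are "limited to the corpus of information they have to compute."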

General intelligence and artificial intelligence exist, but as two separate things. And this is perfectly fine; there are magnificent benefits to be gained for humanity by harnessing AI. However, pursuing Plato-like perfection in abstract academic pursuits, trying to finesse the square peg of AI into the round hole of general intelligence, does not serve humanity best.

The new kid on the hype block these days is generative AI, e.g., ChatGPT, launched in 2022, which has created unbelievable fascination. However, when you look beyond the hype and from an applied intelligence perspective, ChatGPT is highly unimpressive: a souped-up, unreliable predictive search engine, underpinned and promoted by the aristocracy (Microsoft, Google, Meta, and Amazon, the usual suspects) to capture the universe and drive increased dependence on its technology.

If you consider ChatGPT helping students cheat on exams, flooding the internet with deepfake content, and circumventing democracy to be impressive, then you've been taken in by the AI illusion. Amazingly, many believe they've discovered an edge by using ChatGPT to do the work for them. But if everyone has access to and uses the same generative AI technology for an advantage, then where is the advantage?

In past articles, I've given examples of the AI illusion, e.g., New York-based start-up Normal Computing, which recently raised $8.5m in seed funding to "enable artificial intelligence (AI) to give unprecedented control over the reliability, adaptivity and auditability of LLMs [systems]." Making outputs more "reliable" is the expressed core of Normal Computing's business model. However, Normal is just hallucinating on LLMs, in line with Plato's pursuit of abstract perfection. Normal's thesis seems to be that throwing more layers upon layers of data at the problem will make the outputs more "reliable." This hallucination loop can go on to infinity, I imagine. Aristotle would say that it is not productive for humanity to waste time on pure mathematics; pursue the useful utility of applied mathematics instead — applied intelligence to solve real-world problems.

General intelligence, the almighty g-factor (Gf), is by definition the broad mental capacity of operating in consciousness and subconsciousness, in natural environments. Performance can only be a function of cognitive-ability measures, which are also influenced by one's awareness of one's surroundings. This general mental ability underlies specific mental skills in areas such as spatial, numerical, mechanical, and verbal reasoning. The Gf influences performance on all cognitive fronts and tasks, allowing people to construct ideas from their different cognitive abilities, based on their experiences and capacity.

These cognitive abilities allow us to acquire knowledge and solve problems by setting a path to our constant learning, both consciously and subconsciously.

Very recently, French generative AI start-up Mistral AI raised €105 million without a working product, a prototype, or even a solid basis for a business. Still, it raised tons of cash, in one of Europe's largest-ever seed rounds — all driven by the fascination and hype surrounding the potential of generative AI. This has become a hype game in which the acolytes don't want to be left out.

Future-funding valuation models are the new paradigm in the VC world: conveniently contrived, bogus valuation models geared to generate magnificent returns in the VC hype game, steering the overall direction of AI product development, influenced top-down by the aristocracy.

We've seen this before, in the dot-com era of irrational exuberance. A bubble was created, and it subsequently burst. The similarities between then and now (dot-com and generative AI) are evident, so taking history as a guide, it won't be any different this time.

Emad Mostaque, founder and chief executive of Stability AI, also points out the stark similarities between the dot-com bubble of the late 1990s and AI today. In both, speculation ran wild: back then, all you had to do was add ".com" to the end of your company's name and, abracadabra, you were a new tech billionaire. Today, it is widely understood that adding "generative AI" to your pitch deck will meaningfully strengthen your chances of getting venture capital. The hype game.

In the first quarter of 2023, $15.2 billion was plowed into generative AI companies globally, according to PitchBook data. Not surprisingly, the vast majority of that came from Microsoft (MSFT)'s $10 billion investment in OpenAI, maker of the generative AI chatbot ChatGPT. But even setting Microsoft's amount aside, VC investments in generative AI were up by close to 60 percent compared with the same period the previous year (2022).

Before that frenzy, crypto was the hype; it too was going to change the world. That fascination came to an abrupt and unpleasant crash when the much-hyped but very fake "genius wonder kid" scammer Sam Bankman-Fried (SBF), CEO of the bullshit cryptocurrency exchange FTX, was revealed to have built on nothing. Even the most "sophisticated" investors, such as institutional pension funds, and the highest of the aristocracy, Sequoia Capital (which lost $300 million of investor capital), got caught up in the hype. But don't feel sorry for Sequoia; it's just part of the future-funding valuation game. Firms like Sequoia use their brand name (if Sequoia is in, it must be good) to create momentum through hype and intentionally drive up the valuations of their start-up investments.

Professor Julia Ott of The New School's Centre for Capitalism Studies says that we shouldn't buy into the myths about venture capitalism: that early-stage, VC-specific financing is essential; that VCs deserve tax breaks and policy changes for their contribution to the economy; that they deserve to get rich because they take on so much risk. "These are myths, it isn't true," according to Professor Ott.

Also part of the myth, she says, is that "these people are good people and what they do benefits everyone and if you muck around with that virtuous cycle of return and reinvestment, everything is going to collapse… innovation, job growth etc., will all fall apart." Prof. Ott continues that in "the funding that has historically gone into innovative enterprises, that creates jobs, create economic growth, and bring products and services to the market, that makes peoples' lives better, venture capital plays a very negligible role."

Therefore, it’s a game of “building up these high-valuation incumbents who buy up competitors with their own VC money and crush the competition, stifle innovation, raise prices and suppress workers etc.”

"I always worry when we talk too much about something," says Sophie Forest, managing partner at Montreal-based Brightspark Ventures; "…it's important to step away from the hype," in order to critically assess the reality and risk.

So in 2008, the thing was start-ups trading on potential future cash flow; now generative AI start-ups are trading on future funding. These are invented schemes, games that make money by pumping and dumping to the next hot-potato round of VC funders, says highly regarded risk analyst Nassim Nicholas Taleb. In a recent appearance on CNBC, Taleb (who correctly called the 2008 crash), asked by the host whether another similar crisis is coming, said, "Of course… risk is right in front of us… it's a white swan, not a black swan." Taleb continued, in paraphrase: look at the conditions; two of the biggest risks are, first, a $100-trillion-plus real estate market carrying ridiculous valuations set when interest rates were at 3%, now at 7% and climbing; and second, new technology, particularly unproven AI companies. History shows that new tech is always overdone and gets crushed the most in a market downturn or crisis.

The Nasdaq's value more than doubled in 1999 alone; according to Goldman Sachs, tech stocks on the Nasdaq plunged 81 percent between the index's peak in March 2000 and late September 2002. With generative AI, and AI investing in general, we can't predict the timing of the next bubble. But we can learn from history: as human nature goes, this bubble will be spectacular and have an unimaginable impact on humanity.

Investing based on fascination and hype has never worked out, but successive generations often avoid reading history, so we keep falling for the same old fascination stories over and over again. Technology changes, and so do time and place, but human nature doesn't.

In the end, the key to successful investing is figuring out what's real and what's not; separating fact from fiction. The historical evidence is clear: most fascinating new technology is hyped by the community that benefits most from it, and the vast majority of its predictions will be wrong. Because it's a fascination hype game.

AI is just at the tip of its frontier, and the reality is that we are still sorting out what AI is and how it should be applied intelligently to best serve humanity. If we don't create countervailing forces to the new tech aristocracy and don't get a seat at the table, we'll be trampled by the herd. Our future depends on what we do next, so understanding the AI illusion for what it is, and developing countervailing forces to protect and advance humanity's future interests, is critical — to avoid crossing the Rubicon of uncontrollable AI.

Written by Perry C. Douglas

Perry is an entrepreneur & author, founder & CEO of Douglas Blackwell Inc., and 6ai Technologies Inc., focused on redefining strategy in the age of AI.
