Canary In The Coal Mine

Hype-Cycles and Why 6ai Strategy First

Perry C. Douglas
Mar 16, 2024
@6ai Technologies

History is riddled with sure things that sound good in theory, only to prove a bust in practice. From the Hindenburg to the dot-com era, history teaches us that it’s wise to be skeptical, particularly in periods of hype and fascination. It’s best to tune out the noise and nonsense and focus on what is real and what can be reasonably measured. Evidence, not hype, is the safe bet. This commonsense approach goes back to ancient times: Plato’s fascination with the abstract and a utopian view of the universe, versus Aristotle’s empirical, practical approach to just getting things done. In short, Aristotle’s way has proven the more useful for building civilizations and sustaining innovation.

When it comes to the hype about generative AI, OpenAI/ChatGPT, and the rest, things are no different. The time, the place, and the technology have changed, but human nature hasn’t. Believing in nice stories, with logic and rationality pushed aside by greed and fascination, is an everlasting human story in itself. And as our wise friend Albert Einstein is said to have remarked, “Two things are infinite: the universe and human stupidity; and I’m not sure about the universe.”

As a student of history, in preparing to write this article I couldn’t find anything in history more hyped than GenAI. Here is an industry being pushed to valuations in the trillions of dollars while revenues sit only in the hundreds of millions. It recalls the dot-com era, when valuations ran wild alongside stories of a future dot-com utopia, and businesses were valued on “potential” and “burn rate,” underlined by hype. Promises, promises, promises: revenues would eventually come if you just believed, and you’d see 1000x returns soon. It didn’t happen. The market crashed.

History tends to repeat itself, especially when you’re not paying attention to the flashing warning signs, or bothering to connect the dots and see the emerging patterns. Ignore them and you end up like Icarus: warned not to fly too close to the sun, until reality sets in and weighs down your wings. And like Icarus, you fall from the sky and plunge into the sea of regretful stupidity. Generative AI hype is flying too close to the sun right now.

The diagram below shows how emerging technologies move through three key phases.

@Gartner Research

IT research firm Gartner positioned generative AI (GenAI) at the “Peak of Inflated Expectations” in its hype cycle for emerging technologies. The next stage is a slide into the “Trough of Disillusionment.” After two to five years, the technology eventually emerges with tangible benefits as it progresses through the “Slope of Enlightenment” into the “Plateau of Productivity.” However, a lot of damage will be done to those who do not understand or heed the warnings about hype cycles.

When a technology is heavily promoted by the sector positioned to benefit most from the hype (i.e., big tech), things become exaggerated and inflated. People become willfully ignorant, believe the hype and the expectations, and the herd gains momentum. But it eventually fades as reality begins to prevail.

So Gartner’s model accurately reflects how a new technology shifts from pure hype to something more tangible and real.

6ai Technologies understands clearly how new tech cycles and human nature work, so we are focused on the real value function and applications of GenAI for strategy development.

Having worked in technology most of my career, I’ve seen different trends follow this pattern — SaaS, big data, IoT, metaverse, Web3, cloud computing, etc. Some landed, others crashed, and yet others are still in a holding pattern. However, I can’t remember a new technology like GenAI receiving the same amount of hype and being embraced as quickly as it has. — Brent Dykes, Forbes

In a survey of over 1,000 companies, AIIA found more than two-thirds ranked GenAI as a top priority going forward. Respected technology experts like Bill Gates see AI as being on the same level as electricity, antibiotics, automobiles, computers, mobile phones, and the Internet.

However, it’s important to remain skeptical, and look for evidence to formulate a strategy to take on the future. And when it comes to GenAI it is also important to remember that it represents only a subset of AI. But the hype and general lack of knowledge about it, coupled with the promotion and money involved has generated significant investment growth.

Nevertheless, warning signs are still flashing. ChatGPT, for example, has not had a sustainable, genuinely Aristotle-like impact on business. It has been more fun and fascination since it first appeared on the scene a couple of years back. But the fascination is beginning to run dry.

Venture capitalist Benedict Evans has raised the issue on X.

But we don’t need a VC to tell us that. I can just look at my daughter in university for good insight. When ChatGPT first came out, she and her fellow students rejoiced, believing it would make their academic journey much easier. Now it’s, “Dad, can you please proofread my essay… ChatGPT is more trouble than it’s worth… and you can’t trust the #!*^ thing.” Hallucination is one of ChatGPT’s main problems, and who wants to spend their time fact-checking it all the time?

From lawyers using it and screwing up in court, to frustrated academics finding that ChatGPT can’t do better research, to the many shortcomings of customer-service bots: I don’t know anyone who would say bots are very useful in solving their customer-service needs.

With no credible evidence or indication that the hallucination problem will be solved anytime soon, or even at all, the investment or ROI value of ChatGPT relative to practical commercial application is hard to reconcile. So we are in a basic speculative bubble right now.

The idea that the world should build global policies and future economies on the premise of something that is increasingly being proven uncertain is simply not intelligent.

“I am not saying anyone’s particular policies are wrong, but if the premise is that generative AI is going to be bigger than fire and electricity, [we would be sadly mistaken],” says Gary Marcus. It will most likely end up a mirage, accompanied by a social-media-level fiasco in which consumers are exploited with fake news and deepfakes, and democracy is circumvented. In fact, that reality is already here.

In my mind, the fundamental error that almost everyone is making is believing that generative AI is tantamount to AGI (artificial general intelligence: as smart and resourceful as humans, if not more so). — Gary Marcus

All of this irrational exuberance has more to do with the hype machine driving stock prices higher and VCs pushing startup valuations into the stratosphere, so they can unload them on the next investor and make money. It’s a game of hot potato.

Dario Amodei, CEO of Anthropic, recently projected that we will have AGI in two to three years. Demis Hassabis, CEO of Google DeepMind, has also made projections of near-term AGI. However, narrow machine intelligence can’t ever be human general intelligence. It’s just a bullshit narrative to keep the game going and the money flowing. And don’t believe the nonsense about solving the hallucination problem by just adding more data. That’s fantasy: more data yields more of an infinite pure-mathematics loop. Nice to think about, but without practical application for serious people trying to get meaningful stuff done.

Pure mathematics consists entirely of assertions to the effect that, if such and such a proposition is true of anything, then such and such another proposition is true of that thing. It is essential not to discuss whether the first proposition is really true, and not to mention what anything is, of which it is supposed to be true. […] Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true. People who have been puzzled by the beginnings of mathematics will, I hope, find comfort in this definition, and will probably agree that it is accurate.

― Bertrand Russell

This is why and how ChatGPT makes things up and doesn’t know what it’s talking about. Its machine “language” is not language in the human sense. As Noam Chomsky says, “A language is not just words. It’s a culture, a tradition, a unification of a community, a whole history that creates what a community is.” A machine doesn’t know what the universe is and has no comprehension of natural language, so it can’t properly interpret it. It makes up what it hasn’t been trained on, or simply hallucinates nonsense.

Applied mathematics, therefore, is the cornerstone of applied intelligence | ai. It seeks objective truth, because truth, nature, defines the universe, and applied mathematics explains it. Physics without applied mathematics would be useless.

So the belief that scaling to functional systems just requires adding more data will never add up. “It’s foolish to imagine that such challenging problems will all suddenly be solved. I’ve been griping about hallucination errors for 22 years; people keep promising the solution is near, and it never happens. The technology we have now is built on autocompletion, not factuality,” says Marcus.

Sharon Goldman wrote an article in VentureBeat titled “Sam Altman wants up to $7 trillion for AI chips. The natural resources required would be ‘mind-boggling.’” Her article came after the Wall Street Journal reported that OpenAI CEO Sam Altman wants to raise up to $7 trillion, from investors including the U.A.E., for a “wildly ambitious” tech project to boost the world’s chip capacity, which would in turn vastly expand the ability to power AI models. If this happens, then Einstein would be correct: human stupidity is indeed infinite.

The destruction of human intelligence capacity would also be colossal, intellectual capital would be no more and machines/Big Tech would run the world. We, humans, would serve technology instead of technology serving us. This will effectively be the end of humanity.

“Morality is the basis of things and truth is the substance of all morality.”

— Gandhi

Why 6ai Technologies Strategy Solutions

Since the launch of ChatGPT, most organizations still face the same original questions: how to use it effectively to create value for themselves, and what the practical utility of ChatGPT actually is. How best do we put this technology to work in our business interests? Many companies feel they must adopt ChatGPT for fear of being left behind. However, many are now discovering that GenAI tools like LLMs, while fascinating and even impressive at times, can’t do simple workplace tasks that would tremendously help the organization’s efficiency. Many are finding out that this is no plug-and-play, yet plug-and-play is what potential users want most.

“In a survey of 10,000 senior leaders, 97% said strategy is the most important thing to their organization’s success. But 96% said that they lack the time and the right tools to effectively engage in strategy development within their organizations.”

— Harvard Business Review

When asked about the most important business objectives they have set for their enterprises over the next two years, business owners’ responses showed a substantial degree of alignment around a growth-oriented business strategy.

— MIT Technology Review Insights

The survey also points out that data-driven strategy is now regarded as a supreme driver of business value. However, few AI tools exist specifically to fill those strategy-development needs. What leaders want, the survey says, are single, simple strategy solutions, and executives report that they are currently evaluating or implementing new platforms to address their data and strategy challenges.

In brief, leaders want Aristotle-like solutions: friendly consumer experience-based products that are easy to use.

  1. Aristotelian practicality and performance over Platonic novelty;
  2. GenAI must use vectorized databases;
  3. Human executive decisioning must be the supreme override;
  4. Data sources must be verifiable, reliable, and empirically warrantable;
  5. Price and performance are critical: it must be very affordable, with low-risk implementation.

A recent study suggested that more than 70% of the large companies surveyed were still trying to figure out how best to use AI to benefit their specific business. Larger corporations may be able to hire data scientists and other specialists to manage GenAI adoption, but that’s not financially practical for the other 80% of businesses (SMEs) out there. Therefore, the 6ai SaaS subscription model makes the most sense for the majority of organizations and people, providing a simplified product solution for digital transformation that is easy to use.

The global digital transformation market is expected to grow at a compound annual growth rate (CAGR) of 24.2% and is projected to reach a market size of USD 3.6 trillion by 2029; currently, the market stands at just over 8 billion. 6ai aims to capture that market by delivering practical, consumer-experience SaaS software that enables those without technical backgrounds to develop highly effective strategies without having to learn complex software or digital tools.

6ai is focused on solving business and consumer problems directly, with no expensive consultants required. The six-step process focuses on identifying the problem first, generating fact-based insights, and building strategies to drive growth and operational efficiency.

Instead of embracing the latest AI technology with Plato-like fascination, 6ai strives to build Aristotle-like utility tools that real people will find useful. Our knowledge bases retrieve the closest matching records to best answer specific queries. The simplified document processing, storage, vectorization, and extraction pipeline is shown in the image below.
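To make “closest matching records” concrete, here is a minimal, self-contained sketch of nearest-match retrieval. This is an illustration, not 6ai’s actual implementation: the toy bag-of-words “embedding” stands in for the learned vector embeddings a real vectorized database would use, and all names here are ours.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector.
    Real systems use a learned embedding model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the k records closest to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d["text"])), reverse=True)
    return ranked[:k]

docs = [
    {"id": "kb-1", "text": "Quarterly revenue grew in the retail segment"},
    {"id": "kb-2", "text": "Supply chain costs increased due to shipping delays"},
    {"id": "kb-3", "text": "Retail revenue growth driven by online sales"},
]
top = retrieve("retail revenue growth", docs, k=2)
# top[0] is kb-3 (most overlapping terms), top[1] is kb-1
```

The same shape scales up by swapping `embed` for a real embedding model and the sort for an approximate-nearest-neighbor index.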

The prompt-engineering (PE) augmenting method allows our Focused Language Models (FLMs) to connect with existing knowledge bases with speed and accuracy, retrieving the most relevant content from a higher-dimensional representation.

The augmenting process gives FLMs access to up-to-date, well-sourced, and relevant information that can be transmitted clearly and applied to generate insights for organizational strategy formation. The traceability of the data is critical: it lets users be confident in the source of the data they are using, bolstering the trustworthiness of the outputs and creating a stronger foundation for informed decisioning.
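One simple way to make that traceability concrete is to tag every retrieved passage with its source id when assembling the augmented prompt, so each claim in the generated answer can be traced back. The sketch below is a hypothetical illustration; the function name, field names, and prompt wording are our assumptions, not 6ai’s.

```python
def build_grounded_prompt(question, passages):
    """Assemble a retrieval-augmented prompt in which every passage
    carries a source tag, enabling claim-by-claim traceability."""
    context = "\n".join(f"[{p['id']}] {p['text']}" for p in passages)
    return (
        "Answer using ONLY the sources below, citing the source id for each claim.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical passages returned by a retrieval step
passages = [
    {"id": "src-1", "text": "Most SMEs lack in-house data-science teams."},
    {"id": "src-2", "text": "Subscription software lowers adoption risk for SMEs."},
]
prompt = build_grounded_prompt("Why do SMEs need turnkey strategy tools?", passages)
```

Because the model is instructed to cite `[src-…]` ids, any assertion in its output can be checked against the exact record it came from.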

Our Insight Engine (IE) goes further, analyzing any claim, fact, or assertion and identifying supporting evidence or contradictions for better contextualization toward effective strategy design, encompassing structure, ensemble, event, and context.

The retriever and reader methods enable the models to query corpora of data, overcoming the hallucination-prone nature of large language models. Available open-source frameworks make it easy to build focused, template-based PE tools. Our contextualized relevance-seeking approach, versus the predict-the-next-word approach, is one of the many important differences between ChatGPT and 6ai Technologies.

6ai sees technology differently than the general AI crowd. We view intelligent technology not as replacing humans but as enhancing human value, creating a win-win for owners and workers. Marginalizing humans has not been proven to add sustainable profit or real shareholder value for organizations over the long term.

The scientific process interweaves human intelligence with artificial intelligence to maximize value creation for the future of work. It bridges the gap between the value available from big data and artificial intelligence and the critical, indispensable necessity of human executive decisioning.

The process guides users through our proprietary six-step IP process, retrieving relevant factual information from reputable, reliable sources that can be taken as empirically warrantable, turning information into valuable insight, and identifying risk and reward scenarios to further optimize and align strategy development for genuine growth outcomes.

So no matter how powerful AI technologies may seem, their abilities are only as good as the humans involved in the overall process. Inputs are critical: garbage in, garbage out. GenAI is not a magic wand; proper GenAI platforms and tools still require human intelligence. In fact, the technology is useless without it.

We are seeing more and more GenAI disappointment each day: the canary in the coal mine, an early warning sign. There are always early warning signs, but people get so wrapped up in the hype and fascination that they are blinded to the bright light of reality.

The use of GenAI in business and in people’s lives will no doubt be transformative. However, how the technology is used will ultimately determine success, and harnessing its transformative power purposefully and responsibly is in the best interest of humanity. That is how it should be done. GenAI is necessary for transformative business growth in the 21st century, whether to reposition, stay relevant, or pursue major transformative growth. However, a formidable strategy is the first-principles step, and it will be the ultimate determining factor for long-term sustainable growth and prosperity. This is the core objective and deliverable for 6ai: technology for human purpose.


Perry C. Douglas

Perry is an entrepreneur and author. His new book, "ai - applied intelligence - A Renaissance in New Thinking...", can be found on Amazon.