ChatGPT | Just Making Shit Up
…applied intelligence: addressing growth and socially responsible agendas that are highly useful to humanity
What is the value of ChatGPT if you constantly have to fact-check it? Call me stupid, but isn’t that more work and less value? Yet most people are buying the hype around ChatGPT without even knowing the truth about it. At the end of the day, the whole AI and ChatGPT scene resembles many bubbles of the past, the closest being the dot-com tech bubble that burst in the early 2000s. It was during that bubble’s run-up, in 1996, that then Federal Reserve Chairman Alan Greenspan famously described the period as one of irrational exuberance.
“But how do we know when irrational exuberance has unduly escalated asset values … which then become subject to unexpected and prolonged contractions…?”
The ongoing hype around, and fascination with, all things AI has risen beyond Greenspan’s irrational-exuberance framing; it has become a different type of bullshit!
In a recent article titled GPT-5 and irrational exuberance: Rumors of AGI’s imminent arrival are greatly exaggerated, the well-known AI critic, professor, and AI entrepreneur Gary Marcus lays out how irrationality and misunderstanding about AI have moved into fantasy. People have gone into full-blown madness, even coining the term artificial general intelligence (AGI).
General intelligence refers to a broad set of cognitive abilities. These abilities enable people to learn new things and solve problems, and this broad mental capacity underpins specialized talents in spatial, mathematical, mechanical, and linguistic skills. General intelligence is human intelligence. It does not and cannot exist in a machine, and the term artificial intelligence is misleading and wrong; it has led people to believe in fantasy instead of reality. Furthermore, narrow-minded scientists, pushed by the venture-capitalist community, have now become ego-driven philosophers, contributing to the AI fantasy by believing that piling on more layers of data will let them perfect deep learning into something more like general intelligence. However, deep learning in itself is not learning; it is just the imitation game of predicting the next words in a sentence, bullshitting its way through, i.e., ChatGPT.
Case in point: Bing recently accused a lawyer of sexual harassment based on its ridiculous misreading of an op-ed that reported the exact opposite. Any reasonable person who can read, using their human general-intelligence capacity, could not have made such a stupid mistake. There are countless more GPT errors of this type, going to show how it can make things up; it doesn’t know what it’s talking about. GPT can be dangerous not because it’s too smart but because it’s too stupid, and when you couple stupid people with stupid machines, well, as Forrest Gump famously said: “Stupid is as stupid does.”
People like to believe in fantasy stories; it’s human nature, and that’s why cons work so well: the more fascinating the story, the bigger the suckers’ appetite for more. ChatGPT has not revolutionized the world yet, regardless of what many misinformed AI zealots on YouTube keep claiming. GPT is not magic, and according to Marcus, “It’s not going to be smart enough to revolutionize fusion; it’s not going to be smart enough to revolutionize nanotech, GPT-4 can’t even play a good game of chess (it’s about as good as chess computers from 1978).” So as we move toward the new wave of hype about the coming GPT-5, every flaw it has just gets amplified. “GPT’s regurgitate ideas; they don’t invent them,” says Marcus.
The internet is filled with ignorant charlatans talking about what they don’t know. So, in my view, people are magnificently misdefining what intelligence is. Many bogus claims about deep learning are simply scaled up into fantasy; its limitations outweigh the value it presents to humanity. Deep learning has no natural way to deal with hierarchical structure and struggles with open-ended inference. It is not transparent about its answers, because it doesn’t provide real sourcing for them, and sourcing is how knowledge acquisition and verification work. In the end, GPT can’t be trusted.
We need to apply more critical thinking and not just resign ourselves to taking others’ word for it; in the rapidly advancing era of AI, the ability to think critically has never been more important. MIT professor Noam Chomsky, ‘the father of modern linguistics,’ says that AI is essentially an efficient tool, really good at performing tasks, which he grants is great. But he gives ChatGPT as an example of how AI systems are deficient: ChatGPT works with an “impossible language,” he says, one that violates the rules of real language, and those violations underlie how ChatGPT performs.
A natural human language, according to Chomsky, ignores the linear order of words and attends mainly to the abstract structures in the language. Cognitive science tells us that the mind creates abstract structures and constructive delusions in language communication systems, and these cannot be picked up by programmed machines’ impossible-language processors. ChatGPT, for example, can’t distinguish real natural language and intent; it interprets input through the machine language of its programmed data sets.
Therefore, as toys, GPT-infused offerings might be fun to play with, but as an effective business solution that adds real value, ChatGPT can’t hold a candle to human general intelligence. The way forward, then, is to apply applied intelligence | ai to AI and LLMs (large language models).
Real ai, therefore, is holistic: it is about how we can best use advanced technologies to create value for ourselves, our organizations, and society. Applied intelligence is a proprietary thinking system and process developed by Douglas Blackwell Inc. that integrates intelligent technologies with cognitive human intelligence and turns the insights generated into strategy.
Leaders of businesses, institutions, and governments alike can apply ai-generated strategies to those value-creating tasks and build the capacity and capabilities to optimize value creation. The ai approach calls for the use of data science as an augmenter, a tool, to assist human general intelligence in finding the most relevant, purposeful subject matter for knowledge building. Real ai perceives things as comprehensive complexes of empirical facts so that certain general features can be found and extracted, becoming insightful and useful to the formulation of strategy. Good ideas can be cognitively vetted through knowledge acquisition from reputable and reliable sources of information and insight. Actions can then be taken with confidence and considered warrantable.
So ai | applied intelligence is Douglas Blackwell Inc.’s (www.douglasblackwell.net) proprietary solution. It uses a customized Prompt Engineering (PE) methodology as an augmenting tool that leverages large language models to connect with existing and developed knowledge and to retrieve relevant content at a higher dimensional level.
ai | PE augmenting gives language models access to up-to-date information, making their output more useful and reliable for strategy building, and making it faster and more precise to develop insightful answers to highly specific inquiries.
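To make the mechanics concrete, here is a minimal sketch of this kind of prompt augmentation: passages retrieved from a curated knowledge base are folded into the prompt before it reaches the model, so the model answers from vetted sources rather than from whatever it happens to predict. The toy document list and keyword retriever are assumptions made purely for illustration; they are not the actual Douglas Blackwell PE methodology.

```python
# Illustrative sketch only: a curated knowledge base plus a toy retriever that
# builds an augmented prompt. The documents and retrieval logic are placeholders.

KNOWLEDGE_BASE = [
    "Q2 revenue grew 12% year over year, driven by the services segment.",
    "The board approved a sustainability initiative targeting net-zero operations by 2035.",
    "Customer churn fell to 4.1% after the loyalty-program relaunch.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Toy keyword-overlap retriever standing in for a real search index."""
    terms = set(question.lower().split())
    ranked = sorted(KNOWLEDGE_BASE, key=lambda doc: -len(terms & set(doc.lower().split())))
    return ranked[:top_k]

def build_augmented_prompt(question: str) -> str:
    """Place the retrieved passages ahead of the question so the model answers from them."""
    context = "\n".join(f"- {passage}" for passage in retrieve(question))
    return (
        "Answer using only the sources below and say which source you relied on.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_augmented_prompt("How did revenue change in Q2?"))
```

The point of the pattern is that the up-to-date, vetted material travels with the question, which is what makes the resulting answer easier to trace and to trust for strategy work.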
The retriever and reader methods let models query large corpora of text, overcoming the limited context that large language models have on their own. Open-source frameworks like Haystack, for example, make it easy to build PE tooling and stand up LLM prototypes quickly; a sketch of such a pipeline follows below. This contributes to how applied intelligence builds specialized Insight Engines.
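As an illustration of the retriever-reader pattern, the sketch below assumes Haystack’s 1.x-style API (the farm-haystack package); the documents, metadata, and reader model name are placeholders for the example, not part of any proprietary Insight Engine.

```python
# Sketch of a retriever-reader pipeline, assuming Haystack 1.x (pip install farm-haystack).
# Corpus contents and the reader model are illustrative placeholders.
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline

# Index a small corpus; in practice this would be the curated knowledge base.
document_store = InMemoryDocumentStore(use_bm25=True)
document_store.write_documents([
    {"content": "The 2023 strategy review identified supply-chain resilience as the top risk.",
     "meta": {"source": "strategy-review-2023"}},
    {"content": "Operating margin improved to 18% after the logistics overhaul.",
     "meta": {"source": "annual-report"}},
])

retriever = BM25Retriever(document_store=document_store)  # finds candidate passages
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")  # extracts the answer span

pipeline = ExtractiveQAPipeline(reader, retriever)
result = pipeline.run(
    query="What was identified as the top risk?",
    params={"Retriever": {"top_k": 3}, "Reader": {"top_k": 1}},
)
for answer in result["answers"]:
    print(answer.answer, "| source:", answer.meta.get("source"))
```

The retriever narrows a large corpus to a handful of candidate passages, and the reader extracts an answer span from those passages along with its source, which is what keeps the response traceable rather than made up.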
Nevertheless, the performance of this method is only as good as the knowledge base provided: garbage in, garbage out. So, for ai purposes, the narrow-intelligence function of AI fits our contextualized, relevance- and usefulness-seeking approach to information gathering, as opposed to the blunt function of predicting the next word or constructing sentences, i.e., ChatGPT and other chatbot-type systems, which are really auto-complete on steroids.
Therefore, the main objective for ai is to be highly topic-specific, contextually accurate, and geared towards gathering insights, which can then be used to build useful strategies for organizations. Our general-intelligence capacity, underpinned by our applied intelligence system and methodology, allows humans to perform their executive decision-making functions with confidence!
The Fifth Industrial Revolution requires a focus on synergistic collaboration rather than competition (the replacement of humans). The purpose of applied intelligence…is to offer a software solution platform that puts AI and data ecosystems to effective use as an industrial strategy-formation tool, integrating technological and human strengths within organizations into a powerful, productive, value-creating force. Developing holistic strategies that address growth and socially responsible agendas, and that meaningfully contribute to the advancement and well-being of humanity, is where applied intelligence is headed.
So your future success in the new digital-economy era comes down to a lesser dependency on technology and a greater dependence on the mind. Contrary to conventional wisdom, in the age of AI the mind will become more critical than ever before.
Your ability to enhance your human value and prosper in the advancing AI world, and to maximize your earning potential and growth trajectory, will ultimately come down to how well you can apply your intelligence.