The Problem With AI

Humanity & the Case for applied intelligence | ai

Perry C. Douglas
11 min read · Jun 26, 2023

Imagine Plato and Aristotle having a conversation. Plato begins: "Aristotle, can you think of a circle not as a perfect circle, but as merely appearing perfect in the abstract? Isn't our human intellect best used to dream up ways to achieve a perfect circle?" Aristotle listens intently and responds: "Plato, my good friend, on this I cannot agree with you. Circles naturally exist and function in the real world, and their imperfection serves many practical purposes. Hasn't the imperfect wheel served us well enough, wouldn't you say, Plato?"

Aristotle goes on to say that the pursuit of knowledge should not be about abstract perfection, or about the pursuit itself. Instead, our knowledge is best utilized when harnessed to our ingenuity in creating the tools that serve our humanity best, my good friend, Plato.

After Aristotle drops the mic, or more likely the clay tablet, his point stands: the natural limitations of our world and the laws of nature are supreme. Wasting time trying to circumvent nature is nonsensical; it serves only the ego and is not the most productive use of our intellect. Plato's thinking was philosophical, focused on the abstract and the utopian, a labyrinth in which the mind consumes itself solving infinite problems that are not problems in nature as nature exists, merely phenomena. Aristotle, by contrast, is purposeful, empirical, tangible, practical, and commonsensical, seeing how humanity can flourish through nature and reality, for the common good.

The new AI-based tools currently revolutionizing the way we interact with the universe, such as ChatGPT, Google's Bard, and Microsoft's Bing, are fundamentally altering our humanity. They take power through the game of appearances, because power knows that most people are lazy, do not engage in critical thinking, and never ask the most basic or obvious questions. Unchecked, AI will continue to take power and will represent a rapidly growing existential threat to humanity.

According to the globally acclaimed author and public intellectual Yuval Noah Harari, we have become too dependent on data-driven technologies, and this is increasingly harmful to humanity. It could result, he says, in 'data colonialism' and 'digital dictatorship,' where big data in the form of artificial intelligence and machine learning poses the risk of monopolistic rule: "Few groups, corporations and even governments can monopolize the immense power of data and AI to create extremely unequal societies."

At the heart of these AI tools are what are called large language models (LLMs), special types of deep learning models that are supposed to interpret natural human language. An LLM should be able to interpret natural language communications and provide accurate, human-like outputs. But it cannot. These models are trained on billions of pieces of data and designed to draw relationships between words in different contexts. Processing vast amounts of information allows the tool to regurgitate the combination of words most likely to appear next to each other in a specific given context.
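To make that concrete, here is a deliberately tiny sketch of the statistical core of "which word comes next." Real LLMs use transformer networks over billions of parameters and subword tokens, not whole-word counts; the toy corpus and function names below are invented purely for illustration.

```python
# A toy next-word predictor: count which word follows which in the training
# text, then emit the most frequent continuation. This is prediction from
# co-occurrence statistics, with no understanding anywhere in the loop.
from collections import Counter, defaultdict

corpus = "the wheel turns and the wheel serves us well".split()

# Tally, for each word, the words observed immediately after it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word` in the training text."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> "wheel": statistically likely, not understood
```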

Nevertheless, these LLMs have not been able to perfect authentic human language, yet scientists still pursue these abstract limits with Plato-like fascination. The theory of AI as a natural language interpreter, able to process information much as the human brain does, has not panned out. As Agatha Christie put it: "Everything must be taken into account. If the fact will not fit the theory — let the theory go."

Humans communicate words through feelings, vision, environment, biology, and more, all actively involved in decision-making. It is easy to get caught up in the hype of AI if you do not truly understand what cognition is. The human brain has about 100 trillion connections between billions of neurons; GPT-4, for example, has the equivalent of just one trillion connections. Without grasping that gap, you will not appreciate the incredible challenge artificial systems face in trying to replicate authentic human language naturally. What an LLM actually does is give the impression that it can interpret natural human language. The human brain, by contrast, runs on a vastly complex, experience-based system built on learning from nature, intellect, and belief systems intertwined with reality.

Humans do not perceive things as they are; rather, we are constantly inventing our world and correcting our mistakes by the microsecond. We experience the world and the self with, through, and because of our bodies. Accordingly, intelligence is not only in the mind; it is also in our physicality, in reactions based on our senses, on what we see, feel, and hear. This is common sense, contrary to the neural-network concept that treats intelligence as something residing only in the mind. Communication and understanding are therefore connected to both our subconscious and conscious existence, and both are critical to accurately processing what someone is saying. Human understanding is holistic and commonsensical; about 'machine comprehension' we know very little, including how, or even whether, a machine comprehends anything in a sense comparable to human comprehension. AI built on deep learning and LLMs rests on a fallacy!

So we must take a moment to clarify what AI is. First, consider natural intelligence of the kind humans or honey bees have: both brains appear to have neuroplasticity, meaning they are capable of learning, memory, and adapting, performing functions from various places in the brain. 'Intelligence' has many facets; it is not a one-dimensional variable, and in the case of artificial intelligence we are simply building some of those facets into machine computation. We can build machines that play chess by programming them to do so. The hallmark of human intelligence, however, is something like holding a conversation that is coherent, authentic, and meaningful, the application of human common sense.

However, with the advent of ChatGPT, machines have begun to essentially fake it, and the output evidence so far has been less than ideal, to say the least. It might fool a naive person, but not a well-informed critical thinker. The reality is that machines require a great deal of training to do intelligent things, and when we place an over-reliance on machines we put ourselves at the mercy of the data.

Therefore, we must dive into the details to understand what machines can and cannot do, and where they can provide the best technological value for humanity. Most importantly, we must understand how humans can stay in control of this raging technology.

Machines have 'narrow intelligence,' not the 'general intelligence' humans have. Narrow intelligence can be amazing at solving specific problems and tasks, doing things no human can do, and that's great! So let's continue to utilize technology in the practical ways Aristotle described. Trying to calculate the perfect circle in the abstract is misguided and is driving AI, and humanity, off a cliff.

As Noam Chomsky, 'the father of modern linguistics,' says, LLM language is an impossible language: synthetic, made up, not natural. Chomsky points out that many variables contribute to how language is communicated and interpreted: past experiences in time and space, emotion and state of mind, environment, and basic human interaction and relationships. All of these link back to general intelligence. Chomsky exposes the myths around artificial intelligence, the gap between what it can do and the hype that surrounds it, and how it is intellectually dishonest and dangerous to call machine language "natural."

ChatGPT is an example of an impossible deep-learning language, says Chomsky, one that violates all the rules of nature, grammar, culture, experience, and symbolism. Yet this, Chomsky points out, is the only way ChatGPT can perform: by creating its own rules, its own machine language. Where thoughts come from, and how they are generated and formed, is critical to how they are translated into words and subsequently interpreted. This is a uniquely human process, more sophisticated and complex than any programmed machine language can achieve.

Why do you think the so-called "Godfather of AI," Geoffrey Hinton, quit Google to "speak his mind" on artificial intelligence, saying that AI could destroy humanity? Because he realized that his abstract, Plato-driven neural-network theory (the method that trains AI to process data in a way "similar" to how the human brain and biology work) does not work as hoped. In a recent Globe and Mail article titled 'I hope I'm wrong': Why some experts see doom in AI (June 24, 2023), Hinton again airs his concern that AI systems may transcend human intelligence and become smarter than we are within about 20 years. That worries him, he says, because "You have to ask the question: How many cases do I know where something more intelligent is controlled by something less intelligent?"

Nevertheless, regardless of all his shaky pontification, the question still comes back to the fact that LLMs are not natural at all and operate on an impossible language. Answers, views, and perspectives depend on what questions you ask, how they are framed, and how you define intelligence. The issue is not whether AI is becoming more intelligent, as Hinton thinks it is, but that LLMs are developing and operating on their own, separate, impossible-language path, creating their own universe, one they control.

Two streams of intelligence are occurring: human general intelligence and narrow (LLM) intelligence. When all is said and done, this is what it comes down to. The forward-looking question to contemplate is which one will win out: authentic human language or synthetic machine language? The answer will be based on what we humans do next.

The Problem With AI

The problem with AI is that it can never be human. This is simply the nature of things, but the refusal to accept that reality feeds the continued construction of delusions about AI. This is fundamentally what can make AI dangerous to humanity: belief in deep learning theory has led many scientists to become unscientific, hubristic, and delusional.

Gary Marcus, a leading voice on AI and a scientist, best-selling author, and entrepreneur, anticipated many of the limitations AI is now experiencing. Prof. Marcus's research in human language development and cognitive neuroscience challenges the delusions and untruths about AI. Dave Ferrucci, an award-winning AI scientist who started and led the IBM Watson team, is aligned with Marcus on the limitations of AI. Both agree that deep learning is an incredible invention, great at pattern recognition, making predictions, and extrapolating outputs from data. What it lacks is the ability to understand why the data produces the answers it does. AI builds no understanding or reasoning into its answers, and no models of thoughtful, rational decision-making are embedded in LLMs.
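That gap can be shown in a few lines. A minimal sketch, using only NumPy and made-up numbers: the fit below extrapolates its pattern flawlessly, yet it contains no representation of why the pattern holds.

```python
# A model that predicts and extrapolates perfectly while "understanding"
# nothing: it captures the correlation, never the cause.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])  # made-up inputs, purely illustrative
y = np.array([2.0, 4.0, 6.0, 8.0])  # outputs that happen to follow y = 2x

slope, intercept = np.polyfit(x, y, deg=1)  # fit a straight line to the data
print(slope * 10 + intercept)  # -> 20.0: a correct extrapolation with zero insight
```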

For example, this can feed the gigantic societal problem of bias that plagues our humanity. Many continue to complain about the bias problem with AI without understanding that deep learning was designed "exactly to find the biases in the data and to mimic them." Machines are thus essentially being trained to exacerbate social conflict, discrimination, and inequality. What we humans really want from AI, however, is a "partner" to help us make rational, thoughtful decisions and explain those decisions rationally, so that the answers make sense to us. AI has yet to be able to do that.
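To see the mimicry concretely, consider a deliberately artificial sketch (the records and the 90/10 skew below are invented for illustration): a model fit to skewed data reproduces the skew, because matching the statistics of its training data is precisely what it is optimized to do.

```python
# A toy "model" trained on skewed records: it learns and repeats the skew.
from collections import Counter

# Imaginary training records pairing a job title with a pronoun, 90/10 skewed.
training = [("engineer", "he")] * 90 + [("engineer", "she")] * 10

counts = Counter(pronoun for _, pronoun in training)
predicted = counts.most_common(1)[0][0]

print(predicted)  # -> "he": the bias is in the data, so it is in the output
```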

These systems are "skilled mimics," according to Marcus, that make up data-driven "fluent language, but it's not in touch with reality." This ties back to Chomsky's impossible-language critique of deep learning, which trades in correlation rather than causation. But causal explanations are what intelligent humans want, so that they can take action with confidence.

In simple terms, what products like ChatGPT are doing is predicting: connecting the biases and putting together words that are not well sourced. Causation and reasoning are absent from machine learning, and that remains its paramount challenge. ChatGPT is like an advanced version of the autopredict you get when you text on your smartphone, or a souped-up search engine. The phrase "deep learning" sounds like a claim of theoretical and conceptual depth, but it really just means how many layers a neural network has, says Marcus. The level of comprehension we need is far outside the bounds of deep learning systems, so deep learning does not mean deep understanding!
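Marcus's point about the word "deep" fits in a few lines. In this sketch (random, untrained weights, illustrative only), making a network "deeper" literally means nothing more than increasing the layer count.

```python
# "Depth" in deep learning is just the number of stacked layers, not any
# depth of understanding. Each layer is a linear map plus a nonlinearity.
import numpy as np

rng = np.random.default_rng(seed=0)

def network(x: np.ndarray, n_layers: int) -> np.ndarray:
    """Push x through n_layers of random weights, with a ReLU after each."""
    for _ in range(n_layers):
        weights = rng.normal(size=(x.size, x.size))
        x = np.maximum(0.0, weights @ x)  # one layer
    return x

shallow = network(np.ones(4), n_layers=2)   # a "shallow" network
deep = network(np.ones(4), n_layers=50)     # a "deep" one: more layers, that is all
```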

The human brain works in part through statistical patterns in language, but it also runs rational, commonsensical models. The machine, by contrast, is tied to its database: a big, abstract collection of data and sequences that cannot execute rationally, because it does not exist in the natural world and has no sensory cognition interacting with nature. The deliberative process of deep learning is not grounded in any reality, in the perception of objects in space and time. Humans are so grounded, and their surroundings help give language its meaning.

Nevertheless, AI technology of course still plays a very significant role, and we should always want to use great technology as a tool for human advancement. However, we must equally understand that AI cannot do all the things many hallucinate about. It is not magic! So we need to think critically and stop being suckers for narrow-minded people with PhDs. Remember, scientists often operate in silos, boxed in by their own specialization, so it is not unusual for them to become delusional about the universe. When their main answer to perfecting the circle, to giving LLMs more human-like intelligence, is that 'with enough data' it can be done, you know the delusion meter is spiking.

applied intelligence | ai

What is needed is a practical, Aristotle-like solution that recognizes reality and nature and serves humanity best: one in which AI is not used primarily to predict correlations and statistical patterns from data, nor to form unreliable responses by making things up, and whose usefulness does not lie in helping students cheat on exams and in raising a stupider generation. What will serve humanity best is a technology solution that can provide authentic insights based on reliable sourcing of information, with causation, explanation, and reasoning as the common denominator of its outputs. This Insight Engine will not get trapped in the limitations of abstract deep-learning concepts and theories, led down rabbit holes to infinity. Instead, it will serve and deal with what is real, practical, and useful to real people living in the real world!

This solution’s focus will be on the utilization of effective tools for humans to build intelligent strategies for real-world applications. Therefore, by combining the value functions of artificial intelligence (AI) with the value of human applied intelligence (ai,) we can develop a robust, relentless and insightful technology solution that can be practically applied to solving complex socioeconomic and business problems. Which, by the way, was the original intent of AI.

So ai solves the gap problem by intrinsically interweaving human intelligence with the best of artificial intelligence to maximize innovation and value creation. It fills the efficiency gap between the value provided by big data and the human executive decision-making capacity needed to make well-informed strategy decisions.

The ai process guides us through the six steps to applied intelligence, a disciplined proprietary process from DouglasBlackwell Inc. that retrieves relevant information from reputable, reliable sources and turns it into valuable insights. At each step in the process, clients respond to customized prompts that generate more insightful information, vetted for usefulness in critical real-world applications. These insights are then relentlessly iterated to produce ever higher dimensions of clarity, useful insights, and strategy options out of the whitespace. Spin-off strategy options sprout organically from primary insights, polishing things down to highly useful and relevant information and providing more creative options and opportunities for humans to make innovative growth decisions.

Therefore, applied intelligence | ai keeps the universe firmly in human control and ensures that AI tools are not simply reflections of the biased data feeds in deep learning. Further, ai allows us to create a world based on what we want to design, not one controlled by 'data dictatorships,' biases, and stereotyping. Artificial intelligence cannot represent our human desires (our human nature, shared human values, judgments, experience-based decision-making, and so on) because the system was never built for that; being mindfully aware of that fact allows us to free ourselves from the negative legacy of AI engineering by the few.

AI is really good with data and with specific, process-driven tasks, but we can choose how we use the tool. Remember, however: AI is not magic. It cannot authentically connect to the human values that we often express explicitly in language. Further, throwing ever more data at its limitations is just delusional! We must call bullshit when we begin to sniff it out.

AI is Plato and ai is Aristotle.



Written by Perry C. Douglas

Perry is an entrepreneur and author. His new book, "ai - applied intelligence - A Renaissance in New Thinking...", can be found on Amazon.
