The Future is ai…

Beyond the Fascination with a Magic Bullet

Perry C. Douglas
8 min read · Jul 5, 2023

Artificial intelligence (AI) has been filled with promises over the years that have not come to pass. In November 2016, for example, Geoffrey Hinton, the so-called "godfather of AI" (who recently made headlines by leaving Google and warning that AI could destroy humanity), declared: "It's just completely obvious that within five years deep learning is going to do better than radiologists." Since then, more than 400 AI radiology startups have come and gone, virtually all failing, and not a single radiologist has lost a job to AI. In fact, there is now a shortage of radiologists, in part because prospective ones were scared off by all the talk of AI replacing them. So the truth about Hinton leaving Google is not morality or virtuous concern for humanity; it's that his grand neural network theory is plainly not coming into reality, and may well be headed in the wrong direction. Amazingly, in a 2020 interview Hinton went further, saying that "deep learning is going to be able to do everything." Call it arrogance, hubris, or plain constructed hallucination, but I would not put much stock in any more predictions from Geoffrey Hinton.

Elon Musk, likewise, predicted that by mid-2020 Tesla's autonomous systems would have improved so much that drivers would not have to pay attention to the road. So far, a Tesla can't reliably cross an intersection without incident, much less drive itself with confidence. AI has not delivered on the hype, and the list of people making AI claims that never came to pass keeps growing.

The big AI hype machine has been mainly about "deep learning" as the holy grail: the be-all and end-all, the single grand solution that does everything. Research in AI, deep learning, and machine learning has largely emphasized Hinton's neural network theories, which try to copy the biology of human intelligence. Trying to replicate human general-purpose learning, however, has not worked out. These scientists are nonetheless pushed by the venture capital community to pump out startups destined to fail, playing a disingenuous valuation game to make money off hot trends rather than to build useful businesses that serve humanity. The strategy has been to continuously add more: ever-larger training data sets and ever more layers, in the belief that scale will perfect deep learning into something like general intelligence, and, of course, that deep learning will eventually surpass human intelligence.

(General intelligence refers to a broad set of cognitive abilities. These skills enable people to learn new things and solve problems; this broad mental capacity underpins specialized talents in spatial, mathematical, mechanical, and linguistic domains.) General intelligence is human intelligence.

Deep learning's champions have claimed it can extrapolate abstract knowledge. But it lacks a way to represent causal relationships (between diseases and their symptoms, for example), to acquire abstract ideas, or to make logical inferences; it is a long way from integrating abstract knowledge at all, such as what objects are, what they are for, and how they are typically used.
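To make the extrapolation point concrete, here is a minimal sketch (illustrative only, using scikit-learn; it is not from the article or from Hinton's work): a small neural network trained on the identity function f(x) = x over [0, 1] fits it perfectly in range, yet cannot apply the abstract rule "output equals input" even a short distance outside that range.

```python
# A small neural network fits f(x) = x inside its training range
# but fails to extrapolate the same abstract rule outside it.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Train on the identity function over [0, 1] only.
X_train = rng.uniform(0.0, 1.0, size=(1000, 1))
y_train = X_train.ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                     max_iter=5000, random_state=0)
model.fit(X_train, y_train)

# Inside the training range the fit is near-perfect...
print(model.predict([[0.5]]))   # ~0.5

# ...but far outside it, the saturated network cannot apply the rule
# "output equals input" that a person abstracts immediately.
print(model.predict([[10.0]]))  # nowhere near 10
```

A person shown a handful of (x, x) pairs grasps the rule and applies it anywhere; the network only interpolates over the region it was fed.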

Prof. Gary Marcus, founder and CEO of Robust AI, is a well-known machine learning scientist and entrepreneur who has been sharply critical of the many bogus claims made for deep learning. He is the author of several books on cognitive science and AI and is Professor Emeritus at New York University. Marcus's focus is cognitive science and human reasoning; he received his Ph.D. from the Massachusetts Institute of Technology in 1993, where his advisor was the famous experimental psychologist Steven Pinker.

Marcus has been puncturing the myths about AI by cataloguing its limitations, many of which surround deep learning. Deep learning, he argues, is shallow, with limited capacity for transfer; it has no natural way to deal with hierarchical structure and struggles with open-ended inference. It is not transparent about its answers, providing neither sources nor explanations, so it cannot be trusted. It is not well integrated with prior knowledge, and so far it cannot inherently distinguish causation from correlation. It presumes a largely stable world, which, as we know, does not exist in reality, and it cannot compute the unpredictable nature of people, places, and things.
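One limitation on that list, the inability to tell correlation from causation, is easy to demonstrate. In the sketch below (my illustration, not Marcus's code; the data and the scanner analogy are invented), a linear classifier latches onto a spurious feature that happens to track the label during training, and its accuracy collapses once that coincidence breaks.

```python
# A pattern-matcher happily learns a spurious correlation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# The causal feature only weakly predicts the label...
causal = rng.normal(size=n)
label = (causal + rng.normal(scale=1.5, size=n) > 0).astype(int)

# ...while a spurious feature copies the label 95% of the time in
# training (think: hospital scanner type correlating with diagnosis).
spurious_train = np.where(rng.random(n) < 0.95, label, 1 - label)
X_train = np.column_stack([causal, spurious_train])

clf = LogisticRegression().fit(X_train, label)

# At test time the coincidence is gone: the spurious feature is noise.
spurious_test = rng.integers(0, 2, size=n)
X_test = np.column_stack([causal, spurious_test])
print("train accuracy:", clf.score(X_train, label))  # high: rides the shortcut
print("test accuracy:", clf.score(X_test, label))    # drops sharply toward chance
```

The model has no notion that the scanner doesn't cause the disease; it only knows the two co-occurred in its training set.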

So Big Data + Artificial Intelligence does not equal General Intelligence!

Consider GPT-1 through GPT-4: each version mainly layers more data on top of its predecessor. Like the iPhone, these are updated versions of the same technology; sure, the operating system improves, but it is still only an improved version of the same thing. More data does not bring artificial intelligence closer to general intelligence. So, for all its ever-growing training corpus, ChatGPT produces fluent, grammatical utterances without understanding them; ChatGPT, according to Marcus, does not "know what it's talking about."

AI is certainly useful for pattern recognition in certain domains, and the brute force of data is undeniable in many situations that do not require general intelligence or authentic knowledge. The struggle with AI is over what it doesn't know and can't learn: the unknown, which general intelligence handles and AI cannot. Humans can efficiently apply their vast intelligence network to deal with problems and communicate effectively. Understanding one another at a deep level, through human language, is something deep learning simply cannot do.

There is no one way the mind works because the mind is not one thing. Instead, the mind has parts, and the different parts of the mind operate in different ways: Seeing a colour works differently than planning a vacation, which works differently than understanding a sentence, moving a limb, remembering a fact, or feeling an emotion.

— Chaz Firestone & Brian Scholl

The quote above illustrates the vast difference between general intelligence and computer intelligence: there are two separate languages at play here. The human mind is an experiential learning machine; deep learning is a machine fed on, and reliant on, data alone. You simply cannot square the two language systems. AI is undeniably an effective tool, great for pattern recognition (facial recognition, for instance) and more, but it is illogical to conclude that because it shows intelligence in one domain it is suited to everything. The assumption that because it is good at X it will also be good at Y, made without even analyzing the characteristics of Y, is neither scientific nor objective. It is nonsensical! Yet this is the kind of constructed delusion Hinton and other AI scientists have engaged in within their research silos. And ignoring nature altogether is, in effect, not doing science.

[Pie chart: key aspects of cognition. DouglasBlackwell Images]

There is no robust, dynamic perception in deep learning. "Deep" refers only to how many layers the neural network has; it does not mean there is any conceptual depth in the network itself. And you cannot get around the open-ended variables of general intelligence. Deep learning has proven good at some pieces of perception, but it remains inferior to general-intelligence perception. The key aspects of cognition shown in the pie chart above are critical variables in the equation. General intelligence can go beyond the particular instances presented to it; it can do abstraction.

Again, AI can’t do everything, so get over it!

In contrast, what Douglas Blackwell Inc. proposes for the future is a practical, useful, non-delusional solution called applied intelligence (ai): a hybrid, knowledge-driven system built on sourcing, understanding, and explaining its outputs, so that genuinely useful insights can be gathered and acted on with confidence. It is a reasoning-based approach, underpinned by information and centred on cognitive insight models that provide relevant, robust outputs, which can then be used to develop specific strategies for the problems one is trying to solve. Strategy is the basis for all successful endeavours in the universe! Without a good strategy, you are left in rudderless navigation of the rough seas of life. So ai is not trying to do everything, just what is relevant, required, and most useful to solving the complex problems put into the system. It is not preoccupied with impossible Plato-like perfection, which is a distraction; ai concerns itself only with what is real and most critical to the person or organization, relative to its strategy agenda and pursuit of success.
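The article does not prescribe an implementation, but purely as a hypothetical sketch, a hybrid of this kind might pair a narrow statistical pattern-finder with a human-curated knowledge layer that attaches a source and a rationale to every output it emits. Every name below (Insight, KNOWLEDGE_BASE, insight_engine) is invented for illustration.

```python
# Hypothetical sketch of a hybrid "insight engine": statistical pattern
# detection plus an explicit knowledge layer that attaches a source and a
# human-readable rationale to every output. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Insight:
    claim: str
    source: str      # where the supporting knowledge came from
    rationale: str   # why the claim follows: an explanation, not just a score

# Human-curated, citable domain knowledge (the "applied" layer).
KNOWLEDGE_BASE = {
    "rising_churn": Insight(
        claim="Customer churn risk is elevated",
        source="Q2 retention report, section 3",
        rationale="Churn above 5% has historically preceded revenue decline",
    ),
}

def detect_patterns(monthly_churn: list[float]) -> list[str]:
    """Narrow statistical component: flags patterns, claims no understanding."""
    return ["rising_churn"] if monthly_churn[-1] > 0.05 else []

def insight_engine(monthly_churn: list[float]) -> list[Insight]:
    """Emit only insights the knowledge layer can source and explain."""
    return [KNOWLEDGE_BASE[p] for p in detect_patterns(monthly_churn)
            if p in KNOWLEDGE_BASE]

for insight in insight_engine([0.03, 0.04, 0.06]):
    print(f"{insight.claim}\n  source: {insight.source}\n  why: {insight.rationale}")
```

The design point is the division of labour: the statistical component only flags patterns, while every claim that reaches the user carries a source and a rationale a human can check.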

As a practical, Aristotle-like solution system and tool, ai provides authentic insights based on reliable sourcing of information, and it relies on human participation for general-intelligence competency. Causation, reasoning, and analogy, plain common sense at the end of the day, are essential to knowledge building. Don't be fooled: doing alchemy is not advancement if it isn't working or showing some potential to create value. Undoubtedly, we miss a great deal when we apply only narrow artificial intelligence; many significant insights go undiscovered because AI was never built for them.

What we need, therefore, is an Insight Engine, not a fixation on delusion and limitation, somehow believing that chasing the impossible will provide value to humanity. That's ridiculous! We want to take what is most useful from AI technology ecosystems, apply our own intelligence, and build effective strategy solutions to advance, innovate, and create, successfully. By combining the value functions of artificial intelligence (AI) with those of applied intelligence (ai), we can have bold, practical, insightful technology that solves our efficiency-gap problem in the digital age.

Machine learning alone is not a magic bullet, and neither is applied intelligence. But a hybrid solution that draws from both is the most commonsensical and virtuous pursuit available to us.

As discussed in my previous two articles, The Problem With AI, Part I & Part II, the pursuit of knowledge should not be about abstract perfection, or about the pursuit itself. Our knowledge is best utilized when harnessed to our ingenuity to create the tools that serve our humanity best.

To live, we must create economic value for ourselves; that is the reality of civilization, and the more efficiently we do it, the better our lives will be. The strategies and tools we create help tremendously with that. Our very existence as societies comes down to securing our common future, and that requires creating economic value. Wasting time on philosophical, Plato-like pursuits of AI perfection does not help humanity; evidently, it is harming it.

There is no single, all-encompassing grand solution, no magic bullet. All human advancement requires a combination of things happening and working together. The world is not static; it is complex, unpredictable, and dynamic. And so ai brings together the best of technology and artificial intelligence and intrinsically interweaves them with applied intelligence | ai. Robust, relevant, and highly useful in real-world problem-solving around strategy development, ai is a hybrid utility model that serves humanity in the most practical of pursuits: living!



Written by Perry C. Douglas

Perry is an entrepreneur & author, founder & CEO of Douglas Blackwell Inc., and 6ai Technologies Inc., focused on redefining strategy in the age of AI.
