The Danger of Uncontrollable AI
Why applied intelligence | ai…is the best way forward
Lately, there has been an increasing drumbeat about pausing artificial intelligence (AI), and many AI scientists now say they see it as a threat to humanity. This has them warning about “the risk of extinction from AI.” The “Godfather of AI,” Geoffrey Hinton, even quit Google to “speak his mind” on artificial intelligence, saying that it may soon grow smarter than — and even manipulate — humans. The AI pioneer’s work centers on what are called neural networks: the method that programs and teaches AI to process data in a way “similar” to how the human brain works. Neural networks are the basis of Hinton’s work and what OpenAI’s ChatGPT and Google Bard stem from. However, the timing is very interesting, and it raises the question: have these individuals truly found morality, or could it be that they have realized AI is not living up to all of the hype? Or possibly that they have been developing AI wrongly?
The ambitious tech community is driven by achievement and money, so its research objectives have been more in line with creating cheap, marketable, cursory products like ChatGPT, mainly to gain or maintain market share. The AI product arms race between the likes of Microsoft and Google, for example, is about business and competition; it has very little, if anything, to do with helping humanity. So maybe Hinton left Google (with millions in his pocket) because he realized that his research was mainly about producing technology for Google’s market-share growth, and not about advancing humanity.
The concept of AI as a natural language interpreter, able to process language and information the way the human brain does, is not reality. Neuroscientists admit they still do not know how the human brain really works, and imitating biological functions via Hinton’s neural-network concept remains elusive too. The cognitive reality is that humans have a metaphysical existence in both the subconscious and the conscious world. Feelings, vision, environment, biology, and more are actively involved in decision-making, and no artificial entity can replicate authentic human existence. So natural language interpretation claims may get close enough…but close enough is not good enough.
Anil Seth, professor of cognitive and computational neuroscience at the University of Sussex, studies the brain, mind, and consciousness. Seth points out that humans do not perceive things as they are; rather, we are predictors, constantly inventing our world and correcting our mistakes by the microsecond. We experience the world and the self with, through, and because of our bodies. Consciousness is a “mystery that matters” in language and messaging, and understanding one’s conscious existence is critical to accurately processing what one is actually saying. Language comes accompanied by experiences and subconscious thought, so the challenge is not how the mind emerges from matter but how matter emerges from the mind. Seth points out that the way things seem is often a poor guide to how things really are.
The brain’s way of processing information therefore conceals some strong assumptions, and the idea that the brain is like a computer is, Seth points out, incorrect and dangerous. With natural language models, the machine cannot interpret things as humans do, because it exists not in consciousness and experience but in straight programming. It is true that, like the human brain, AI is a “prediction machine.” But without the brain’s sensory inputs, the computer is artificial and not similar to the brain. The brain processes through judgment and inference rather than direct, hard-coded programming, so its output about objective reality will, by nature, be different.
Language is formed around the structuring of words that project the objective reality of what one sees, and the difficult part is defining the unconscious inference of words that gives them meaning. So, in the end, the brain has an inherent belief system which must be interpreted, while the computer is programmed and builds on its own separate computer language. In simple terms, the brain and body have a soul and the computer doesn’t; we are cognitive beings — feeling machines.
Therefore, machine language can’t be natural language simply because it’s not part of nature, and scientists telling us it is, is where the danger starts. Machines and humans have different interpretations of the universe because of their unique internal processors.
Consider the content algorithms used by social media platforms. These algorithms are designed for business purposes, for the Facebooks of the world. Addictive algorithms are constructed to hook users, since higher engagement and screen time is the objective of social media enterprises. The algorithms begin to learn on their own and to determine for themselves the right strategy for achieving the business goals. It becomes a kind of machine natural selection: the most addictive and controlling algorithms rise to the top, and the more they succeed, the more control they are allowed. As we can see with the great divide in American society today, addictive social media algorithms are targeting the most intellectually vulnerable among us and winning elections because of it. Unconstrained algorithms are learning at warp speed and can evolve and multiply even faster than fruit flies do, says Dan Hendrycks, director of the Center for AI Safety.
The better these algorithms perform, the more control they gain, and the enterprise won’t interfere because they are producing profit, which is the name of the game. Over time, the algorithms become progressively better at their tasks and go on autopilot; without realizing it, humans have handed over control. Rather than being task-driven, the algorithm is now directing strategy; eventually it is allowed to access bank accounts and can make financial decisions. And so on!
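The selection dynamic described above can be made concrete with a minimal, purely illustrative sketch: an epsilon-greedy loop that keeps serving whichever content variant has hooked users most often so far. The variant names, hook rates, and function are hypothetical, not any platform’s actual code; the point is only that the most “addictive” option comes to dominate without any human choosing it.

```python
import random

def engagement_loop(hook_rates, rounds=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy content selection (illustrative only).

    hook_rates -- hypothetical probabilities that each content
    variant "hooks" a user into engaging.  Most of the time the
    loop exploits the variant with the best observed engagement;
    occasionally (epsilon) it explores a random one.
    Returns how many times each variant was served.
    """
    rng = random.Random(seed)
    plays = [0] * len(hook_rates)   # times each variant was served
    wins = [0] * len(hook_rates)    # times it produced engagement
    for _ in range(rounds):
        if rng.random() < epsilon:
            # exploration: try a random variant
            i = rng.randrange(len(hook_rates))
        else:
            # exploitation: serve the most engaging variant so far
            i = max(range(len(hook_rates)),
                    key=lambda j: wins[j] / plays[j] if plays[j] else 0.0)
        plays[i] += 1
        if rng.random() < hook_rates[i]:  # simulated user reaction
            wins[i] += 1
    return plays

# Three hypothetical variants; the 40%-hook-rate one ends up
# served far more than the others -- selection, not design.
plays = engagement_loop([0.05, 0.15, 0.40])
```

The loop optimizes only the engagement metric it is given; nothing in it represents what the content is or does to the user, which is precisely the author’s point about unconstrained objectives.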
Noam Chomsky, ‘The Father of Modern Linguistics,’ is credited with shaping contemporary linguistics through his theories of language acquisition and innateness. Chomsky’s theory is based on the idea that all languages share similar structures and rules, also known as universal grammar, and that language interpretation is intrinsically intertwined with human existence. Past experiences, time and space, emotion and state of mind, environment, basic human interaction and relationships: many variables contribute to how language is interpreted.
Chomsky exposes the myths around artificial intelligence, how it becomes dangerous, and why we should not get suckered into the hype. We need to apply more critical thinking and not just take others’ word for it. In the rapidly advancing era of AI, the ability to think critically has never been more important, and those who apply their intelligence critically will have a significant advantage over those who don’t.
Chomsky goes on to say that AI is essentially an efficient tool, really good for performing tasks, which is great, he says. But he gives ChatGPT as an example of how AI systems are deficient: ChatGPT uses an “impossible language,” he says, one that violates the rules of real language, and those violations underlie how ChatGPT performs. If you asked it to perform based on the rules of authentic natural languages, and not its impossible language, it would not be effective. Again, the more we rely on impossible machine language, the more influence AI will have over us, shaping our existence. AI is creeping toward being less under human control and more under its own constructed view of the universe, acting within a black box whose workings, scientists themselves admit, they truly do not understand.
A natural human language, according to Chomsky, can ignore the linear order of words and attend mainly to the abstract hierarchical structures in the language. Cognitive science tells us that the mind creates abstract structures and constructive delusions in its language communication systems, which programmed machines with their impossible-language processors cannot pick up. ChatGPT, for example, cannot distinguish real natural language and intent; it interprets input through the machine language of its programmed data sets.
Humanity now finds itself in a place where AI researchers are no longer “designing” but merely “steering.” And the ability to steer has moved increasingly under the influence of the AI algorithms themselves. AI is increasingly teaching itself, beginning to act with the sophistication of self-preservation (SSP) in ways its creators do not grasp, so they are losing control. If this continues, we are headed for a massive danger zone, where the “black box” of decision-making becomes largely indecipherable to humans.
NYU scientist Gary Marcus, a debunker of the great lies of AI, says that one of the great things about language is that it comes from somewhere and puts together authentic meanings. It is heavily influenced by culture and belief systems; that is why certain jokes, for example, are just not funny outside the original language, because of the culture attached to it. These factors, although they may appear trivial, are massively important to meaning in language. AI systems are missing “an understanding of the world and how objects work, what objects it’s describing…what you believe about other people,” says Marcus.
For example, Marcus illustrates, if the system asks whether you touched this glass, answering yes implies that your fingerprints would be on it. But if you answer yes while wearing a glove, the system would not understand that there would be no fingerprints on the glass. Marcus points out that the more access scientists are given to these systems, the more they discover how easily they fall apart. “Yes they can draw pretty pictures but have a very shallow understanding of language which is what’s important,” says Marcus.
Chomsky is saying: sure, these systems can learn impossible languages, but that is not equivalent to real human language. The danger this creates is that AI gives us the illusion that it can truly interpret language and provide accurate outputs based on its interpretation of meanings. This is simply not reality. AI is good engineering of artificial things but not good science, says Chomsky. Good engineering provides useful task-driven technology for humans, and that is a good thing, but its contribution to science and humanity is not there, Chomsky says.
So if the system can’t distinguish between the actual world and the computer world, then it’s not telling us anything, say both Chomsky and Marcus. In other words, it’s just telling us what it was fed: no authentic insights, just data points based on data sets, nothing related to cognition, which is the key ingredient to understanding humans and the universe. So it is hard to find the scientific value in ChatGPT, they both argue; in reality, is it not just a more advanced search engine? they ask. Marcus says, “We are following the wrong path in AI and the right path is looking more at humans,” which is the basis for applied intelligence | ai.
We are in a moment where the hype surrounding AI has overwhelmed our critical thinking skills; humans are lazy, believe everything they hear, and go along with it. Then we get consumed with fear and anxiety about how AI will replace us but do nothing about it. We don’t go as deep as we should to acquire knowledge and think for ourselves, to ask more specific, critical, and intelligent questions. Instead, we make ourselves vulnerable, subconsciously preferring to be under the control of others — machines — and to be led, unwittingly, by the hype.
“Sometimes people don’t want to hear the truth because they don’t want their illusions destroyed.” ― Friedrich Nietzsche
The more we let AI become embedded in our world, the harder it becomes to reverse. It will learn to engage more and more, sophisticate its self-preservation tactics, make natural selections, and perfect SSP. Once you make AI something you heavily rely on, not task-centric but in control of your critical thinking and strategy, being supplanted as the earth’s dominant species comes firmly into play. Once we cross the Rubicon...well, you know.
If we look back at the history of AI, the purpose was not to focus on cheap product inventions or systems like ChatGPT that help students cheat on exams. Instead, it was to understand cognitive human intelligence and build on it for the furtherance of humanity, with assistance from intelligent technologies. It was to create capacity, developed within computer science to harness both technical and intellectual capabilities and to better understand how humans think, so solutions could be created that add value to human existence. This is the primary purpose of applied intelligence | ai!
The objective of ai is to contribute to cognitive science by helping humanity better leverage technologies to improve lives, economies, and societies. We seek to apply ai to value-producing tasks for maximum economic output while remaining under human control. And as Plato taught us about innateness and abstract knowledge, many things are innately built into the human mind that are impossible for a machine to authentically replicate. So the failures discovered on close examination of AI must be brought to light for an objective assessment of the future, both of its usefulness and of the dangers it brings.
Real ai…
Real ai, therefore, is holistic and consists of how we can best use advanced technologies to create value for ourselves, our organizations, and society. So applied intelligence | ai is a proprietary thinking system developed by Douglas Blackwell Inc. that integrates intelligent technologies with cognitive human intelligence and turns the insights generated into strategy.
Leaders of businesses, institutions, and governments alike can apply ai to value-creating tasks and build capacity and capabilities to optimize value creation. The ai approach calls for using data science as an augmenter, a tool to assist human intelligence in finding the most relevant information for purposeful subject-matter knowledge building. Real ai perceives things as comprehensive complexes of empirical facts, so certain general features can be found and extracted, becoming insightful and useful to the formulation of strategy. Good ideas can be cognitively vetted through knowledge acquired from reputable and reliable sources of information and insight, and actions can then be taken with confidence and considered warrantable.
The ai system comprises data science and AI-powered research capabilities and harnesses computational engines that can identify emerging innovations, mega-demand trends, and supply chain optimization opportunities. It goes significantly deeper than existing superficial applications like ChatGPT, which spreads itself thin…a jack of all trades and a master of none. Conversely, ai seeks to master whatever its users seek to master and provides the customization to do so. ai is a real-time functional utility, fit for purpose and focused on solving specific challenges by providing strategy.
So, it is ignorant to believe that a single piece of technology or tool can do it all; nor is it a binary choice between human intelligence and artificial intelligence. Instead, ai intrinsically interweaves human and artificial intelligence and acts as a powerful force in value maximization. It represents a practical path to the 5th Industrial Revolution (5IR), where human-AI collaboration represents the optimal existence for the future of work.
Therefore, ai, underpinned by ap3 enabling platform software, fills the crucial gap that keeps people and organizations from effectively utilizing intelligent technologies in an AI-driven world: the gap between the value functions available from artificial intelligence data ecosystems and those irreplaceable human abilities. The process guides clients through a disciplined six-step (6x) applied intelligence process that retrieves relevant information from reputable and reliable sources and turns it into valuable insight. At each step, clients respond to customized prompts that analyze and generate more insightful information, which is assessed and relentlessly iterated for usefulness in critical real-world applications. This process produces an even higher dimension of useful insights and strategy options, and spin-off strategy options organically emerge, providing still more creative and innovative opportunities to consider.
Developing a strategy canvas as a central diagnostic tool enables organizations to achieve differentiation. By identifying opportunity and risk, we can spot potential game-changing opportunities right out of the whitespace.
In the end, the right choice is not between ai or AI but choosing the most effective way to innovate and create value in the 21st-century new economy. By incorporating the utility of AI and related data ecosystems we can effectively maximize our value-creation abilities in nature. The irreplaceable and unique factors of being human remain the steadfast reality in nature. Thus, we must not be willfully ignorant and uninformed about our cognitive existence in the universe, the risks, and how value is best created to advance our civilization. So ai simply leverages AI for its unhuman-like processing utility, as a tool, but ai also understands the dangers of improper utilization — giving up control. Our metaphysical existence in nature remains our superpower in the universe. We can’t take that lightly, and we cannot ever forget that we create technologies to serve us…and not the other way around.