Headed for the Dumbest Generation?
Legendary Primatologist Jane Goodall on AI and Humanity
About a decade ago, the legendary primatologist Jane Goodall appeared in a documentary titled Surviving Progress. The documentary examined what progress actually means and whether some forms of progress are harmful. That is, how helpful is progress to mankind? It presented a convincing argument that technology, if not used appropriately and in moderation, can lead to the deterioration of civilization.
The film focused on ‘progress traps’: instances where human knowledge and ingenuity, coupled with belief systems, drove the development and use of technologies in ways that appeared to solve problems in the short term but ended up creating significantly more adverse conditions and longer-term problems for society to deal with. In other words, could technological progress be counterproductive to humanity?
One example, presented as a dramatic reenactment in the film, is the prehistoric hunt for woolly mammoths. It was progress when our ancestors figured out how to hunt more efficiently, killing two beasts at a time instead of one. However, when they figured out how to slay the whole herd by driving it off a cliff, they killed off their entire future food supply and put their survival in jeopardy. That’s a progress trap.
Today, big tech may be driving us off a similar cliff, pushing for what it calls technological efficiency and creating products that essentially replace humans in the workplace. Never before in human history has there been such a push for a technology geared to replace humans. So what are they going to do with all those superfluous people?
Generative AI today is not known for its efficiency or for adding tremendous value to enterprises’ bottom lines. Instead, much of what GenAI is known for is deepfake porn, digital scams, and the circumvention of democracy and the destruction of democratic institutions. This is not to say that GenAI should not be pursued or that it lacks great potential to serve humanity; it has plenty. But it comes down to whom we let dominate its development and how we choose to use the technology.
Nobel laureate Maria Ressa recently said in an interview that she has lived through the dangers of social media and now fears that AI might be worse.
“I’ve said this so many times, without facts you can’t have truth, without truth you can’t have trust, and without trust, you can’t have democracy.”
She goes on: “Fifteen years ago, there was a similar wave of optimism around social media, with promises of connecting the world, catalyzing social movements and spurring innovation. While it may have delivered on some of these promises, it also made us lonelier, angrier and occasionally detached from reality.”
AI safety is a big concern for humanity, but big tech seems unconcerned. A November 1st, 2023 research article by The Washington Post, titled How AI is crafting a world where our worst stereotypes are realized, demonstrates how AI-generated images can paint a picture of a world that amplifies bias in gender, race, and beyond. The data that produce these toxic images are fundamentally fueled by Big Tech and the New Tech Aristocracy, a group that rules the universe and doesn’t care about the harm it produces: AI-generated stereotyping, inaccuracies, made-up claims, and blatant lies.
Jane Goodall, speaking at the Starmus Festival conference organized by the astrophysicist Garik Israelian and the legendary Brian May, who holds a PhD in astrophysics, was a breath of fresh air and wisdom. Among other topics, including climate change, politics, and hope, Goodall discussed how citizens have a responsibility to engage in AI policy and how AI poses a threat to nature and the environment.
She was worried about the consequences of bad AI actors, the already-opened Pandora’s box of LLMs, and the blind and ignorant pursuit of AGI. She also warned against allowing big corporations to dominate our lives and against letting them make the important decisions about the direction of AI.
She topped it off by raising concerns about whether students who rely too heavily on chatbots will learn much or be useful to society in the future. If we do away with critical thinking, learning through research, and figuring things out for ourselves, how can there be innovation and creativity when everyone is using the same programmed information ecosystems to solve problems?
So the pursuit of abstract AI becomes the problem. Goodall was too diplomatic and too kind during her talk to be brutally honest with the audience, but I am not: I surmise that she was saying we are headed for the dumbest generation.
“If you look at the new round, these are similar companies. They’re the same companies in some cases. If you look at large language models and generative AI and look at the idea, which is speculative in nature. They make large assumptions without any evidence,” says Maria Ressa.
LLMs are not smart; they are programmed by people, so the technology’s outputs just reflect and reinforce existing biases in society. So stupid is as stupid does, as my good friend Forrest would say.
And as German philosopher Friedrich Nietzsche once put it: “Whoever fights monsters should see to it that in the process he does not become a monster. And if you gaze long enough into an abyss, the abyss will gaze back into you.” So you become what you choose to consume.
Meanwhile, democratic governments are not exercising their power to regulate big tech, yielding to big campaign contributions instead. Even the universities have turned into corporations, taking billionaire corporate money from the likes of Facebook, Microsoft, and many more.
And the up-and-comers are not producing technologies that seem helpful to humanity, but more like anything you can slap GenAI onto. Ressa points to an AI startup called Replika, which offers you a constant companion. She says that if the first generation of AI and social media weaponized our fear, anger, and hate, Replika will surely serve to weaponize our loneliness. People don’t know what is real or not these days, and what you don’t know is often what can harm you the most.
Goodall says that if such dangerous developments and behaviour persist and humankind continues to exploit the Earth’s resources, with the over-consumption of energy for massive AI server farms, for example, we will exacerbate our own demise. She is so right. Just the other day, OpenAI/Microsoft’s boy wonder, Sam Altman, began asking the investment world for 7 trillion dollars to build out his vision of what AI and the world should look like.
And in a recent talk at Stanford, a grandiose Altman sold a big vision and promise while not addressing inconvenient truths. He said without qualification that “we’re making AGI” and that he would burn $50 billion a year to do it, downplaying why his current approaches are not getting us any closer to AGI.
Sharon Goldman, in a VentureBeat article titled ‘Sam Altman wants up to $7 trillion for AI chips. The natural resources required would be mind-boggling,’ openly questioned Altman’s grandiosity and hubristic behaviour, particularly what it would do to our planet.
Fortune reported in September 2023 that AI tools fueled a 34% spike in Microsoft’s water consumption; Meta’s Llama 2 model reportedly guzzled twice as much water as Llama 1; and a 2023 study found that training OpenAI’s GPT-3 consumed 700,000 litres of water. OpenAI has never disclosed its own numbers, but they will surely be mind-blowing. So we can’t bury our heads in the sand about the potential costs and destruction to the planet.
And it is not surprising, either, that there is already pressure from generative AI industry players to keep coal plants running. Calling for us to go backward in the name of technological advancement is ridiculous and counterproductive.
Therefore, it’s naive to believe that technology alone can be a good gauge of human progress. The evolution, and indeed the survival, of our species may depend most on the decisions we make about the technology we use.
“A thousand years of history and contemporary evidence make one thing clear: progress depends on the choices we make about technology. New ways of organizing production and communication can either serve the narrow interests of an elite or become the foundation for widespread prosperity.”
— Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, by MIT professors Daron Acemoglu and Simon Johnson
Goodall’s position on evolution is well known and well supported by other experts in the field: human thinking is still very similar to ape thinking, and humans react instinctively in much the same way apes do. The message is that technologies might change, but human nature remains the same.
The evolution of the human brain and its social instincts has lagged behind the development of its technologies, creating dangerous situations throughout time that might be characterized, as Goodall puts it, as the equivalent of “giving a child of three a loaded gun.”
Reconsidering Progress
Surviving Progress is a documentary with many thoughtful arguments that should stimulate your mind, lead you to think more deeply about the relationship between technology and progress, and even make you reconsider what you know, or what you think you know, about it.
The film covers a wide range of concerns for humanity; it is provocative and raises more questions than it answers. The overriding question, however, is a valid and challenging one: Is humankind sufficiently evolved to handle its know-how and not misuse technology that could eventually destroy civilization?
This is a question for consideration by everyone.
About 6ai Technologies | 6aitech.com
6ai Technologies is rapidly democratizing access to technical capabilities by utilizing Generative AI purposefully and responsibly, generating highly useful strategic insights that create extraordinary value for humanity.