Thinking Fast & Slow…at the Same Time

OpenAI — the 7 Trillion Dollar Problem for Humanity

Perry C. Douglas
5 min read · Feb 11, 2024

The hype and fascination with ChatGPT have begun to plateau, if they hadn’t already. Sober, well-informed, critically thinking minds are beginning to ask: where’s the beef in ChatGPT? Microsoft has said that it is fully prepared to “support OpenAI’s mission and we are ready to provide any resources needed to advance your vision…” But the intelligent questions become: what is this going to look like, and what is the risk-management process? What are the potential pitfalls identified, and how are they going to be addressed?

Similar to how an environmental assessment is required before property development can proceed, we need a humanity assessment of OpenAI before we go ahead and even consider handing over $7 trillion to Sam Altman, a guy who has never even exited any of his former companies. Really, he is essentially the next useful “marketable visionary” in line for Microsoft to trot out, to distract from its ultimate goal: expand its market share exponentially and dominate the world.

To do what Altman/OpenAI wants to do will require over $7 trillion, according to many credible AI industry professionals, entrepreneurs, pioneers, academics, activists and well-informed observers alike. Up to now, AI has mainly been about promises, promises, promises. Generative AI in particular is best known for churning out deepfake scams and manipulating people, which has served to degrade the world and circumvent democracy.

Large language models such as ChatGPT only provide you with the most likely or known answers based on the data they have, and of course, everyone has access to them. So how can a user of ChatGPT find any competitive-advantage value in it if everyone has access to the same answers? Further, ChatGPT can never account for randomness, and random events are what continually make our world. So as far as authentic strategic value is concerned, ChatGPT has none. People need to bring thinking back: use skepticism and logic, and apply some applied intelligence to get to the objective truth…about OpenAI.
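The point about everyone getting the same answers can be made concrete with a toy sketch in Python. This is not OpenAI’s actual code; the probability table and function names are made up purely to illustrate that a model which always returns its single most likely continuation is deterministic, so every user asking the same question receives the identical answer.

```python
# Toy illustration: a "language model" that always returns the most
# probable next token. The probabilities below are invented for the example.
NEXT_TOKEN_PROBS = {
    "the best strategy is": {
        "differentiation": 0.6,
        "imitation": 0.3,
        "luck": 0.1,
    },
}

def greedy_answer(prompt: str) -> str:
    """Return the single most likely continuation (deterministic)."""
    probs = NEXT_TOKEN_PROBS[prompt]
    return max(probs, key=probs.get)

# Two different "users" asking the same question get identical output,
# so neither gains an informational edge over the other.
user_a = greedy_answer("the best strategy is")
user_b = greedy_answer("the best strategy is")
assert user_a == user_b
```

Real chatbots add sampling randomness on top of this, but the underlying distribution is the same for every user of the same model, which is the sense in which the answers offer no differentiated strategic edge.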

Sharon Goldman wrote an article in VentureBeat titled “Sam Altman wants up to $7 trillion for AI chips. The natural resources required would be ‘mind-boggling’.” The article came after the Wall Street Journal reported that OpenAI CEO Sam Altman wants to raise up to $7 trillion for a “wildly ambitious” tech project to boost the world’s chip capacity, funded by investors including the U.A.E., which in turn would vastly expand its ability to power AI models.

Further, Fortune reported in September 2023 that AI tools fueled a 34% spike in Microsoft’s water consumption; Meta’s Llama 2 model reportedly guzzled twice as much water as Llama 1; and a 2023 study found that training OpenAI’s GPT-3 consumed 700,000 litres of water. OpenAI has never disclosed its own numbers, but they will surely be mind-blowing. We can’t bury our heads in the sand about the potential costs to, and destruction of, the planet.

It is not surprising that there is already pressure from generative AI industry players to keep coal plants running. Calling for going backwards in the name of technological advancement is ridiculous and counterproductive. The reality is that generative AI has delivered mostly promises rather than measurable results. And no argument can possibly be made that we should tear up the planet and take an unprecedented, enormous risk on something unproven, given its value-to-risk differential for humanity.

OpenAI has not yet turned a profit, nor produced a comprehensive cost-benefit or basic opportunity-cost analysis for humanity. So we should just go out and hand over $7 trillion? That is trillions more than it would cost to eradicate hunger. $7 trillion would create the mother of all “too big to fail.” And there is no bailout package for destroying the planet and the world economy, exacerbating global inequalities, and stoking geopolitical conflict over land and resources, which will most definitely lead to wars.

The destruction of human intelligence capacity would also be colossal: intellectual capital would be no more, and machines would run the world. We humans would serve technology instead of technology serving us. This would effectively be the end of humanity.

Copyright rules wouldn’t be respected or enforced; artists, intellectuals, writers and other content creators would be diminished. The dis- and misinformation and deepfakes we are seeing now will pale in comparison to the future deepfake frauds, cybercrimes, fake books and scams. The world will be unlivable, headed toward a dystopian existence.

“Worse…companies like OpenAI seem content to let society bear all the costs, like some careless industrial plant belching out toxic chemicals back in the day,” says computer scientist Gary Marcus. Wars over resources could become normalized, with corporations funding them, not unlike the Pope and the Roman Catholic Church many centuries ago, raping and pillaging the world for its gold, silver and many other natural riches in the name of spreading Christianity to dominate the world. That just didn’t work out too well for the rest of us. As Sharon Goldman noted, “Shortages of rare earth minerals such as gallium and germanium have even helped inflame the global chip war with China.” Battles over resources are as old as time itself.

Many leaders in AI have signed a letter saying that uncontrolled AI development could kill us. So we need some hard thinking about the future of AI development and about how fuller, more diverse participation from society can be included in the decision-making. This cannot be left in the hands of one man or one corporation: Sam Altman and Microsoft.

Phil Libin, who is both a CEO and a venture capitalist, says that quick investment decisions made primarily on hype and an arms-race mentality are premature, because we essentially don’t know what we are doing. Things are developing way too quickly, he says, driven by phony valuations led by VCs and underpinned by Big Tech.

To be clear, no one is talking about stopping AI development, research and innovation, just employing some basic common sense and adherence to the lessons of the past when it comes to investing. Basic applied-intelligence thinking. AI safety must be a main priority going forward, and that job can never be left in the hands of…the Sam Altmans and Microsofts of the world.

Any big mistake could set AI development and humanity back decades. Screw-ups in AI, unlike with any technology or invention of the past, will have mega and dire consequences for humanity, consequences we may not even be able to imagine in the present.

Reckless and uncontrolled expansion of AI is not cool; it will come with real-life pain and suffering, pulling the world backwards, with economies, societies, the environment and humanity itself as collateral damage. So we must think fast and slow at the same time.


Written by Perry C. Douglas

Perry is an entrepreneur & author, founder & CEO of Douglas Blackwell Inc., and 6ai Technologies Inc., focused on redefining strategy in the age of AI.
