I’ll Burn $50 Billion a Year

I Don’t Care…We’re Building AGI

Perry C. Douglas
9 min read · May 10, 2024

As the minor upgrades to GPT-4 show, there isn't much material difference or compelling, useful value to be found, and the hype seems to be plateauing. The chickens are coming home to roost for Sam Altman, the Wizard of Oz of AI: selling a big vision and promise while not addressing inconvenient truths.

In a recent talk at Stanford, a grandiose Altman asserted, without qualification, that "we're making AGI," downplaying why his current approaches are not getting us any closer to it. The elephants in the room: hallucinations and unreliability, nothing close to human-like logic and reasoning. And despite the disproportionate boatload of money thrown at OpenAI and generative AI in general, the field continues to struggle. This has been happening for decades; the overpromising about AI did not start with the launch of ChatGPT.

In interview after interview, Altman keeps making unsubstantiated claims about agents being on their way to AGI, again presenting no scientific evidence. The French philosopher Voltaire, in his advocacy of reason, held that skepticism is central to deducing objective truth. Altman also goes against Einstein's process of discovery, which works toward objective truth by avoiding a priori hypotheses: conclusions that start with preconceived notions or ideas.

At Stanford, he portrayed AGI as a sure thing, saying, "I think every year for the next many we have dramatically more capable systems every year," of which there is of course no evidence, just hype. By choosing Stanford for these talks, Altman knows he won't be challenged. As the High Priestess in Dune: Part Two put it, to convince people to go along, it is much easier to start with those who already believe.

As Geoffrey Miller has put it about Altman, "…his implicit message is usually 'We need AGI to solve aging & discover longevity treatments, so if you don't support us, you'll die.'" This can be taken as extreme, but it reads more like desperation than anything else.

Sam just seems like a guy who sold a really good story. Big Tech bought into it, he got a ton of money, and there is no going back: get rich or die trying, as 50 Cent says. He is forced to keep doubling down because so much big money has been applied, and how can all these "smart" people not be right? Well, history is full of examples of exactly these occurrences.

Under such scenarios, frontmen like Altman tend to separate from reality and exist in a hypomanic state, rationalizing and succumbing to hubristic claims. They tend to speak vaguely in cursory, storytelling-like narratives, never going deep enough to substantiate the science behind them.

Altman's narrative is that AGI will solve all of the world's problems, and he's just the one to lead, so you must give him 7 trillion dollars so he can do that for humanity. Anoint him as the arbiter of your lives?

However, he doesn't answer the elephant-in-the-room question: so far, we haven't come close to even understanding how to proceed toward AGI.

So far, OpenAI and Big Tech have been selling narratives and solutions about reducing business head counts, increasing people's anxiety about losing their jobs. Add digital scams powered by GenAI, victims of deepfake porn, and the circumvention of democracy, to name a few top-of-mind harms. Not exactly great for humanity.

We've yet to see evidence of how great GenAI is, or how it may exponentially benefit humanity; instead, its harms are outpacing the potential good. We don't need to perfect AI or reach AGI for it to be useful, but our decisions around the responsible development and use of GenAI, and prioritizing AI safety, will go a long way for humanity.

Jane Rosenzweig of Writing Hacks recently said that “Altman is a guy who has the potential to put whole industries out of work who is saying, with no capital letters, that what he’s doing is what the world has always wanted and then acting surprised that anyone has questions. It’s truly amazing in a sense.”

Altman is a master at producing superficial rhetoric, then hiding from questioning, counterarguments, and alternative scientific perspectives. He tends to distract by insisting it's all about humanity and that OpenAI is a nonprofit. But you had better believe that Microsoft, its main funder and largest shareholder, is all about profit.

At the end of the day OpenAI sells chatbots, and from GPT-1 through GPT-4 that fundamentally remains so. But there is a lot of collateral damage from the Altman/Microsoft pursuits: artists, writers, and publishers (the creatives) are being screwed by OpenAI, their work used by ChatGPT without consent or licensing fees being paid. Copyright law exists for everyone, it seems, except Big Tech.

And the assertion that to get to AGI we just have to give Altman more money and more data is plain stupidity. There is no math for that; all you would be creating is an infinite loop of hallucination!

Why is Altman dangerous? Not because he's a bad person; I don't know him, and I would bet he's a good human being. But many of Shakespeare's plays have taught us that hubris and arrogance can lead to dramatic downfalls. Altman is no visionary, nothing special; the universe is random, and randomness and privilege have gotten him to this position. His narrative shows he's clearly in over his head and grasping, forced to keep subconsciously fooling himself and the public in order to keep raising the capital that feeds the beast. Technology might change, but human nature always stays the same.

“Altman has been honing this formula literally for years, spinning a heady mixture of mysticism, apocalyptic warning, and speculative exaltation, all the while inoculating himself against great fears while hinting at vast fortunes to be made,” says Gary Marcus.

I believe GenAI has a lot to offer, but we must proceed objectively and scientifically; as in calculus, we need proof statements along the way to get us from here to there. Otherwise we're just engaging in philosophy, which can take on the tone of a new tech religion. Further, Altman may be able to fool the Stanford wannabes, but in the common-sense, skeptical real world, speculative narratives and unquestioned assumptions must not be allowed to fly.

The good news for humanity is that many competitors have caught up to GPT-4 or are close, starting to offer similar products, which is good for competition and society; eventually such products will be offered for free. Also, tons of copyright lawsuits from multiple publishers, artists, and writers are pending. We have yet to see net profits from OpenAI, and if they ever come to fruition they'll have to be shared with Microsoft, which is another story having to do with OpenAI's valuation.

Training LLMs is incredibly expensive, so sooner or later basic business principles and common sense should prevail. We haven't heard about any new demos coming or working, any reliable agents, or anything about GPT-5.

So why care about any of this? Because of the opportunity costs: humanity is paying the price.

“In seeking to grab vast resources for an approach that is deeply flawed, Altman is diverting resources from alternative approaches to AI that might be more trustworthy, reliable, and interpretable, potentially incurring considerable environmental costs, and distracting from better ways of using capital to help humanity more generally.”

— Gary Marcus

In the end, after all is said and done, Altman’s place in history will be determined by whether or not he delivers on his promises.

Applying applied intelligence | ai

As Mahatma Gandhi once said: “Truth is by nature self-evident. As soon as you remove the cobwebs of ignorance, it shines clear.”

From Newton's laws of motion (persistence, resistance, and action) to the principles of thermodynamics (order and disorder), there are truths in physics that can help us navigate our thinking, and the universe, effectively.

This process takes one to a higher dimensional level of thought, forcing us to find good explanations before we construct our rationality and arguments.

So, to truly maximize the productive value of language models, we need a more focused, higher-dimensional paradigm of information retrieval, augmentation, and integration, one underpinned by the contextualization of historical examples and authentically achieved, in order to create any real and sustainable value.

Adhering to the laws of the natural and metaphysical world doesn't restrict one's imagination; it enhances and strengthens it. It prevents one's thinking from falling into the abyss of non-scientific pursuit undertaken to satisfy emotional states of existence: fooling oneself and effectively engaging in fiction.

There is always an interplay between certainty and uncertainty, between interconnection, relativity, energy, and intention. So ideas must be tested, and things must add up with good explanations, before one can move forward with meaningful points of view.

…applied intelligence is a scientific methodology that guides users through a proprietary six-step (6ai) process, retrieving relevant factual information from reputable and reliable sources.

The overriding aim is to gather and develop insights that can be taken as reliable and empirically warrantable to develop fit-for-purpose strategies.

Staying within reality also allows us to utilize technology more effectively because we see technology for what it is, a tool, to serve humanity best. Not the other way around. Therefore, the effective utilization of data ecosystems and artificial intelligence, harnessed and utilized properly and responsibly, is a powerful augmenting tool for strategy and outperformance.

Keep an open mind about the universe and focus on finding out what you don’t know first, rather than dependency on what you do know, or think you know. Knowledge, like energy, can be stored, so manage it and be selective on when, where, and how you choose to use it.

Believing in nice stories and being selective with your evidence is not scientific, that’s subjective and emotional, and detracts from your productivity. Highly effective people do not shy away from the objective truth, they seek it out. Because the faster you objectively understand the landscape the better off you’ll be to build successful fact-based strategies for winning.

The best performers amongst us dare to be courageous in seeking out what is objectively true before they form their point of view and pursue their goals. Applied intelligence thinking helps improve understanding of integrated systems of people, money, knowledge, and information, and of how to make good decisions for meaningful value outcomes.

@6ai Technologies

Those, like Altman and Big Tech, seeking to create more wealth for themselves need useful suckers who put emotion before objective analysis: those who buy into nice stories and get hooked as if on a new religion. Don't be that useful sucker for the Big Tech aristocracy.

Stories are highly useful to Big Tech and its billionaires: getting you to believe in their crafted narratives gets you to buy more stuff, which serves their wealth aspirations rather than your own.

Invest in critical thinking and the things that reflect your values. Align your technology choices with what serves your interests best. Technology is a great augmenter but everyone has access to the same technology, so how you use it is what creates your competitive advantage.

The Doppler effect tells us that what we perceive can change with our position, so be diligent and scientific when forming your stances. Establish a physics-based mindset and approach. Don't jump to conclusions; skepticism is your friend.

Having empathy is also critical: it opens your mind to the other person's perspective, helping you examine viewpoints other than your own and become a better critical thinker.

Just as Einstein's theory of general relativity shows that massive objects warp the fabric of spacetime, major events can alter and shape our perspectives and belief systems. Nevertheless, everything always comes back to reality: the laws of nature don't change, regardless of the stories Altman is telling.

Physics explains the universe and math provides the proofs! So becoming more mathematical with your thinking process may be helpful to you.

Accordingly, applied intelligence helps you develop a logical basis for thinking, bolstering and driving the scientific process, which can be further augmented through intelligent technologies, of course.

About 6ai Technologies

6ai Technologies is rapidly democratizing access to technical capabilities by utilizing generative AI purposefully and responsibly, generating beneficial insights for strategy that create extraordinary value for organizations and individuals.


Written by Perry C. Douglas

Perry is an entrepreneur & author, founder & CEO of Douglas Blackwell Inc., and 6ai Technologies Inc., focused on redefining strategy in the age of AI.