Karma is a Bitch
The Inherent Fragility of an OpenAI-Led Ecosystem
While OpenAI wants $20 a month for its "AI magic," DeepSeek's attitude is: "Here you go, do your thing!" Sign up with Gmail, and you're in. No credit card, no hassle, no subscription headaches.
Meanwhile, OpenAI is keeping its code locked up tight, doing that whole premium subscription thing (you know the popup — “unlock advanced features!”), while DeepSeek is just trying to make AI accessible and expand its business by giving stuff away. This is simply good business. And thanks, OpenAI, for starting things off with your “non-profit,” but you got way too big for your britches and pulled the old bait and switch with people. Once burnt, twice shy.
OpenAI has responded to the DeepSeek surprise by claiming, as reported in The New York Times, that the Chinese AI start-up used OpenAI's proprietary models to train its own open-source models. OpenAI says that DeepSeek "potentially breached their intellectual property," but when asked for more detail, it declined to comment further or to provide any evidence to back up its claims.
These claims are highly doubtful, and they smack more of desperation than anything else. Even if DeepSeek did whatever OpenAI claims, so what? This is what open source is about…democratizing AI so others can use it to build more software. And remember, OpenAI was originally formed as a non-profit for the 'betterment of humanity,' to democratize AI as an open-source resource for practitioners, i.e., for developers and individuals, to use, learn from, and build new stuff with.
However, OpenAI's snake oil salesman, Sam Altman, decided to flip the script on everyone to serve his financial interest, driving out the talent who believed in the original intent and in AI safety. Emboldened with billions from Microsoft, Altman decided to position OpenAI as a 'growth' company, pursuing wealth first and abandoning the original non-profit principles and vision.
So, they should be the last to talk about "breaches." OpenAI's whole business is built on breaching and infringing on copyrights: stealing from writers, artists, and countless content producers, breaking all kinds of copyright laws to satisfy its need for free data to fuel its models. For example, the New York Times has a major copyright lawsuit against OpenAI and Microsoft, accusing them of using millions of its articles to train their models without permission or compensation. There is no doubt that without free data, OpenAI wouldn't have been able to build what it has. So, Karma is a bitch! Deal with it now.
DeepSeek is not fundamentally different from OpenAI, except for cost, of course, and it has exposed the major con: OpenAI and the Big Tech ecosystem have been selling us on how they need to spend billions and even trillions of dollars to build these LLM models. But DeepSeek demonstrated that open-source LLMs can be built at a fraction of the cost. They have pulled back the curtain on the Wizard of Oz, aka Sam Altman.
And you can expect more companies from around the world to come up with products similar to DeepSeek's, especially in the geopolitical environment being set by Donald Trump. Countries are learning that the world is moving into an era in which economic cooperation can no longer be taken for granted.
But for capitalism to function in the best interest of the global community, we absolutely can't have all the knowledge and wealth in the hands of a very few billionaires. This is harmful to entrepreneurship, which is the engine of global economic growth, harmful to consumers, and harmful to the global capitalist system, creating great inefficiencies, imbalances, and ever more acute problems to solve.
It is important to note that MIT reported something similar, two years back, to what DeepSeek is proving now. But with all the ChatGPT hype, the report was likely shelved or drowned out by the noise and nonsense.
Nevertheless, the MIT Technology Review Insights report titled The Great Acceleration: CIO perspectives on generative AI effectively showed that you could indeed train models for dramatically less than what Big Tech was telling us.
- "That smaller open-source models…could rival the performance of large models and allow practitioners to innovate, share, and collaborate…one team built an LLM using the weights from LLaMA at a cost of less than $600, compared to the $100 million involved in training GPT-4." (A rough sketch of what that kind of low-cost fine-tuning looks like follows this list.)
- The report went on to say that "Much of this technology can be within the reach of many more organizations. It's not just the OpenAIs and the Googles and the Microsofts of the world, but more average-size businesses, even startups."
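To make that point concrete, here is a minimal sketch of the general pattern the report describes: take already-trained open weights and fine-tune them cheaply with a parameter-efficient method such as LoRA, instead of training from scratch. This is an illustration only, not the MIT team's or DeepSeek's actual recipe; the base model, dataset, and hyperparameters are placeholders, and it assumes the Hugging Face transformers, peft, and datasets libraries.

```python
# Illustrative only: a generic low-cost fine-tune of an open-weight model with LoRA.
# The base model, dataset and hyperparameters are placeholders, not the actual
# setup behind the sub-$600 result cited in the MIT report.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

BASE = "meta-llama/Llama-2-7b-hf"  # placeholder open-weight model

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)

# LoRA trains only small low-rank adapter matrices; the billions of base
# weights stay frozen, which is what keeps the compute bill tiny.
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
               task_type="CAUSAL_LM"),
)

# Placeholder instruction dataset; the cheap fine-tunes the report points to used
# tens of thousands of instruction/response pairs, not web-scale corpora.
data = load_dataset("yahma/alpaca-cleaned", split="train[:1000]")
data = data.map(lambda ex: tokenizer(ex["instruction"] + "\n" + ex["output"],
                                     truncation=True, max_length=512))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("out/lora-adapter")  # the adapter is megabytes, not gigabytes
```

The particular recipe isn't the point; the point is that the expensive part, pretraining, has already been paid for in the open weights, so the add-on training can run on a single rented GPU for a few hundred dollars, roughly the scale the report describes.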
Both MIT and DeepSeek show us that there is no competitive moat for OpenAI! It’s not a viable business, just a commodity.
I'm not one to float conspiracy theories, but one can't rule out that this university research was suppressed by the Silicon Valley Big Tech aristocracy, the powers that be. You know how it is with billionaires putting pressure on universities not to go against their business interests. If they do, they'll wake up the next day to find research grants and big billionaire donor funding gone!
OpenAI has been driving the AGI narrative through good old-fashioned propaganda and hype, a constructed delusion, to make us believe that what couldn't logically be true was true: a new religion with Sam Altman as the high priest.
Information and information ecosystems are never neutral; they are always used for an objective. They can be formatted and delivered to gain power and control, which has been the goal of the Big Tech aristocracy and its oligarch billionaires.
But the observable, objective truth is that OpenAI is a commodity business, and DeepSeek, too, just flooded the market with cheaper supply. Markets will adjust as they always do, and prices will come down across the board in the sector. But in the near term, DeepSeek has a market advantage. Goldman Sachs Chief Information Officer Marco Argenti said, as reported in The Information, that his team is also interested in using DeepSeek. So, if DeepSeek is good enough for Goldman, that should be the latest indication that open-source models have truly arrived after a long struggle to catch up to closed-source rivals.
NYU professor, computer scientist and AI expert Gary Marcus has often compared OpenAI to WeWork, which is a fitting analogy because both companies were built on hype, smoke and mirrors. Ultimately, that situation went awry for WeWork, and it seems the same pattern is unfolding for OpenAI. OpenAI is “highly overvalued, and we’ve just witnessed their business model sort of unravel — a 157 billion dollar valuation is hard to legitimize when you’re losing $5 billion-plus annually,” said Marcus on CNBC earlier this week.
It is pretty straightforward: if DeepSeek offers what OpenAI charges for, at a considerably lower price or for free, with similar or better value, then Houston, OpenAI has a problem!
DeepSeek will impact the chip supply game; there will be supply and demand adjustments as reality and human psychology work things out. But the big winners in all this will be the consumer and, for a change, some truth in the universe. The real 'big problem' for OpenAI is that LLMs are just not reliable enough, and their hallucination problem is incurable. In the end, this, in my view, is the fatal flaw it must deal with, because the chickens are coming home to roost.
The $500 billion "StarGate" initiative Altman and the useful idiot Trump came out with last week may have been, in stock market terms, the "top of the market." But it's impossible to predict. And one can't help but think that the universe might have stepped in with DeepSeek…to deep-sink StarGate. Either way, humanity will take it.
The landscape has changed virtually overnight, which shows how fragile the OpenAI narrative is, and with OpenAI burning money, it certainly looks like another potential WeWork. As with WeWork, we may also see OpenAI's valuation plummet, and I couldn't help but notice that the CEO of SoftBank, the main multi-billion-dollar investor and chief WeWork promoter, was standing behind Altman at the White House StarGate announcement. History repeats, especially when you're not paying attention.
The reality is AI is anybody's game! OpenAI was just on the hustle, trying to lock down knowledge in Altman's and other billionaires' financial interests; that's the long and short of it, but that's never good for capitalism and human progress.
OpenAI/LLMs are not the holy grail. There are many more breakthroughs to come, and the majority of them we can't even think of yet. So don't listen to those who are trying to sell you their stuff! Furthermore, LLMs are by now well understood, and there is little more that can effectively be done with them.
Even Facebook's chief AI scientist, Yann LeCun, said at a conference in 2023 that he wouldn't advise PhD students to go into LLM research, as reported by Fortune magazine: "Meta's chief AI scientist Yann LeCun admitted during a talk in Paris this week that he's not exactly a fan of the current state of chatbots and the large language models (LLMs) they're built on." He told the Meta Innovation Press Day crowd, "They say it's not safe. They are right. It's not. But it's also not the future." He went on to argue that today's LLMs, such as OpenAI's GPT-3 and GPT-4, which undergird ChatGPT, will eventually be replaced by better and more robust algorithms.
However, the problem for now is that an entire wealth-first ecosystem has been created, with an abundance of resources, including financial, academic, IP, technical and more, and a vast underlying wealth network. This ecosystem has become arrogant, hubristic and overconfident, in Shakespearean fashion, and that will lead to a great downfall.
You just can't bypass reality, the laws of nature and physics, which set the limits of technology. Believing you can do anything scientifically if you just throw more money at it is stupid and delusional. But it is, nonetheless, the prevailing delusional tech bro aristocracy thinking, spun into an ecosystem built for the singular pursuit of wealth rather than for authentic innovation and discovery to build great companies that can advance civilization.
It's an ecosystem for the deceptive valuation game, constantly raising funds to create inflated valuations and startup paper-billionaires. This excess has bred complacency and intellectual laziness! Like the spoiled children in the hit show Succession, these immature adults possess so much wealth and abundance that they mistake success for merely throwing more cash at a problem. But they lack skill, intellect, perseverance, and empathy, which are among the key characteristics of problem-solving and achieving great things. They are incapable of doing anything meaningful with their lives or contributing meaningfully to a company or organization.
Suffering is important and necessary; it forges resilience and empathy. And suffering with limited resources, scarcity, can drive determination, perseverance, creativity, and resourcefulness: necessities for invention.
China found itself isolated and deprived of access to the best AI technologies and IP, particularly the abundance of top GPUs. The actions of the US, aimed at slowing the advancement of AI in China, did the opposite by creating conditions of scarcity. This scarcity led China to become even more focused and more determined to find effective solutions, igniting China's entrepreneurial spirit to compete and win!
It triggered what we all know about evolutionary processes: humans have always been able to apply their intelligence not only to survive but to thrive in scarcity, figuring out challenging environments through the sophistication of self-preservation.
Stressful conditions generate robust responses, strengthening adaptation and invention. According to Stoic philosophy, suffering can indeed build resilience because it forces individuals to confront adversity, develop mental fortitude, and learn to manage their reactions to difficult situations, ultimately helping them thrive in the face of adversity. This is often described as "embracing the obstacle" or "amor fati" (love of fate). The US miscalculated its actions, helping to build antifragility in the Chinese AI ecosystem and, at the same time, exposing OpenAI's fragility!
This is well explained in, and is part of, the philosophy of Nassim Nicholas Taleb's best-selling book Antifragile: Things That Gain From Disorder. "Humans tend to do better with acute than with chronic stressors," Taleb writes.
The Demand-Buying Game
OpenAI has done a good con job of convincing many that it's a growth company when it is a commodity business. It has also been able to position itself as an incumbent leader in Silicon Valley and has created an ecosystem to drive its narrative. This is similar to a multi-level marketing scheme that continuously has to go out and raise new money and bring in new suckers. But just like with MLM, only those at the very top ever make money.
OpenAI wouldn't have been able to do what it has without an abundance of GPUs, i.e., Nvidia chips. Nvidia has invested, and continues to invest, in OpenAI, providing it with more cash to turn around and buy more Nvidia chips. This creates the false impression of natural demand in the marketplace when, in reality, it is purchased demand. If we don't keep our eyes wide open and apply our human superpower of skepticism, we won't recognize the play, and we'll keep holding these myopic views and being easily swayed by bogus Big Tech narratives, narratives often supported by dense, ill-informed media looking for hyped stories to sell.
The upper-level players are interdependent: for Nvidia to continue to grow, it needs OpenAI and its minions to keep up the hype game of bigger and bigger LLMs. And OpenAI needs Nvidia's stock price to keep rising so it can say, look, there is great demand happening around bigger and bigger language models, and so on: more consumption built on contrived, rigged demand. OpenAI must continue to keep consumers of LLMs on the line with its science fiction story of "we are going to build AGI soon." And the wilfully ignorant herd continues to stampede to its demise.
As Nvidia's stock price skyrockets, it provides the appearance of legitimate and rational market demand; it also provides more cash for Nvidia to invest across the board to ensure the fake, growing demand continues and that there is enough money for companies in the ecosystem to buy more chips. This merry-go-round goes around and around; where it stops, nobody knows. Maybe DeepSeek has stopped it, or just slowed it down enough that intelligent people caught on it can jump off. However, it is hard to predict markets driven by irrational human psychology; unfortunately, history tells us there must be a crash and blood in the streets before the suckers get the message.
And this is usually how the bubble inflates: ever-increasing, phony, made-up valuations and skyrocketing stock prices, raising ever more money for unproven systems and companies. Simultaneously, however, the system grows increasingly fragile, and that fragility extends to the broader US economy.
In 2024, the S&P 500 Communications Services and Information Technology index gained 37.39%, following 2023's gain of 57%. By comparison, in 2024, the broader S&P 500 Index gained 25.02%. Without a doubt, the vast majority of stock market returns come from technology, and the top performers are overwhelmingly AI-driven or AI-related. Even in sectors classified as Financials and Communication Services, AI drives much of the stock market performance. But, as always, when the bubble bursts, it will take the US economy with it.
As the ecosystem expands and valuations and wealth grow for the players in the Silicon Valley ecosystem game, money can make cowards of many, and everyone goes along with it because it's all about the money, always! The ecosystem becomes lax as everyone toes the line, because they will be punished if they step out of line or upset the money cart. Again, the ecosystem weakens further, leaving an opening for determined countries and companies, like China's, to come in. Enter DeepSeek, with many more DeepSeek-like companies on the horizon.
So, the wealth-abundance approach has hangover consequences, but OpenAI must continue to keep the music playing, doubling down even though it knows demand has been artificially generated. The US economy goes deeper into the OpenAI abyss, but we can't turn back now; we're too big to fail. So, the ecosystem then goes out and finds even bigger marks, a whale, i.e., President Donald Trump's White House backing of the StarGate project.
All of these actions are meant to protect the incumbents and keep them going, but at the same time, capitalism suffers when so much wealth is concentrated in so few hands. When everyone is drinking the Kool-Aid, critical thinking, true innovation and entrepreneurship are stifled, and a new tech oligarchy system is forged. And there is no one left to think differently, challenge the status quo, and create world-changing companies.
Nevertheless, the ecology of market forces will prevail…back to natural equilibrium, the state of balance between all things in an ecosystem. The fragile will shatter, and the resilient will prosper. Markets will crash and burn for rejuvenation purposes as the bumpy ride back to equilibrium takes its natural course. The usual stuff.
DeepSeek has validated the 6ai approach, i.e., leveraging the incumbents' investments in open-source models to build more specific solutions for practical purposes that anyone can use. Why reinvent the wheel if we can take what we need from an already-trained open-source model? This is the underlying approach that 6ai Technologies is using to build its strategy development software.
If OpenAI has spent billions upon billions training models, and DeepSeek is doing it much smarter, for less than $10 million, then it makes no rational sense not to leverage these models. Our spending will go toward our unique value proposition and useful features.
6ai Technologies is not trying to replicate LLMs; quite the opposite. We are building off of open-source models, taking what is useful to us and configuring it into our proprietary IP, FLM-T and CCQF, toward a highly specific strategy development product for organizations and individuals to use.
This thinking is also the basis of our proprietary applied intelligence | ai process — our process is our code! And ai does it better! It’s highly focused, reliable, practical, easier to manage and significantly less expensive than training Large Language Models from scratch.
The raison d’être for applied intelligence is to provide a disciplined framework for learning and applying complexes of empirical facts through a process of logic to generate productive scientific insights for people and organizations to build and execute strategy.
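As a purely hypothetical sketch (FLM-T, CCQF, and the applied intelligence process itself are proprietary and not shown here), "our process is our code" can be pictured as an already-trained open-source model serving as a swappable backbone, with the domain expertise encoded as a fixed, auditable sequence of steps wrapped around it. All names, steps, and prompts below are invented for illustration.

```python
# Hypothetical sketch only; not 6ai's FLM-T/CCQF implementation.
# The open-source LLM is a swappable backbone; the value sits in the fixed,
# auditable process wrapped around it.
from dataclasses import dataclass
from typing import Callable

@dataclass
class StrategyStep:
    name: str              # e.g. "situation analysis"
    prompt_template: str   # must contain a {context} placeholder

def run_process(llm: Callable[[str], str], facts: str,
                steps: list[StrategyStep]) -> dict[str, str]:
    """Drive any prompt-in/completion-out model through a fixed sequence of steps,
    each building on the accumulated context from the steps before it."""
    results: dict[str, str] = {}
    context = facts
    for step in steps:
        results[step.name] = llm(step.prompt_template.format(context=context))
        context += "\n\n" + results[step.name]
    return results

# Hypothetical usage with placeholder steps; `my_open_source_llm` would wrap a
# locally hosted open-weight model or any comparable completion endpoint.
steps = [
    StrategyStep("situation analysis",
                 "Summarize the facts below as a situation analysis:\n{context}"),
    StrategyStep("strategic options",
                 "Given the analysis so far, list three strategic options:\n{context}"),
]
# outputs = run_process(my_open_source_llm, "Organization facts go here...", steps)
```

The design choice this illustrates is the one argued above: put the spending into the process and the value proposition, not into re-training the backbone.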
So whether you're a small business owner, a non-profit organization, an institution, a student, an individual, or even a small independent consultant, 6ai acts as a huge knowledge, expertise and insight resource partner for whatever strategy you need to develop to help you outperform in the 21st century. 6ai's software isn't about replacing professionals or people; it's about being an augmenting ally that enhances productive human value for both the individual and the organization. Win-win.
6ai provides an easy-to-use/do-it-yourself software solution that doesn’t require any special training or skills…accessible to all and at a fraction of the cost of hiring consultants, specialists, or advisors.
See the interface and business model images below: