“Inside AI Unicorn Anthropic’s Unusual $750 Million Fundraise”
Applying applied intelligence to the Valuation Hype Game
This is the title of a recent Forbes article on the latest financing round of Anthropic, the OpenAI rival founded by ex-OpenAI employees. Founded in 2021 and backed by companies including Google, Salesforce and Zoom, Anthropic had already raised a combined $750 million across two earlier funding rounds in April and May. In October, Google agreed to invest up to $2 billion in the startup, a commitment involving a $500 million upfront cash infusion and an additional $1.5 billion to be invested over time.
Anthropic is the developer of Claude 2, a rival chatbot to OpenAI’s ChatGPT that is used by companies including Salesforce-owned Slack, Notion and Quora. Claude 2 can summarize up to about 75,000 words, which could be the length of a book. Users can input large data sets and ask for summaries in the form of a memo, letter or story. ChatGPT, by contrast, can handle about 3,000 words.
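To make that long-context point concrete, here is a minimal sketch of what such a summarization call looks like in practice, assuming the anthropic Python SDK and its Claude 2-era text completions endpoint; the file name, prompt wording and API key are placeholders of my own, not anything taken from the Forbes piece.

```python
# pip install anthropic
import anthropic

# Hypothetical example: asking Claude 2 to summarize a long document as a memo.
client = anthropic.Anthropic(api_key="YOUR_API_KEY")  # placeholder key

with open("annual_report.txt") as f:  # assumed input file, potentially tens of thousands of words
    document = f.read()

response = client.completions.create(
    model="claude-2",                  # the long-context model discussed above
    max_tokens_to_sample=1024,         # cap on the length of the generated summary
    prompt=(
        f"{anthropic.HUMAN_PROMPT} Summarize the following document as a one-page memo:\n\n"
        f"{document}{anthropic.AI_PROMPT}"
    ),
)
print(response.completion)
```

The point of the sketch is simply that the entire document goes into a single prompt, which is only workable because of the large context window the article describes.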
For those of you who follow my writing, one of my main themes has been that AI investing is not led by natural market forces, or by investors rationally weighing how useful the technology is for serving humanity. Nor are we paying attention to the motivations of the big tech companies building it.
What we are not recognizing, because the hype blinds us to it, is a scheme by a small group of corporations with enormous amounts of cash, investing chiefly to protect their market share and to expand their businesses in the future. It is R&D with a twist: fund the research, draw on the new technology later, and hold an economic stake in the company if it succeeds. Or take it over and own it outright, which is how Microsoft is positioning itself with OpenAI. Why do you think Microsoft is investing billions in OpenAI, and had the power to reinstate Sam Altman as CEO in that Game of Thrones board episode the other day?
For big tech, these massive investments in startups extend its power as the arbiter of our lives, reinforcing and expanding market reach. This, of course, is deeply anti-competitive and anti-capitalist, stifling innovation, creativity and diversity of thinking: a tiny group of white males deciding what is good for the rest of us. So the big tech aristocracy works in informal collusion, putting up barriers to entry for everyone else.
What is “unusual” about this financing, as Forbes puts it, is that the lead VC investor, Menlo Ventures (one of the courtiers of the big tech aristocracy), could not fund the massive $750 million through traditional means, so it created an SPV (Special Purpose Vehicle) to do so. This is simply about manufacturing valuations and keeping the money and the game going: pushing valuations higher, booking massive but bogus gains, moving on to the next round of financing, and so on.
These unusual actions are also indicative of what usually happens at the top of a market. As in the past, whether the dot-com bubble or the subprime mortgage market, the AI-hype gravy train is nearing the station, and these kinds of “unusual” moves are warning signals flashing. Or, as former Fed Chair Alan Greenspan, aka The Maestro, once famously put it: irrational exuberance. So when financial instruments rather than the fundamentals of investing lead the way, start sleeping with one eye open, as my Caribbean grandmother used to say. Or watch for when they come to take the punch bowl away.
Secondly, there are real icebergs in the path of generative AI. The New York Times fired the first shot the other day, suing both OpenAI and Microsoft. The lawsuit alleges that Microsoft Copilot, which is built on OpenAI’s GPT models, infringes on the Times’ copyrights by using its vast library to train those models, which apparently can be prompted into outright plagiarism. In essence, this suit goes to the very foundation of generative AI business models. This is what history teaches us: it is the unpredictable (or predictable) events that shape the future most. Those events are the mighty variables that are independent of what you are doing or what you think you know. As in calculus, you can have all the relevant variables in your favour, but the mighty independent variable still comes along and upsets your apple cart.
The Black Swans are always lurking — the danger is in what you don’t know.
These high flyers will be significantly challenged to succeed if they have to pay licensing fees or adjust business models in ways that invite competition and new technology. That is a big risk. Suddenly, zillions of dollars in business run into a massive obstacle, and the Times suit is only the first of many to come. In the UK, for example, the courts have been ruling 100% against companies and individuals trying to patent or copyright work done via AI. In one case the judge said plainly that AI cannot produce original work; that can only come from humans.
The business models of OpenAI and Midjourney, for example, require them to commit plagiarism; there is no other way to say it. If everyone else must pay licensing fees and royalties and adhere to citing sources, why should OpenAI not be required to do the same?
OpenAI’s lobbying campaign is pathetic and reeks of what privileged people who get high on their own hype supply often do when faced with challenges: whining, complaining and making threats. Give everything to us free or we will die. Not fair. We are smart white guys who must be allowed to cheat to win. Either we get to use all the existing IP we want for free, or you won’t get generative AI at all.
“These threats are empty because open-source LLMs are already out in the wild, and a large number of people know how to build them. LLMs may eventually be replaced by better technologies, but for now, they aren’t going anywhere, even if some of the commercial purveyors go out of business,” says Gary Marcus.
OpenAI knows its argument is bullshit too; it knows licensing is inevitable, which is why it has been running around the world working on licensing deals. Having to pay will fundamentally change the economic value proposition for these kinds of enterprises, and once reality comes back into town, valuations will come back down to earth too. I don’t have a crystal ball; we will just have to wait and see. But you can apply some applied intelligence, basic ai common sense, in the meantime.
I simply wanted to give thinking people these couple of highlighted points to consider, and to remind them that history always repeats itself, particularly when you are not paying attention. “Unusual” and hype-driven acts in the market are often warning signals that the top may be nearing. So pump the brakes: generative AI under the prevailing business models, i.e., OpenAI, Anthropic, Midjourney, etc., is not a slam dunk. We are only at the beginning. Apply some applied intelligence and don’t be the usual useful sucker of a retail investor, always left holding the bag in this hot potato valuation game.