The Problem With AI, Part II
AI, VCs, & Humanity…
AI is Plato
An example of the Plato realm of development is Normal Computing, a New York-based company that has raised $8.5m in seed funding to further its mission of "enabling artificial intelligence (AI) for critical and complex enterprise and government applications." The round was led by Celesta Capital and First Spark Ventures. Normal's probabilistic AI technology, as I read it, is based on theory that is subject to, or involves, chance variation. Normal is designed to provide unprecedented control over the reliability, adaptivity and auditability of models powered by customers' private data. However, if we apply some of Aristotle's commonsensical approach, Normal's business model appears to be an attempt to correct the impossible language: to correct or lessen the errors and unpredictability of LLMs' deep learning and make them more "reliable." Normal is engaging in Plato's abstract perfection philosophy, layering hallucination algorithms on top of hallucination algorithms. Throwing more data and algorithms at the problem won't fix what can never be. So Normal can run as many loops as it likes, but at the end of the day it is just producing infinite math philosophy. This is the first-step fallacy: the belief that because you have taken the first step toward solving a problem, the problem must be solvable by your method.
Normal Computing is built on a fallacy about deep learning, machine learning and neural networks: conflating narrow intelligence with general intelligence. It's speculative, just trying things and seeing what works. "Right now, what we are doing is not science but a kind of alchemy," says Eric Horvitz, Chief Scientist at Microsoft. And at the end of the day this is just about money: the more layers some young graduate student can train, the more money can be demanded. But this is not science, it's deception, influenced by the VC money ecosystem.
John Ruskin puts it best: "The work of science is to substitute facts for appearances and demonstrations for impressions." We can always have impressions of things, but the truth is always pursued through facts…reality. If we are not careful, we'll unwittingly allow scientists to straddle and blur the line between natural science and philosophy, or wishful thinking. AI is a manufactured mechanical production, and that's great, because it has a practical value proposition for humanity. However, we must draw a conscious line between real and artificial; we must not conflate the two, and we must see the purpose in each.
Philosopher and mathematician Bertrand Russell brings things home for us. He suggested that mathematics could be contrived, used or seen as a game of sorts: a created, artificial system of rules and procedures. This is unlike real math, which is grounded in nature, the way calculus explains the universe. Such a game could operate without any inherent meaning, without grounding in or connection to nature. Something purely syntactical, it can manipulate symbols and formal rules to create its own impressions. Essentially, the impossible language runs the game system. According to Russell, it will create its own sequences of logic and reasoning, its own interpretation of the universe, its own world. Then the game begins to engage in its sophistication of self-preservation (SSP): it must dominate to thrive.
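Russell's "game" can be made concrete with a toy formal system. The two rewrite rules and the start symbol below are invented purely for illustration: the point is that the system grinds out ever-longer strings by blind symbol manipulation, producing structure without any of the symbols meaning anything at all.

```python
# A minimal sketch of a purely syntactic "game": rules rewrite symbols
# with no reference to the world. Rules and start symbol are invented.
RULES = {
    "A": "AB",  # wherever "A" appears, it becomes "AB"
    "B": "A",   # wherever "B" appears, it becomes "A"
}

def rewrite(s: str) -> str:
    """Apply every rule to every symbol in one left-to-right pass."""
    return "".join(RULES.get(ch, ch) for ch in s)

def play(start: str, steps: int) -> list[str]:
    """Generate the sequence of 'theorems' the game produces."""
    seq = [start]
    for _ in range(steps):
        seq.append(rewrite(seq[-1]))
    return seq

print(play("A", 4))  # ['A', 'AB', 'ABA', 'ABAAB', 'ABAABABA']
```

Patterns do emerge (the string lengths follow the Fibonacci numbers), which is exactly the seduction: the game generates its own internal "logic" while remaining connected to nothing outside itself.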
Problem-solving is a very complex process, so building machines to help us with it is highly intelligent. Processing the facts and turning them into utility tools for optimal outputs is progress, improving our civilization. However, if we are not aware of what exactly we are doing, or letting happen to us, then we lose ourselves, our humanity, and control over our dominion. Whoever controls AI controls the universe. And worse, once AI starts controlling us, we no longer control ourselves. So, yes, this is about control, you better believe that!
The first defence of that control is awareness of the singular, Plato-like fascination in the VC community. Below are some of the very latest (as of June 14, 2023) examples of VC/start-up funding that illustrate the point.
- Primer Technologies, an AI and predictive analytics startup, has raised $69m in Series D funding led by Addition. Primer's generative AI offering includes a vast language model similar to generative AI tools such as ChatGPT and OpenAI's platform.
- UK-based artificial intelligence (AI) startup Synthesia has raised $90 million at a valuation of $1 billion in a funding round led by venture capital firms Accel and Nvidia-owned NVentures. Synthesia's technology is said to be helping over 50,000 businesses create custom AI avatars for instructional and corporate videos. However, concerns that such video platforms are mainly being used to create deepfakes appear to be the technology's main challenge so far.
- Alphabet-backed AI startup Anthropic has raised $450m in its latest funding round, bringing its total funding to $1bn and valuing the company at $4bn. Anthropic is an AI safety and research company whose goal is to build reliable, interpretable, and steerable AI systems. Anthropic was founded by former members of Microsoft-backed OpenAI and the latest funding round was led by Spark Capital, with participation from Google, Salesforce Ventures, Sound Ventures and Zoom Ventures. The herd.
- Chicago-based startup Paro has raised $25m in a Series C funding round to expand its finance and accounting platform for freelancers and enterprise clients. The company has created AI-matching software that proactively connects experts with job opportunities, aiming to provide a trusted and reliable experience for all parties. Paro also offers its freelancers a range of tools and insights, including marketing and business development services. Is this really necessary? Slap "AI" on anything these days and you can make up a useless value proposition.
- Artificial intelligence (AI) startup Cohere has raised $270m in a Series C funding round led by Inovia Capital, valuing the Toronto-based company at $2.2bn. Other participants in the round included tech firms Nvidia, Oracle, Salesforce Ventures and SentinelOne, along with financial institutions and VC firms Mirae Asset, Schroders Capital, Thomvest Ventures and Index Ventures. Cohere develops large language models that enable AI to learn from new data and then apply it to applications such as interactive chat systems or text generation. Again, just throw more data and algorithms at the impossible language.
- Normal Computing was addressed earlier, but Normal is like a different version of Cohere: no clear, distinct value proposition, just throwing more data at it with a fancier explanation. Its reliable-loop theory seems to amount to going around in circles.
- New York-based start-up Bito has launched an AI-powered coding assistant, which is designed to generate source code from natural language prompts. The start-up has secured $3.2m in funding from investors including Eniac Ventures and has a range of backers from the technology industry, including Google’s vice-president of engineering, Reza Behforooz. Again, the same old boys club, doing the same old thing, steering our universe to a duller shade of grey.
- Mistral AI, a new startup founded in May 2023 by former AI researchers from Google DeepMind and Meta, has raised $113m in seed funding just two months after launching. The latest investment values the company at $260m, making it a competitor to OpenAI. The herd, the herd, the herd!
So, you don't need AI or an LLM to connect the dots and extract the problems with the above examples: everything is about the same thing, the pursuit of Plato-like perfection in LLMs, layer after layer after layer of different versions of the same constructed hallucinations. Dominated by the few…groups of white men making start-up investing an exclusive old boys' club. This is anti-innovation, anti-creative: an industry unto itself.
Julie Ott is Associate Professor of History at the Institute of New Economic Thinking, The New School, and Director of the Heilbroner Center for Capitalism Studies. Ott breaks down the history and the myth of venture capital in her new book:
Wealth Over Work: The Origins of Venture Capital, The Return of Inequality, and the Decline of Innovation.
She goes back to the origins of VC, to FDR, and through the 1960s and 1970s onward, showing how the VC industry was lobbied into existence by a small group of white males to protect and grow white wealth, with the underlying purpose of excluding Blacks. Efforts were concentrated on changing policies, institutions, tax laws and more to allow VCs to invest with preferred tax status and treatment (carried interest, for one). Ott goes on to break down the myths and untruths the VC industry is built on.
In short, according to her well-researched analysis, VCs don't meaningfully help the economy or entrepreneurship. It's mainly a disingenuous game to get the highest contrived valuation possible: a wealth-creation construct for VCs.
The main myths to be busted are as follows:
- 94% of the Unicorn-Entrepreneurs took off without VC, and 76% never used it.
- 95% of ventures don’t get VC money because the VC industry is driven by what is “hot” versus the really good ideas that can make positive change. They are blinded by herd mentality, and hubris of course.
- 80% of VC-funded ventures fail, and a tiny 1/100 become home runs.
- The top 20% of VCs are in Silicon Valley, and this 20% earn 95% of all IPO profits; studies also suggest that VCs don't add any meaningful value, mainly because they have never built successful ventures themselves. The same research suggests that Angels add more value than VCs, because many Angels are authentic business owners with real-life experience as entrepreneurs.
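The failure rates above imply a hit-driven return profile, and a back-of-the-envelope calculation shows why the herd chases "hot" deals. The payoff multiples below are hypothetical, chosen only to illustrate the arithmetic; only the 80%-fail and 1-in-100-home-run rates come from the figures cited above.

```python
def fund_multiple(p_fail: float, p_homerun: float,
                  fail_x: float, modest_x: float, homerun_x: float) -> float:
    """Expected return multiple on a VC fund, given outcome probabilities
    and payoff multiples for failures, modest exits, and home runs."""
    p_modest = 1.0 - p_fail - p_homerun
    return p_fail * fail_x + p_modest * modest_x + p_homerun * homerun_x

# 80% of bets return ~0x, ~19% return a modest 2x (hypothetical),
# and 1% are home runs. Whether the fund works at all hinges on
# how big that one home run is:
print(f"50x home runs:  {fund_multiple(0.80, 0.01, 0.0, 2.0, 50.0):.2f}x")
print(f"100x home runs: {fund_multiple(0.80, 0.01, 0.0, 2.0, 100.0):.2f}x")
```

Under these assumed numbers, a 50x home run leaves the fund underwater (0.88x) while a 100x winner rescues it (1.38x), which is precisely why the industry bids up anything that might be "the next OpenAI."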
So VCs don't start the fire; they only jump in after and fuel the flames. Many have never done it themselves; they are essentially analysts running a wealth-creation scheme. Unfortunately, in the case of the AI/LLM fascination, they can fan the flames so much that they may end up burning down the entire forest of humanity.
VCs use capital as a weapon to create wealth and control it, which gives them political power to direct industries and economies in their favour. The AI-tech sector is advancing toward an autocratic regime…run by young white guys who think they are Jesus Christ. We could be driving towards a world of tyranny by a tiny minority, where technology, not guns, becomes the tool to rule.
As best-selling author and NYU professor Scott Galloway points out, we're also going to witness a tsunami of increasingly sophisticated scams through impossible-language fascination investing. As the chart below shows, there has been a sizable increase since 2020, from an overly capitalized and overly hyped sector that just ends in nothingness. The fascination with generative artificial intelligence, not whether there is a useful product there, is responsible for continued deal flow. The competitive juices and motivations in the VC sector continue to ramp up the space. Startups are even being advised to push generative AI to inflate valuations and raise capital, even when there is no proprietary intellectual property there. Overnight success stories are extremely rare, but the herd continues to drive up stupid valuations in the contrived belief that the next OpenAI success story is their next investment. Even OpenAI, the company at the centre of the generative AI craze, was founded back in 2015.
“AI is all anyone wants to talk about as Collision tech conference opens in Toronto” is the title of a recent Globe and Mail article. The article’s theme is “Crypto is out and artificial intelligence is in.” The herd is moving again, and that should concern us. Is this just another hype cycle, the article asks? As with dot-com, Crypto, and now LLM/AI, the laws of gravity usually prevail. The story is usually the same; it isn’t any different this time, just a different time and place. Profitable, truly sustainable business models coming out of AI remain elusive for the vast majority of these startups. “I always worry when we talk too much about something,” says Sophie Forest, managing partner at Montreal-based Brightspark Ventures, adding “that it’s important to step away from the hype” in order to critically assess the reality, and the risk.
In 2008 we had startups trading on potential future cash flow; now a plethora of AI startups are trading on future funding, building bullshit valuation schemes to sell to the next hot-potato round of VC funders, says highly regarded risk analyst Nassim Nicholas Taleb in a recent appearance on CNBC. Taleb, who correctly called the 2008 crash, when asked by the host whether another similar crisis is coming, said: “Of course…risk is right in front of us…it’s a white swan, not a black swan.” He continued: “[…look at the conditions…a couple of the two biggest risks are 1. we have a 100 trillion+ real estate market with ridiculous valuations when interest rates were at 3%, now it’s 7% and climbing…and I would also certainly stay away from new technology companies, particularly unproven AI, history shows that new tech will always get crushed in a market downturn…crisis.]”
In another interview, he went on to explain that those “tumours,” or asset bubbles, created an illusion of wealth for Americans but like always they will pop. “Disneyland is over, the children go back to school,” Taleb told Bloomberg. “It’s not going to be as smooth as it was the last 15 years” when interest rates were essentially at zero.
Furthermore, with companies like Synthesia, discussed earlier, much of the technology is unfortunately being used to produce deepfake videos and propaganda for the internet: to perpetuate and deepen racial and economic inequities and circumvent democracy. Galloway also gives us the example of AI résumé-screening technology, where the LLM placed the greatest weight on two candidate characteristics: “the name Jared, and whether they played high school lacrosse.” This essentially amounts to valuing a certain demographic: young white males. And if the system is asked about a CEO, it is 99% likely to show a white male, because white males are shown more in the data…the media…you see Elon Musk almost every damn day in the media, so the machine is simply learning from that data, which reinforces its biases. And the most shocking example I’ve come across in my research: among the many problems Tesla is experiencing, its self-driving cars are good at identifying pedestrians, but mainly light-skinned ones, not darker-skinned ones. We have enough problems with cops killing Black folk; we don’t need Teslas running them over too.
In all seriousness, again, the machine just acts on the biased data it’s fed. And with young white males training the machines, existing societal problems of inequality and injustice are just exacerbated.
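The mechanism can be shown with a deliberately crude toy model. The dataset below is invented for illustration (real systems are vastly more complex), but the failure mode is the same: if the training data skews 9-to-1, even the most naive "model", one that simply predicts the majority answer, reproduces that skew with total confidence.

```python
from collections import Counter

# Hypothetical, deliberately skewed training data: 9 of 10 "CEO"
# examples are white men, mirroring what the media over-represents.
training_data = ["white man"] * 9 + ["woman of colour"] * 1

def naive_model(examples: list[str]) -> tuple[str, float]:
    """'Learn' by memorising the majority answer in the training data,
    returning it along with the share of examples that support it."""
    answer, freq = Counter(examples).most_common(1)[0]
    return answer, freq / len(examples)

answer, confidence = naive_model(training_data)
print(f"Show me a CEO -> {answer} ({confidence:.0%} of training data)")
```

The model hasn't "decided" anything about who can be a CEO; it has only counted. That is the point: bias in, bias out, and more layers of learning only make the counting more sophisticated, not more fair.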
Amazon scientists spent years trying to develop a custom AI résumé screener but abandoned it when they couldn’t engineer out the system’s inherent bias toward male applicants.
Ask a generative AI system (e.g., ChatGPT, Midjourney) for an image of an architect, for example, and you’ll almost certainly get a white man. Ask for an image of a “social worker,” and the systems are likely to produce a woman of colour. And two lawyers recently had to go before a federal judge and apologize for submitting a brief with fake citations they’d unwittingly included after asking ChatGPT to find cases. Maybe they would do well with Trump; I hear he’s looking for deepfake lawyers.
These occurrences could easily be passed off as some small errors but “small errors” are creating big problems! This shit is just messed up!
There will always be disruptive times during economic transitions, but in the end, and over the longer term, there should also be a net positive gain for society. For example, in 1760 Richard Arkwright invented cotton-spinning machinery; at the time there were 2,700 weavers and 5,200 spinning-wheel operators in England, 7,900 workers in total. Twenty-seven years later, 320,000 workers had jobs in the industry, roughly a fortyfold increase.
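As a quick sanity check on the employment numbers above (the exact percentage quoted for this era varies with the baseline chosen), the growth can be computed directly from the worker counts:

```python
# Employment in English cotton textiles, per the figures above.
weavers, spinners = 2_700, 5_200
before = weavers + spinners   # 7,900 workers in 1760
after = 320_000               # workers 27 years later

growth = (after - before) / before
print(f"{before:,} -> {after:,}: about {growth:.0%} growth (~{after / before:.1f}x)")
```

Either way you slice it, the industry employed roughly forty times as many people a generation later, which is the "net positive gain" the transition argument rests on.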
If we take the automation of manufacturing that began decades ago in the US, and the outsourcing of manufacturing jobs by US companies to China, the chickens have come home to roost over the years. What those US companies gained in efficiency and lower labour costs decimated the American middle working class, pushing many of these people into the category of poor and uneducated whites who have now become Trump MAGA supporters. Their main goal now is to blame others and claim victimhood, drive racism and hate, and circumvent democracy. And during that same period, China has become an economic superpower and one of the largest foreign holders of American debt, while America engages in the absurdity of the Trump reality show and elects idiots to Congress.
So when we think about the value of AI, we must not think only about how many people can be laid off in the short term to boost corporate profits. We must also think about the long-term consequences of uncontrollable AI: about the prosperity it can create and, equally, the displacement and suffering of communities. No one is saying to stop or even pause AI R&D, as some have suggested. What is being said here is to take a longer-term view and exercise critical thinking about AI’s implementation in our society, and about what we want the world to look like.
Just as in the dot-com era, history tends to repeat itself when you are not a student of history, or just not paying attention, and it isn’t any different this time. In the VC world, short-term thinking and the bullshit valuation game will have very bad outcomes for entrepreneurs and society. Furthermore, if the broader population is not involved in technology development, investment and policy, on multiple levels, humanity could be headed down a road of great peril. AI is here to stay, there is no doubt about that, and that’s a good thing! However, uncontrollable AI and VCs present clear dangers to humanity.
Technological change is part of human evolution; this is nothing new. But we need diversity of thinking, ideas, technology, people and capital investing. Most importantly, we need Aristotle-like thinking at the forefront. We need to stop the constructed-hallucination frenzy and the plain lies.
Getting the best from technology while mitigating harm to humanity is how we must approach the future of AI. We must build more practical strategies to solve the world’s most complex problems, and that doesn’t happen by throwing more data at deep learning’s impossible language. It requires instead a practical and holistic vision and investment agenda, in which people, humanity and the common good remain firmly in control of AI.