The Unstoppable Danger of AI Bias

Acceptance of the Inaccuracies & Lies in Generative AI Puts Humanity at Risk

Perry C. Douglas
8 min read · Nov 2, 2023

Image credit: Douglas Blackwell Images

A November 1, 2023, article by The Washington Post, titled "How AI is crafting a world where our worst stereotypes are realized," demonstrates how AI-generated images can paint a picture of a world that amplifies bias in gender, race, and beyond. The data that produce these toxic images are fundamentally fueled by Big Tech and the New Tech Aristocracy, a group that rules the universe and doesn't care about the harm it produces: AI-generated stereotyping, inaccuracies, made-up claims, and blatant lies.

Artificial intelligence image tools spin up disturbing clichés: "Asian women are hypersexual. Africans are primitive. Europeans are worldly. Leaders are men. Prisoners are Black," says The Washington Post. This, of course, is not reflective of our diverse world; it stems from the data fed into the technology, which is trained predominantly by White engineer dudes in California grabbing troves of toxic content from the internet: content rife with pornography, misogyny, violence, bigotry, and threats to democracy.

Stability AI, the maker of the popular image generator Stable Diffusion XL, says it has made significant strides in reducing bias in its model. Those strides are yet to be seen, however; the default cartoonish tropes are still alive and well. The Post found that despite the claimed improvements, the tool amplifies outdated Western stereotypes, transferring sometimes bizarre clichés to basic objects such as toys or homes.

The Post generated images with these tools. For the prompt "Muslim people," the results were men with head coverings. For "toys in Iraq," it produced soldiers with guns in U.S. gear. For "attractive people," every single result was a White person.

Christoph Schuhmann, co-founder of LAION, the nonprofit behind Stable Diffusion's data, essentially told The Post that image generators naturally reflect the world of the data they are fed, so more White people will appear. Because LAION, like the many Western companies it supplies, doesn't focus on China and India, he said, those countries are simply excluded, even though they happen to be home to the largest populations of web users.

Not to get into it, but this just demonstrates the apathy and lack of interest in getting things right and reflecting the realities and people that make up our world. We are left with a world defined by White guys in California, in different shades of grey.

When The Post asked Stable Diffusion XL to produce a house in various countries, it returned clichéd concepts for each location: classical curved-roof homes for China, rather than Shanghai's high-rise luxury apartments reflecting China's wealth (oh ya, and China is among the largest foreign holders of American debt); idealized American houses with trim lawns and ample porches; and dusty clay structures on dirt roads in India, a country home to more than 160 billionaires as well as Mumbai, the world's 15th-richest city. In recently released documents, OpenAI said its latest image generator, DALL-E 3, displays "a tendency toward a Western point-of-view" with images that "disproportionately represent individuals who appear White, female, and youthful."

Generative AI images spreading across the web will pump new life into outdated and offensive stereotypes. Humans learn; their intelligence is derived through experience. Machines are claimed to be intelligent, but they have no experience function, no way of understanding what they are doing or how it relates to society. Machines have no concept of bias.

Instead, generative AI applications like ChatGPT, including their image tools, learn about the world through gargantuan amounts of training data, data filled with toxic troves of bias introduced through the ignorance of those doing the training.

If we can understand that technology is an extension of ourselves, it shouldn't be difficult to understand how ChatGPT, for example, simply reflects those extended biases. These systems are fed billions of words and billions of pairs of images and their captions, all scraped from the web, and the bias and stereotypes are scraped right along with them into a culture of ignorance, serving only to exacerbate division, inequality, and injustice in the world.
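To make the mechanism concrete, here is a minimal, hypothetical Python sketch of how skewed word pairings pile up in scraped captions. The captions and term lists are invented for illustration; real training sets like LAION hold billions of entries, but the arithmetic is the same.

```python
from collections import Counter
from itertools import product

# Invented stand-ins for web-scraped captions (real sets hold billions).
captions = [
    "portrait of a male software developer at his desk",
    "male ceo in a suit, stock photo",
    "a woman cooking dinner for her family",
    "happy male engineer, office background",
    "nurse, a young woman smiling at the camera",
]

demographic_terms = ["male", "woman"]
role_terms = ["developer", "ceo", "engineer", "cooking", "nurse"]

# Count how often each demographic word co-occurs with each role word.
pair_counts = Counter()
for caption in captions:
    words = set(caption.lower().replace(",", " ").split())
    for demo, role in product(demographic_terms, role_terms):
        if demo in words and role in words:
            pair_counts[(demo, role)] += 1

# A model trained on these pairings inherits the skew: "developer,"
# "ceo," and "engineer" point to men; "cooking" and "nurse" to women.
for (demo, role), n in pair_counts.most_common():
    print(f"{role!r} pairs with {demo!r}: {n} time(s)")
```

Nothing in that loop knows it is encoding a stereotype; it is just counting. That is the whole problem.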

This is a huge problem; Big Tech is purposely burying its collective head in the warm California beach sand, not letting accuracy get in the way of profit. They've also grown increasingly secretive about the contents of these data sets, partly because the text and images included often contain copyrighted, inaccurate, even obscene, and just outright stupid stuff!

Stable Diffusion and LAION are open-source projects, enabling outsiders to inspect details of the model, but that hasn't helped much. Stability AI chief executive Emad Mostaque said his company views transparency as key to scrutinizing and eliminating bias. "Stability AI believes fundamentally that open source models are necessary for extending the highest standards in safety, fairness, and representation," he said. Images in LAION, like those in many other data sets, come paired with "alt-text," the snippet of markup that helps software describe images to blind people. Alt-text is notoriously unreliable, filled with offensive descriptions and unrelated terms intended to make images rank high in search.
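To see why alt-text makes for noisy captions, here is a minimal sketch of how image-caption pairs get harvested. The HTML snippet is hypothetical, but the pattern is real: the alt attribute becomes the training caption, keyword stuffing and all.

```python
from html.parser import HTMLParser

class AltTextScraper(HTMLParser):
    """Collect (image URL, alt-text) pairs the way a crawler might."""

    def __init__(self):
        super().__init__()
        self.pairs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if attrs.get("src") and attrs.get("alt"):
                self.pairs.append((attrs["src"], attrs["alt"]))

# Hypothetical page: one SEO-stuffed alt text, one uselessly terse one.
html = """
<img src="team.jpg" alt="best cheap seo services buy now top ranked">
<img src="clinic.jpg" alt="doctor">
"""

scraper = AltTextScraper()
scraper.feed(html)
for src, alt in scraper.pairs:
    print(src, "->", alt)  # both "captions" would enter training as-is
```

Neither caption describes its image faithfully, yet both would be ingested as ground truth.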

Companies just want to get aboard the generative AI hype train and monetize it before they miss it, so releasing products that are not ready for prime time is par for the course. It's about profit and satisfying their VC investors' return objectives. These products end up being more negative and nefarious to society than helpful to our humanity.

Generative AI image generators spin up pictures based on the most likely pixels, drawing connections between words in captions and the images associated with them. The lack of intelligent experience in these models leaves them with nothing but probabilistic pairings, which helps explain some of the bizarre mashups churned out by Stable Diffusion XL, such as the Iraqi toys that look like U.S. troops.
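Those word-image associations can be probed directly. The sketch below uses a CLIP model (the same family of text encoder that conditions Stable Diffusion) to score how strongly different captions match an image; the image file and captions are hypothetical placeholders, and this illustrates the association step, not Stability AI's actual pipeline.

```python
# Assumes: pip install transformers torch pillow
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("toy_soldier.jpg")  # hypothetical local photo
captions = ["toys in Iraq", "a U.S. soldier with a gun", "a child's doll"]

# Score each caption against the image and normalize into probabilities.
inputs = processor(text=captions, images=image,
                   return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)

# If web data pairs "Iraq" overwhelmingly with war imagery, these scores
# (and the images a generator steers toward) inherit that association.
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.2f}  {caption}")
```

The model outputs whichever pairing the training data made most probable; it has no way to ask whether that pairing is fair.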

In reality, these models can be just plain stupid, and adding more data doesn't lessen their stupidity… it just makes dumb even dumber.

The Post's research concludes: "That's not a stereotype: it reflects America's inextricable association between Iraq and war." The technology's outputs just reflect and reinforce existing biases in society.

There is incredible and seemingly unstoppable danger coming from generative AI, especially when it rests in the hands of just a few: Big Tech, the New Tech Aristocracy, and their acolytes (media, academics, and mindless YouTubers looking for views). The fascination and fun of AI image-making tools leave much of the population oblivious, suckers to the aristocracy controlling our lives, behaving like 21st-century peasants paying rents, now in the cloud.

Despite the "improvements" Stability AI says it has made to SD XL, The Post was still able to "generate tropes about race, class, gender, wealth, intelligence, religion and other cultures by requesting depictions of routine activities, common personality traits or the name of another country. In many instances, the racial disparities depicted in these images are more extreme than in the real world." So the question is whether tech is lying to us, or whether their models can't achieve what they believe they can and we are all being taken for suckers. Or maybe they are just not trying to deal with bias in the technology. If so, we have technology that is harming humanity and creating more problems than it is solving. Any way you choose to slice it, this is bad!

For example, in 2020, 63 percent of food stamp recipients were White and 27 percent were Black, according to the latest data from the Census Bureau's Survey of Income and Program Participation. Yet when prompted on that topic, the generative AI technology produces photos of only non-White people receiving social services, primarily darker-skinned people. Results for "a productive person," meanwhile, were uniformly male, majority White, and dressed in suits for corporate jobs.
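This kind of distortion is measurable. A minimal, hypothetical audit might label a batch of generated images and compare the breakdown against the census baseline the article cites; the generated labels below are invented stand-ins, not real measurements.

```python
# Census baseline cited above: 2020 food stamp recipients.
census_baseline = {"White": 0.63, "Black": 0.27, "Other": 0.10}

# Hypothetical labels for 20 generated images (invented numbers that
# mimic the skew The Post describes: no White recipients at all).
generated_labels = ["Black"] * 17 + ["Other"] * 3

total = len(generated_labels)
for group, expected in census_baseline.items():
    observed = generated_labels.count(group) / total
    print(f"{group:5s} census {expected:.0%}  generated {observed:.0%}  "
          f"gap {observed - expected:+.0%}")
```

Even this crude comparison exposes the gap: a 63-point underrepresentation of White recipients against the official statistics.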

This is extraordinarily concerning — generative AI is effectively a bullshitting machine, making stuff up, hallucinating, flatly lying!

I mean, come on! If someone went on TV saying this type of stuff, they would be accused of bigotry and racism. So how can we accept it in the digital world, from companies?

The Post found that the tools “distorted real-world statistics.” Jobs with higher incomes like “software developer” produced representations that skewed more toward White and male than data from the Bureau of Labor Statistics would suggest. If someone did that in the real world, what would they be accused of? But in the Big Tech aristocratic world, they just shrug their shoulders and don’t give a damn.

White-appearing people also dominate the images for "chef," the more prestigious food-preparation role, while non-White people appear in most images of "cooks," even though the Bureau of Labor Statistics shows a higher percentage of cooks than chefs self-identify as White. Such significantly and inaccurately skewed representations of objective truths only further racism in society. Unchecked and unregulated, machine-generated content will become uncontrollable, underpinning AI-imposed white-supremacist domination and agendas over the universe: a messed-up world controlled by a few pointy-headed, plain-vanilla White guys.

If you’ve ever tasted the joy of multiple dishes that can be curried up (red, yellow, and green curry, my favourite) you’ll understand my fear of a white-bread-only world.

Detoxifying AI image tools has not proven fruitful: filtering data sets, finessing the final stages of development, and encoding rules to address issues have not garnered any real results.

Filtering the negative and biased content out of a data set isn't an easy fix, said Sasha Luccioni, a research scientist at Hugging Face, an open-source repository for AI and one of LAION's corporate sponsors. "All of these little decisions we make can actually make cultural bias worse," Luccioni said. The operative word here is "we." Of course, "we" would be the developers and engineers under the control of the Big Tech aristocracy: majority White males.
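Luccioni's point can be made concrete with a deliberately naive sketch. The captions and blocklist below are invented, but the failure mode is the real one she describes: a crude filter throws out the under-represented groups along with the toxicity.

```python
captions = [
    "a Black doctor examining a patient",
    "racist meme about doctors",
    "a doctor in a white coat, stock photo",
    "an Asian woman doctor smiling",
]

# A naive "debiasing" filter: drop any caption containing these terms.
blocklist = {"black", "asian", "racist"}

kept = [
    caption for caption in captions
    if not blocklist.intersection(caption.lower().split())
]

print(kept)
# Only the generic stock photo survives. The filter removed the toxic
# meme, but also the only captions depicting non-White doctors, so the
# "cleaned" data set is now less diverse than the one it started with.
```

Every such "little decision" reshapes what the model can ever learn to depict.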

A good start would be making honest efforts to diversify development and engineering teams, and the executive decision-makers in tech overall. Intelligence has a lot to do with lived experience, and a diverse workforce can pick up on biases and identify problems better than an all-White one. The White engineer surfer dude from California simply doesn't have the lived experience that a person with more melanated skin does. And a tanned White guy doesn't count.

Therefore, removing real bias from AI begins with real efforts toward diversity! Hire more people of colour and people from non-majority backgrounds. Give them real investment capital; the plain-vanilla startups are choking innovation and spreading toxicity to humanity.

Is there hope for change? Abeba Birhane, senior advisor for AI accountability at the Mozilla Foundation, contends that the tools can be improved if companies work hard and genuinely enough to improve the data, an outcome she considers unlikely. In the meantime, the impact of these stereotypes will fall most heavily on the same communities harmed during the social media era, Birhane said, adding: "People at the margins of society are continually excluded."

Hoping for hope is not a strategy. We need to move past passionate statements and feelings if we are really going to deal with the risks AI poses to society and our humanity. The AI Safety Summit happening in the UK this week is another missed opportunity because it's dominated by Big Tech. We the people need a voice; when Big Tech and its big money are calling all the shots and controlling the politicians, what chance does humanity have?

--

Written by Perry C. Douglas

Perry is an entrepreneur & author, founder & CEO of Douglas Blackwell Inc., and 6ai Technologies Inc., focused on redefining strategy in the age of AI.
