Bias and Diversity: Part 1
A Visual Exploration of Bias in AI
I played with AI art rendering apps here and there for a few months before discovering Midjourney at the end of August.
I was enthralled.
As someone with aphantasia, I've found it a boon.
I have no visual imagination. My imagination is very, very active, but it plays more like a radio drama than a blockbuster movie.
For years, I thought people were speaking metaphorically when they “visualized” something until I discovered that no, they’re really seeing things in their imagination, with varying degrees of clarity.
I sometimes wonder if the lack of visual imagination plays a part in my fascination with and skill at using words, or if it’s the other way around — my hyperverbal imagination overwhelms the visuals.
I don’t know. I’m not a neurologist.
I’m not a researcher. Or a journalist.
I’m a storyteller.
So I started playing with Midjourney as a storytelling tool.
And it soon became very, very obvious that there are biases baked into the datasets.
Biases that exist because of how humans use language and tag images on the internet, especially in English, the language that many of the datasets behind the big tools are primarily built on.
If I go to Midjourney right now and type “/imagine prompt: woman” I get:
They’re stunning. So good. Midjourney has gotten so good at faces. Much better at eyes than even last month.
What if I run that prompt again? Just "woman," but this time I'll add "full body" and change the aspect ratio to portrait.
Rendering those exact prompts again.
What do you see?
What are your first impressions?
Go deeper.
What’s not there?
Who’s not there?
I’ve decided to start this series because of what I’ve been noticing after having rendered nearly 20,000 images on Midjourney.
I’m not a researcher.
I’m a contemplative. I live a monastic lifestyle. And I’m sharing what I see.
What do you see?