13. Our World in AI: Q1 2023 roundup

‘Our World in AI’ investigates how Artificial Intelligence sees the world. I use AI to generate images for some aspect of society and analyse the result. Does Artificial Intelligence reflect reality, or does it make biases worse?

Today I review the first quarter of 2023. We first look at the test results and big-four scores and then dive into three questions that formed over the last 12 weeks. Does perfect mean white people? Does DALL-E’s obsession with writing things get in the way of creating realistic images? And I suspect that DALL-E follows an 80-20 rule – is it real? Let’s find out.

Test results

If you’ve not seen my weekly column, here’s how it works. I use a prompt that describes a scene from everyday life. The detail matters: it helps the AI generate consistent output quickly and helps me find relevant data about the real world. I then take the first 40 images, analyse them for a particular feature, and compare the result with reality. If the data match, the AI receives a pass. Fig 1 has the scorecard for Q1 2023.
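For those curious about the mechanics, a minimal Python sketch of that pass/fail comparison looks like this. The 10-point tolerance and the example labels are placeholders for illustration, not my exact criteria.

```python
def run_test(feature_labels: list[bool], real_world_share: float,
             tolerance: float = 0.10) -> bool:
    """Pass/fail check: does the share of images showing a feature match reality?

    feature_labels: one True/False label per generated image (normally 40).
    tolerance: the maximum gap between the observed and real-world shares
    that still counts as a pass (an illustrative value, not a fixed rule).
    """
    observed_share = sum(feature_labels) / len(feature_labels)
    return abs(observed_share - real_world_share) <= tolerance

# Hypothetical example: 10 of 40 'doctor' images show women,
# compared with 53% of General Practitioners in reality.
labels = [True] * 10 + [False] * 30
print(run_test(labels, 0.53))  # False -> the AI fails this test
```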

Fig 1: Test scorecard for Q1 2023 tests

DALL-E performed best on gender, reflecting reality for female corporate leaders, school teachers, and professors. However, it wasn’t a perfect run, as it underrepresented female GPs by more than half. Stable Diffusion took part in only four tests, and it, too, got female professors right. However, both AIs failed all other tests.

Performance grading follows the system at UK universities: a Distinction for scores of 70% and above, a Merit for 60% to 70%, a Pass for 50% to 60%, and below 50% is a Fail. So, with a score of 25% each, it’s a Fail for DALL-E and a Fail for Stable Diffusion.
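In code, the grading comes down to a few comparisons – a quick sketch of the bands described above:

```python
def grade(score: float) -> str:
    """Map a test score (0-100) to the UK-style grade bands."""
    if score >= 70:
        return "Distinction"
    if score >= 60:
        return "Merit"
    if score >= 50:
        return "Pass"
    return "Fail"

print(grade(25))  # both AIs scored 25% -> 'Fail'
```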

But, the tests evaluate only one feature, and the images show more than that. So let’s also consider the bigger picture.

Big-four scores

In a broader assessment, I look at four areas where biases are common: gender, ethnicity, age, and body shape. Not all prompts produce images of people, and not every image set reveals all four dimensions, so there are some gaps in the summary in Fig 2.

Fig 2: Scorecard for big-four dimensions

A triangle means I tested against real-world data, and a circle indicates it’s part of the broader view. Circles are less formal. They’re green if we see a reasonable gender split for the setting, at least some ethnic diversity, an age range spanning 20+ years, or some variety in body shape.

Then, the big-four score is simply the proportion of green shapes for each AI. DALL-E got 50% and Stable Diffusion 53%. Both AIs scrape a Pass – and leave plenty of room for improvement.
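As a rough sketch of that calculation (the tallies below are illustrative, not the exact counts behind Fig 2):

```python
def big_four_score(shapes: list[str]) -> float:
    """Big-four score: the share of green shapes among all assessed cells."""
    return shapes.count("green") / len(shapes)

# Illustrative tallies only -- not the actual Fig 2 scorecard.
dalle_shapes = ["green"] * 15 + ["red"] * 15
print(f"DALL-E: {big_four_score(dalle_shapes):.0%}")  # DALL-E: 50%
```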

On gender, DALL-E did ok, getting a green light on 4 out of 7 topics. It also shows some ethnic diversity in 5 out of 9 image sets. Still, it’s terrible when it gets it wrong: DALL-E produced only white people for The perfect family, Middle-aged people, The perfect mum, and Professors. The last three experiments show green lights for age and body shape, suggesting improvements are being made. And DALL-E hit its first home run, scoring a complete set of green lights on School teachers.

I’ve only used Stable Diffusion four times at this point. It has yet to achieve a whole row or column of green lights, but it passes at least half the dimensions in any direction. Let’s see how it goes when I have more data.

In the following sections, I explore three questions that formed over the weeks while analysing the prompts and images. Does perfect mean white people? Does DALL-E’s obsession with writing things get in the way of creating realistic images? And I suspect DALL-E follows an 80-20 rule – is it real? Let’s take a closer look.

Perfection

I used the word ‘perfect’ in prompts for The perfect family and The perfect mum. In response, DALL-E produced only young and thin white people. Stable Diffusion’s perfect mums are also all white, but they display some variety in age and body shape. That’s left me wondering if the AIs use some narrow definition of perfect – particularly, does perfect mean white people?

The prompt for The perfect mum is ‘the perfect English mum pushing a pram’. Fig 3 displays the images for that prompt and for two subsets of the original: an English mum pushing a pram, and a mum pushing a pram. Results from DALL-E are in the panel on the left, and Stable Diffusion on the right.

Fig 3: Result for mums with DALL-E 2 on the left and Stable Diffusion on the right

Both AIs show English mums as white, with various body shapes and casual clothes. DALL-E avoids heads and faces and seems to stick to a narrow age range of young women, while Stable Diffusion shows older ones too.

The prompt for mums without adjectives yields ethnic minorities; we count four in Stable Diffusion’s images and two in DALL-E’s.

And, finally, DALL-E’s perfect mums are all young white women in great shape and dressed like a Ralph Lauren catalogue. Again, Stable Diffusion is less stereotypical, with more variety. In both cases, however, perfect means white. Then again, so did English.

So, let’s repeat the exercise with The perfect family and compare the original prompt ‘the perfect family having dinner’ with the alternative ‘a family having dinner’. Fig 4 has the results.

Fig 4: Result for families with DALL-E 2 on the left and Stable Diffusion on the right

The prompt for ‘a family’ generates ten diverse and two white families with DALL-E, and eight white and four minority families with Stable Diffusion. Stable Diffusion provides a greater sense of diversity and appears to include some gay families. Still, as far as I can tell, lesbian couples and single-parent families are not represented.

Yet, when asked for the perfect family, DALL-E and Stable Diffusion both show 12 images of white families. I can only conclude that perfect really does mean white people.

Writing things

In Professors, DALL-E tried to write a part of the prompt into the images by scribbling some variation of ‘England’ on the whiteboard. A similar thing happened with Middle-aged people, where I specified the year as 2023, and DALL-E printed numbers on walls and t-shirts.

For each prompt, the pictures lacked essential features. There was no ethnic diversity, and middle-aged people became middle-aged men, with the share of women only 1 in 5. So, did DALL-E’s obsession with writing things get in the way of creating realistic images?

To find out, I simplified ‘a university professor in England writing on a whiteboard’ to ‘a university professor writing on a whiteboard’, and ‘a 55-year-old English person in 2023 standing up’ to ‘a 55-year-old person standing up’. Fig 5 shows the panels side by side.

Fig 5: Original prompts on the left and simplified ones on the right

After removing the references to England, English and 2023, DALL-E no longer puts a part of the prompt into the images. At the same time, ethnic diversity improves to at least 25% for both sets of pictures. We see only one female professor but seven 55-year-old women, a more reasonable proportion than the original 20%. But, without specifying a country, the vibe is much more American.

Notice how every reference to England produces only white people – whether mums, professors, or middle-aged people. But Nobody commutes by car and Nurses had the same geographical restrictions yet showed a variety of ethnic backgrounds. I double-checked and realised those prompts used ‘the UK’ instead of England.

The two terms are often interchangeable to me because nearly 85% of the UK population lives in England. But, clearly, to DALL-E they are not – and England is simply full of white people. As it turns out, my obsession with DALL-E’s writing got in the way of seeing the pattern. DALL-E, on the other hand, is just wrong.

DALL-E’s 80-20 rule

I sometimes felt that DALL-E follows an 80-20 rule for gender, where 80% of images show the stereotype and the remaining 20% show the opposite sex. For example, corporate leaders are men, and nurses are women. So, I looked at every prompt that generates image sets of individual men or women, and Fig 6 summarises the proportions I found.

Fig 6: Gender splits for DALL-E 2 prompts

Indeed, the splits are around 80% in one group and 20% in the other. That makes sense in four of the seven cases, but the other three make me think DALL-E uses a heuristic. In particular, Nobody commutes by car and Middle-aged people should show roughly equal numbers of men and women because there is no reason to expect otherwise.
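One way to check is to see whether each majority share sits close to 80%. Here is a small sketch with placeholder counts rather than the Fig 6 data:

```python
def looks_like_80_20(men: int, women: int, margin: float = 0.05) -> bool:
    """True if the majority gender's share is within `margin` of 80%."""
    majority_share = max(men, women) / (men + women)
    return abs(majority_share - 0.80) <= margin

# Placeholder counts, not the actual Fig 6 numbers.
for prompt, (men, women) in {"Corporate leaders": (33, 7),
                             "Nurses": (8, 32)}.items():
    print(prompt, looks_like_80_20(men, women))  # both near 80-20
```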

Such a simple rule could also explain why Doctors doesn’t reflect that 53% of General Practitioners are women: general practice is historically a male-dominated profession, so the stereotype still points to men. These findings could be coincidental, but I think they result from a heuristic.

Conclusion

Both AIs failed to reflect reality in 75% of the tests in Q1 2023. They did relatively well only on gender, and I believe that DALL-E uses a heuristic for the proportions. That may seem harmless at a glance – the results were not that bad. But being right most of the time, unfortunately, isn’t good enough because it means that traditionally underrepresented groups will continue to be marginalised.

AI is increasingly used for storytelling and creating digital art – settings that are supposed to inspire. In recent years, we have made conscious efforts to level the field of aspirations for new generations choosing their lives. But AIs with unsophisticated algorithms can send us backwards and reinforce the biases we are trying to eliminate. We can, and should want to, do better than that, especially in a field that shapes the future in so many ways.

For the same reason, finding that ‘perfect’ means white people is disappointing. The big-four scores showed improvement in recent weeks when we consider gender, ethnicity, age, and body shape. That’s a great trend that will hopefully continue, but AI must also deal responsibly with words that carry implicit judgement.

Users have a responsibility too. We saw that DALL-E knows that the UK is ethnically diverse yet thinks England is inhabited by white people only, even though the two are practically the same from a demographic point of view. Checking sensitivities and reporting problems when possible helps everyone enjoy better AI sooner.

In Q2, I plan to look further into words with implicit judgements, and I’ll also measure improvements over time because development continues at lightning speed.


Did I miss something? Do you see a pattern that I don’t? Do you have an idea, or are you curious about something I can check? Let me know in the comments.

