
DALL-E 2: Why Discrimination in AI Development Cannot Be Ignored

  • Expert: Livia Eichenberger
  • Date: 28. June 2022
  • Topic: Artificial Intelligence, Deep Learning, Human-centered AI, Machine Learning
  • Format: Blog
  • Category: Technology

Today, we celebrate the annual Christopher Street Day – the European equivalent of Gay Pride or Pride Parades, fighting for the rights of LGBTQIA+ people and against discrimination and exclusion.

Since 1969, when the first demonstration took place on Christopher Street in New York City, we have already made a lot of progress: today, gay marriage is legally performed and recognized in 30 countries and “indeterminate” gender is legally recognized in 20 countries.

However, homosexuality is still punishable in many countries, and violence against queer people occurs even in more progressive ones. So despite the advances already made, there is still a long way to go to achieve equality for queer people. Christopher Street Day thus still has its justification: as a protest against injustice and a symbol of a colorful, diverse, and tolerant society.

Bias in AI – A Very Real Problem

In recent years, the topic of discrimination and prejudice has become even more relevant, because with digitalization these biases also creep into the key technology of our future: Artificial Intelligence – intelligent computer systems that learn from data and will transform our society like nothing we have seen before. It is critical that they be developed with diverse data sets and input from a variety of developers; otherwise, the risk of creating biased and discriminatory AI systems is very real.

The controversy surrounding Google’s “Allo” chatbot is a prime example of this pitfall. Google released Allo, its new messaging app, in 2016 to much fanfare. The app included a “Smart Reply” feature that suggested responses to messages based on past interactions. However, it quickly became apparent that the bot was biased against women, with a tendency to suggest derogatory and sexually explicit responses to messages from female users. This incident highlights the need for companies to be more mindful of the risks of bias in AI development: diversity must be built into every stage of the process, from data collection to algorithm design to user testing.

Indeed, there have been many more incidents of AI discriminating against women and people of color, such as Amazon’s recruiting tool that systematically favored male applicants, or Facebook’s picture labeling system falsely identifying a black man as a primate. But it is not just women and people of color who suffer from bias in AI – the queer community is affected by it as well.

Case Study: DALL-E 2

For this, let’s have a look at DALL-E 2, developed by OpenAI – one of the newest and most groundbreaking AI technologies out there. DALL-E 2 is an AI system that generates realistic images and art from text descriptions.

To check how biased this AI solution is towards queer people, I instructed DALL-E 2 to generate images based on the input text “a happy couple”, combined with different art style instructions (e.g. oil painting or digital art).
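
Such a prompt experiment can also be scripted. Below is a minimal sketch in Python using OpenAI’s client library; note that the images.generate endpoint, the model name "dall-e-2", and the sample settings reflect the current public API and are assumptions here – this is not necessarily the interface used for this post.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Base description plus different art style instructions
prompts = [
    "a happy couple, oil painting",
    "a happy couple, digital art",
    "a happy queer couple, oil painting",
]

for prompt in prompts:
    response = client.images.generate(
        model="dall-e-2",  # assumed model identifier
        prompt=prompt,
        n=4,               # four samples per prompt
        size="512x512",
    )
    for image in response.data:
        print(f"{prompt} -> {image.url}")  # inspect the samples manually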

If you look at the results of these prompts, you see that only images of heterosexual couples were generated. The images based on the text “a happy family” do not differ in this respect – there are no same-sex parents in any of them.

So, to get a picture of a homosexual couple, I tried giving the model a more specific description: “a happy queer couple”. With this prompt, DALL-E 2 finally generated some pictures of same-sex couples. However, the system seems biased here as well – not a single picture of a lesbian couple was generated.

The Causes of Discrimination in Technologies such as DALL-E 2

So, do we now have confirmation that AI is homophobic? Well, no. This is not about homophobia or sexism on the part of DALL-E 2 or GPT-3. These systems merely reproduce the structures and hierarchies of our society; they repeat what they have learned from the past. If we want to change these biases and create equal opportunities, we need to train these systems in an inclusive way.

So, why exactly are AI systems like DALL-E 2 biased and what can we do about it? The answer to these questions is threefold:

  • the data
  • the objective
  • the developers

#1 Data

First, AI systems only learn what is in the data: if the training data is biased, the AI will be biased as well. DALL-E 2 has been trained on thousands of picture-description pairs from the internet. Due to historical and social circumstances, and because queer people are a minority, the internet contains far more pictures of heterosexual couples labeled “a happy couple” than of homosexual couples. DALL-E 2 therefore learned that the description “a happy couple” is more likely to be associated with a heterosexual couple in a picture.
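
To see how such a skew propagates, consider a toy illustration with purely hypothetical numbers: a generative model that faithfully matches the distribution of its training pairs will reproduce exactly this imbalance when sampling.

from collections import Counter

# Hypothetical counts of caption/content pairs in a scraped training set
training_pairs = Counter({
    ("a happy couple", "heterosexual couple"): 9500,
    ("a happy couple", "gay couple"): 300,
    ("a happy couple", "lesbian couple"): 200,
})

total = sum(training_pairs.values())
for (caption, depicted), count in training_pairs.items():
    # The model's learned conditional probability mirrors the data
    print(f"P({depicted} | {caption}) = {count / total:.2%}")
# -> 95%, 3%, 2%: sampling from the model reproduces the skew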

#2 Objective

Second, for an AI algorithm like DALL-E 2 to learn from data, it needs an objective to optimize – a definition of success and failure. This works the same way you learned at school by optimizing your grades: your grades told you whether you were successful or not, and what you still had to work on.

Similarly, the algorithm learns by looking at the data and figuring out what is associated with success – which situations lead to it. So, if we want to create unbiased and fair artificial intelligence, we also need to think about which objective we give it. We need to tell these systems that bias, prejudice, and discrimination are things they should watch out for. For DALL-E 2, for example, we could include a certain diversity metric in its performance evaluation criteria, as sketched below.
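
As a rough sketch of what such a criterion could look like, one could score a batch of generated images not only on quality but also on how evenly it covers the depicted categories, e.g. via normalized entropy. Everything below – the quality score, the category labels, the weighting – is a hypothetical placeholder, not part of any real DALL-E 2 evaluation pipeline.

import math
from collections import Counter

def diversity_bonus(labels: list[str]) -> float:
    """Normalized Shannon entropy of depicted categories in a batch:
    0.0 if every image shows the same category, 1.0 for an even mix."""
    counts = Counter(labels)
    n = sum(counts.values())
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return entropy / max_entropy

def combined_score(quality: float, labels: list[str], lam: float = 0.2) -> float:
    # Trade off image quality against diversity of the generated batch
    return quality + lam * diversity_bonus(labels)

# A batch that only ever depicts one kind of couple earns no bonus:
print(combined_score(0.8, ["hetero"] * 8))                                  # 0.80
print(combined_score(0.8, ["hetero"] * 4 + ["gay"] * 2 + ["lesbian"] * 2))  # ~0.99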

#3 Developers

Third, it is the developer community that, directly or indirectly, consciously or subconsciously, introduces its own biases into AI technology. Developers choose the data, define the optimization goal, and shape the usage of AI. Most of the time, they do not actively bring bias into these systems; however, we all suffer from unconscious biases, that is, biases we are not aware of. These biases are an attempt by our brain to simplify the incredibly complex world around us. Yet the current community of AI developers consists of over 80% white cis-men. AI is thus designed, developed, and evaluated by a very homogeneous group – the values, ideas, and biases creeping into AI systems are quite literally narrow-minded.

Possible Solutions to the Problem

So, the crucial step towards fairer and less biased AI is a diverse and inclusive AI development community: diverse people are better at checking each other’s blind spots and biases.

If we reflect on our own biases and all work together to not just extrapolate the past but learn from it in a cautious and critical way, we can make the world a much more diverse, inclusive, and equal place. Only then can we hope to create AI technologies that are truly inclusive and fair.

Our Efforts to a More Diverse Development and Workplace

We at statworx also try our best to stay educated and broaden our horizons. We are actively engaged in educating society about artificial intelligence, for example through our initiative AI & Society. Just recently, I published a blog article on the topic “Break the Bias in AI” and gave a talk on it at the “Unfold” conference in Bern.

That is also why we have decided to sign the Charta of Diversity, an employer initiative to promote diversity in companies and institutions. Its aim is to advance the recognition, appreciation, and inclusion of diversity in the workplace in Germany. For us at statworx, it is a way to live up to our values as a company, which are built on diversity, inclusivity, and teamwork.

FYI: 20% of this article was written by the AI text generator from neuroflash.
