My robot speaks 10 languages, how about yours?

03 May 2021
Co-design, intersectionality and representation in artificial intelligence.

We’re creating a chatbot at the moment. It is, at times, as complex as you’d think.

The easy bit is deciding what the bot can tell you. The hard part is anticipating how people might ask the questions that get them there.

The way I ask the bot “what are the current COVID-19 restrictions?” will be different from the way you do.

“Do I need to wear a face mask at Woolies?”

“How many people can I have over?”

“wat r the current rona restrictionz?”

Now take that, and localise it into ten different languages. As you can probably guess, the process has been a long one – involving translators and focus groups and linguists and community leaders, to name a few.
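
If you’re curious what “anticipating how people ask” actually looks like under the hood, here’s a rough Python sketch of one simple approach: normalising slang, then matching keywords against a set of intents. Everything in it – the intent names, the slang map, the example questions – is made up for illustration. It isn’t how our bot is actually built (real chatbots typically use trained language models rather than hand-written rules), but it shows why “wat r the current rona restrictionz?” is the hard part.

```python
# Minimal sketch of intent matching via keyword overlap.
# All intents, slang mappings and thresholds here are illustrative only.
import re

# Expand slang and abbreviations into canonical words before matching.
SLANG = {
    "wat": "what",
    "r": "are",
    "rona": "covid",
    "restrictionz": "restrictions",
    "woolies": "shops",
}

# Each intent is a bag of keywords we expect in questions about it.
INTENTS = {
    "current_restrictions": {"current", "covid", "restrictions", "rules"},
    "mask_rules": {"face", "mask", "wear", "shops"},
    "gathering_limits": {"people", "have", "over", "visitors", "home"},
}

def normalise(utterance: str) -> set:
    """Lowercase, strip punctuation, expand slang, return a bag of words."""
    words = re.findall(r"[a-z0-9]+", utterance.lower())
    return {SLANG.get(w, w) for w in words}

def match_intent(utterance: str) -> str:
    """Pick the intent whose keywords overlap most with the utterance."""
    words = normalise(utterance)
    return max(INTENTS, key=lambda intent: len(words & INTENTS[intent]))

for question in [
    "Do I need to wear a face mask at Woolies?",
    "How many people can I have over?",
    "wat r the current rona restrictionz?",
]:
    print(question, "->", match_intent(question))
```

All three questions land on a sensible intent – but only because someone sat down and wrote “rona” into the slang map. Multiply that by ten languages and you can see why the translators, linguists and community leaders matter.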

Think HQ recently enjoyed a subscription to South by Southwest (SXSW) – a conference and festival celebrating the interactive, film and music industries. Scrolling through the SXSW program, we found the session titled Intersectionality and Design for Responsible AI pointedly relevant.

The session – an argument for the importance of representation and diversity in the teams creating the systems of the future – explained that emerging technologies like AI (and like our chatbot) can only build on what they know. The past.

And that can be a problem. AI, of course, has no inherent context. It builds on data from our history – years and years of systematic racism, misogyny and prejudice. Fun.

Things like gender and racial bias can then inadvertently be built in. As an example, the presenters explained that an algorithm might learn “man is to woman, as king is to queen”. Okay. But then, “man is to woman, as doctor is to nurse”, and so on. A pattern, and a problem, emerges.
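
That analogy trick is just vector arithmetic on word embeddings. The sketch below uses tiny hand-crafted 2-D vectors – an assumption purely for illustration; real embeddings are learned from billions of words and have hundreds of dimensions – to show how “doctor minus man plus woman” lands on “nurse” when the data the system learned from associates those jobs with those genders.

```python
# Toy illustration of the embedding analogy "a is to b as c is to ?".
# The vectors are invented (dimensions: roughly "gender" and "role"),
# not learned from text – real embeddings are far larger.
import numpy as np

vectors = {
    "man":    np.array([ 1.0, 0.0]),
    "woman":  np.array([-1.0, 0.0]),
    "king":   np.array([ 1.0, 1.0]),
    "queen":  np.array([-1.0, 1.0]),
    "doctor": np.array([ 1.0, 0.5]),  # a biased corpus pulls "doctor" towards "man"...
    "nurse":  np.array([-1.0, 0.5]),  # ...and "nurse" towards "woman"
}

def analogy(a: str, b: str, c: str) -> str:
    """Solve 'a is to b as c is to ?' with the arithmetic c - a + b."""
    target = vectors[c] - vectors[a] + vectors[b]
    # Return the nearest word to the target point, excluding the inputs.
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return min(candidates, key=lambda w: np.linalg.norm(candidates[w] - target))

print(analogy("man", "woman", "king"))    # queen – the famous analogy
print(analogy("man", "woman", "doctor"))  # nurse – the bias baked into the data
```

The maths isn’t sexist. The data it was fed was.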

As this evolves into actual, real-world technology, we can see how these in-built biases in data start to manifest in products. People with dark skin have reported soap dispensers not recognising their hands. Asian people have found that camera sensors tell them they’re blinking.

To address these biases, we need to be intentional. We can’t just go with the flow – because the flow is backwards.

It comes back to the principles that are core to any good, representative work: have diverse teams working on projects that reflect not only the audience you’re looking to speak to, but the population in general.

This is especially important in AI. Here, we are creating the systems of the future, so they need to reflect the future we want.

It sounds hard, but it’s just about being thoughtful and intentional. No matter what kind of work you do, you are contributing to the building blocks of the future. Here are some key takeaways to think about:

  • Start with people at the centre. And that’s all people. When you’re thinking about your target audience, who’s in it? Why are those people in it? Why aren’t other people?
  • Make your teams diverse, whatever you’re doing. Like attracts like – but we need to disrupt that. If we don’t, we’ll get caught in the same old cycles with the same old outcomes.
  • Remember that people are the sum of their experiences. Intersectionality is the way we describe the compounding effect of discrimination that results from people’s overlapping identities – your ethnicity and your sexuality and your gender, etc. The way I experience the world is different to anyone else’s, and the way I think (and the work I produce) will be different, too.
Joseph McMahon
Copywriter