Timnit Gebru and the fight to make artificial intelligence work for Africa

The way Timnit Gebru sees it, the foundations of the future are being built now. In Silicon Valley, home to the world’s biggest tech companies, the artificial intelligence (AI) revolution is already well under way. Software is being written and algorithms are being trained that will determine the shape of our lives for decades or even centuries to come. If the tech billionaires get their way, the world will run on artificial intelligence. 

Cars will drive themselves and computers will diagnose and cure diseases. Art, music and movies will be automatically generated. Judges will be replaced by software that supposedly applies the law without bias and industrial production lines will be fully automated — and exponentially more efficient. 

Decisions on who gets a home loan, or how much your insurance premiums will cost, will be made by an algorithm that assesses your creditworthiness, while a similar algorithm will sift through job applications before any CVs get to a human recruiter (this is already happening in many industries). Even news stories, like this one, will be written by a program that can do it faster and more accurately than human journalists. 

But what if those algorithms are racist, exclusionary or have dangerous implications that were not anticipated by the mostly rich, white men who created them? What if, instead of making the world better, they just reinforce the inequalities and injustices of the present? That’s what Gebru is worried about.

“We’re really seeing it happening. It’s scary. It’s reinforcing so many things that are harming Africa,” says Gebru. 

She would know. Gebru was, until late 2020, the co-director of Google’s Ethical AI program. Like all the big tech companies, Google is putting enormous resources into developing its artificial intelligence capabilities and figuring out how to apply them in the real world. This encompasses everything from self-driving cars to automatic translation and facial recognition programs. 

The ultimate prize is a concept known as Artificial General Intelligence: a computer that is capable of understanding the world as well as any human and making decisions accordingly. 

“It sounds like a god,” says Gebru. 

She was not at Google for long. Gebru joined in 2018, and it was her job to examine how all this new technology could go wrong. But input from the ethics department was rarely welcomed. 

“It was just screaming about issues and getting retaliated against,” she says. The final straw came when she co-authored a paper on the ethical dangers of large language models, the systems used for machine translation and autocomplete; her bosses told her to retract it.

In December 2020, Gebru left the company. She says she was fired; Google says she resigned. Either way, her abrupt departure and the circumstances behind it thrust her into the limelight, making her the most prominent voice in the small but growing movement that is trying to force a reckoning with Big Tech — before it is too late to prevent the injustices of the present being replicated in the future. 

“Gebru is one of the world’s leading researchers helping us understand the limits of artificial intelligence in products like facial-recognition software, which fails to recognise women of colour, especially black women,” wrote Time magazine when it named Gebru one of the 100 most influential people in the world in 2022. 

“She offers us hope for justice-oriented technology design, which we need now more than ever.”

Artificial intelligence is not yet as intelligent as it sounds. We are not at the stage where a computer can think for itself or match a human brain in cognitive ability. But what computers can do is process incomprehensibly vast amounts of data and then use that data to respond to a query. Take Dall-E 2, the image-generation software developed by San Francisco-based OpenAI that created The Continent’s cover illustration this week. 

It can take a prompt such as “a brain riding a rocket ship heading towards the moon” and turn it into an image with uncannily accurate — sometimes eerie — results. But the software is not thinking for itself. It has been “trained” on data, in this case 650 million existing images, each of which has a text caption telling the computer what is going on in the picture. This means it can recognise objects and artistic styles and regurgitate them on command. Without this data, there is no artificial intelligence. 
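
To make “training” concrete, here is a deliberately toy sketch in Python. It is nothing like OpenAI’s actual architecture: the vocabulary, captions and pixel values are invented, and a linear least-squares fit stands in for a deep neural network. What it does show is the essential dependency: the system can only reproduce patterns present in its pairs of images and captions.

```python
# A toy illustration (not Dall-E's architecture) of caption-conditioned training.
# Everything here is invented: tiny 4x4 "images" with one-word captions.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["moon", "rocket", "brain"]                # the model's whole world
captions = ["moon", "rocket", "brain", "moon", "rocket"]
images = rng.random((len(captions), 16))           # flattened 4x4 pixels each

def encode(word):
    """One-hot encode a caption over the fixed vocabulary."""
    vec = np.zeros(len(vocab))
    vec[vocab.index(word)] = 1.0
    return vec

# "Training": fit a linear map from caption features to pixels.
X = np.stack([encode(c) for c in captions])
W, *_ = np.linalg.lstsq(X, images, rcond=None)

# "Generation": the model regurgitates what its training pairs taught it.
print(encode("moon") @ W)   # pixels for the prompt "moon"

# A prompt outside the vocabulary ("dragon") simply raises an error:
# without data, there is no artificial intelligence.
```

Scale the vocabulary up to natural language, the pixels up to photographs and the linear map up to a network with billions of parameters, and you have the rough outline of the real thing.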

Like coal shovelled into a steamship’s furnace, data is the raw material that fuels the AI machine. Gebru argues that all too often the fuel is dirty. Perhaps the data is scraped from the internet, which means it is flawed in all the ways the internet itself is flawed: Anglo- and Western-centric, prone to extremes of opinion and political polarisation, and riddled with stereotypes and prejudices. Dall-E 2, for instance, thinks that a “CEO” must be a white man, while nurses and flight attendants are all women. 
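
Researchers expose this kind of skew with counting audits of the data itself. The sketch below is a minimal, hypothetical version of the idea: given a handful of invented captions standing in for a scraped corpus, it tallies which gendered words co-occur with an occupation, making visible the imbalance that a model trained on that corpus would absorb.

```python
# A minimal data-audit sketch: count gendered words co-occurring with an
# occupation in a (hypothetical) scraped caption corpus.
from collections import Counter

captions = [
    "portrait of a male CEO in his office",
    "the CEO adjusts his tie before the board meeting",
    "a nurse checks on her patient",
    "a flight attendant welcomes passengers aboard, her uniform pressed",
]

MALE = {"he", "his", "him", "male", "man"}
FEMALE = {"she", "her", "female", "woman"}

def gender_counts(term):
    """Tally gendered co-occurrences for captions mentioning `term`."""
    tally = Counter()
    for text in captions:
        words = set(text.lower().split())
        if term in words:
            tally["male"] += bool(words & MALE)
            tally["female"] += bool(words & FEMALE)
    return tally

print("ceo:", gender_counts("ceo"))      # skews entirely male here
print("nurse:", gender_counts("nurse"))  # skews entirely female
```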

More ominous still was an algorithm developed for the United States’ prison system, which predicted that black prisoners were more likely than white prisoners to commit another crime, predictions that led to black people spending longer in jail. 

Or perhaps, in one of the great paradoxes of the field, the data is mined through old-fashioned manual labour — thousands of people hunched over computer screens, painstakingly sorting and labelling images and videos. Most of this work has been outsourced to the developing world — and the people doing the work certainly aren’t receiving Silicon Valley salaries. 

“Where do you think this huge workforce is? There are people in refugee camps in Kenya, in Venezuela, in Colombia, that don’t have any sort of agency,” says Gebru. 

These workers are generating the raw material but the final product — and the enormous profits that are likely to come with it — will be made for and in the West. “What does this sound like to you?” Gebru asks.

Timnit Gebru grew up in Addis Ababa (Timnit means “wish” in Tigrinya). She was 15 when Ethiopia went to war with Eritrea, forcing her into exile, first in Ireland and then in the US, where she first experienced casual racism. A temp agency boss told her mother to get a job as a security guard, because “who knows whatever degree you got from Africa”. A teacher refused to place Gebru in an advanced class because “people like you” always fail. 

But Gebru didn’t fail. Her academic record got her into Stanford, one of the world’s most prestigious universities, where she hung out with her friends in the African Students Association and studied electrical engineering. It was here that both her technical ability and her political consciousness grew. 

She worked at Apple for a stint and then returned to the university, where she developed a growing fascination with artificial intelligence. “So then I started going to these conferences in AI or machine learning, and I noticed that there were almost no black people. These conferences would have 5 000 or 6 000 people from all over the world but one or two black people.” 

Gebru co-founded Black in AI, a network for black professionals in the industry to come together and figure out ways to increase representation. By that stage, her research had already shown how this racial inequality was being replicated in the digital world. A landmark paper she co-authored with the Ghanaian-American-Canadian computer scientist Joy Buolamwini found that facial recognition software is less accurate at identifying women and people of colour — a big problem if law enforcement is using this software to identify suspects. 
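
The method behind that finding is, at its core, disaggregated evaluation: measure a system’s error rate per demographic subgroup instead of in aggregate. A minimal sketch, with hypothetical results standing in for a real benchmark, shows how a respectable overall accuracy can conceal a much worse one for a particular group:

```python
# Disaggregated evaluation in miniature, using hypothetical classifier results.
from collections import defaultdict

# (subgroup, did the classifier get this face right?)
results = (
    [("lighter-skinned men", True)] * 6
    + [("darker-skinned women", True)] * 2
    + [("darker-skinned women", False)] * 2
)

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok   # True counts as 1

# The single headline number hides the disparity...
print(f"overall: {sum(correct.values()) / sum(totals.values()):.0%}")  # 80%

# ...which only appears when accuracy is broken out per subgroup.
for group in totals:
    print(f"{group}: {correct[group] / totals[group]:.0%}")  # 100% vs 50%
```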

Gebru got her job at Google a couple of years later. It was a chance to fix what was broken from inside one of the biggest tech companies in the world. But, according to Gebru, the company did not want to hear about the environmental costs of processing vast data sets, or the baked-in biases that come with them, or the exploitation of workers in the Global South. It was too busy focusing on all the good it was going to do in the distant future to worry about the harm it might cause in the present. 

This, she says, is part of a pernicious philosophy known as long-termism, which holds that lives in the future are worth just as much as lives in the present. “It’s taken a really big hold in Silicon Valley,” Gebru says. This philosophy is used by tech companies and engineers to justify decisions in product design and software development that do not prioritise immediate crises such as poverty, racism and climate change or take other parts of the world into consideration. 

Abeba Birhane, a senior fellow in Trustworthy AI at the Mozilla Foundation, says: “The way things are happening right now is predicated on the exploitation of people on the African continent. That model has to change. Not only is long-termism taking up so much of the AI narrative, it is something that is preoccupied with first-world problems. 

“It’s taking up a lot of air, attention, funding, from the kind of work Timnit is doing, the groundwork that specialist scholars of colour are doing on auditing data sets, auditing algorithms, exposing biases and toxic data sets.”

In the wake of Gebru’s departure from Google, some 2 000 employees signed a petition protesting against her dismissal. Although not acknowledging any culpability, Sundar Pichai — the chief executive of Alphabet, Google’s parent company — said: “We need to assess the circumstances that led to Dr Gebru’s departure, examining where we could have improved and led a more respectful process. We will begin a review of what happened to identify all the points where we can learn.” 

In November 2020, a civil war broke out in Ethiopia and once again Gebru’s personal and professional worlds collided. As an Ethiopian, she has been vocal in raising the alarm about atrocities being committed, including running a fundraiser for victims of the conflict. As a computer scientist, she has watched in despair as artificial intelligence has enabled and exacerbated these atrocities. 

On Facebook, hate speech and incitements to violence related to the Ethiopian conflict have spread with deadly consequences, with the company’s algorithms and content moderators entirely unable or unwilling to stop it. For example, an investigation by The Continent last year, based on a trove of leaked Facebook documents, showed how the social media giant’s integrity team flagged a network of problematic accounts calling for a massacre in a specific village. But no action was taken against the accounts. Shortly afterwards, a massacre took place. 

The tide of the war was turned when the Ethiopian government procured combat drones powered by artificial intelligence. The drones targeted the rebel Tigray forces with devastating efficacy — and have been implicated in targeting civilians too, including in the small town of Dedebit, where 59 people were killed when a drone attacked a camp for internally displaced people. 

“That’s why all of us need to be concerned about AI,” says Gebru. “It is used to consolidate power for the powerful. A lot of people talk about AI for the social good. But to me, when you think of the current way it is developed, it is always used for warfare. It’s being used in a lot of different ways by law enforcement, by governments to spy on their citizens, by governments to be at war with their citizens, and by corporations to maximise profit.” 

Once again, Gebru is doing something about it. Earlier this year, she launched the Distributed Artificial Intelligence Research Institute (Dair). The clue that Dair operates a little differently is in the word “distributed”. 

Instead of setting up in Silicon Valley, Dair’s staff and fellows will be distributed all around the world, rooted in the places they are researching. 

“How do we ring the alarm about the bad things that we see, and how can we develop this research in a way that benefits our community?” These are the questions driving Dair’s work. Raesetje Sefala, the institute’s Johannesburg-based research fellow, puts it like this: “At the moment, it is people in the Global North making decisions that will affect the Global South.” 

As she explains it, Dair’s mission is to convince Silicon Valley to take its ethical responsibilities more seriously — but also to persuade leaders in the Global South to make better decisions and to implement proper regulatory frameworks. For instance, Gmail passively scans all emails in Africa for the purposes of targeted advertising, but the European Union has outlawed this practice to protect its citizens. 

“Our governments need to ask better questions,” says Sefala. “If it is about AI for Johannesburg, they should be talking to the researchers here.” 

So far, Dair’s team is small — just seven people in four countries. So, too, is the budget.

“What we’re up against is so huge, the resources, the money that is being spent, the unity with which they just charge ahead. It’s daunting sometimes if you think about it too much, so I try not to,” says Gebru. 

And yet, as Gebru’s Time magazine nod underscored, sometimes it is less about the money and more about the strength of the argument. On that score, Gebru and Dair are well ahead of Big Tech and their not quite all-powerful algorithms.

This article first appeared in The Continent, the pan-African weekly newspaper produced in partnership with the Mail & Guardian. It’s designed to be read and shared on WhatsApp. Download your free copy here.

Source: https://mg.co.za/africa/2022-06-09-timnit-gebru-and-the-fight-to-make-artificial-intelligence-work-for-africa/