Timnit Gebru on misconceptions about artificial intelligence

A little over a year has passed since Timnit Gebru was fired from Google.

The 38-year-old Ethiopian-American researcher and former co-lead of the company’s Ethical AI unit believes she was pushed out for working on an academic paper that raised red flags about using large language models in Google’s quest to develop “superintelligent” AI systems. The research highlighted the ways AI can misinterpret language on the internet, which can lead to “stereotyping, denigration, increases in extremist ideology, and wrongful arrest,” as Gebru and her co-authors put it.

Tired of tussling with the internal politics of mega corporations, Gebru has struck out on her own. She recently launched an independent practice called the Distributed AI Research Institute, or DAIR—a homophone of “dare”—with funding from the MacArthur Foundation, the Ford Foundation, the Kapor Center, the Open Society Foundations, and the Rockefeller Foundation. The mission: to encourage tech companies to consider all perspectives—especially those of marginalized groups—when designing products and services. Gebru is also determined to make AI research understandable and useful to the general public. She’s currently working on a project that seeks to establish a transparency standard for machine learning development.

Gebru talked to Quartz about her vision for the institute and how she expects it to challenge some deeply entrenched practices in Silicon Valley.

This interview has been condensed and edited for clarity.

Quartz: Why was this the right moment for you to start your own initiative?

Timnit Gebru: I’ve thought about starting an independent research institute for a long time. I would have done it slowly, maybe first on the side, but with the way that I got fired and the way that it all blew up, I could not imagine going to another large company—even a small company. I’ve worked at several companies and the idea of doing that fight again—I just honestly couldn’t do it. I didn’t have it in me. This was the only thing I could really imagine doing next.

How do you reflect now on your dismissal from Google?

It really shows how little they thought of me and how little they respected me. It gives you a peek into how they treated me internally. If they were even a little nervous about litigation or PR, I don’t feel like they would have done that.

(Editor’s note: Google declined to comment directly on Gebru’s departure.)

Is your institute a kind of counterpoint to Silicon Valley’s practices? What practices do you espouse?

I’m trying to create a small, viable institute and I don’t want to just grow for the sake of growth. Caring about people’s health and well-being is one of the values of DAIR. In AI, there’s so much bravado about how much people work. I just don’t believe that’s necessary. For our institute, I only want people to do what they can do while living their lives. I want to do that for myself too.

I heard on the news that Chinese tech workers were revolting and pushing back on these crazy hours that they’re expected to work—and that is huge. I would love to see more of it because I think we all get brainwashed, whether it is by our government or our tech executives, about this arms race in tech. Ultimately, it might be great for the executives, but not for the average citizen. I work with our research fellow Raesetje Sefala and sometimes I remind her to enjoy her weekend—we’re not doing surgery, you know. Perhaps if I had started a company 15 years ago when I was in my 20s, I might have had a different attitude about it.

These big tech companies are run by highly narcissistic men, and the media and popular culture tend to glorify how they are, even if they are extremely disrespectful to people and drive them to the edge.

How did you land on the name “Distributed AI Research Institute”?

“Distributed” was the first word that came to my mind when I was thinking about having a research institute. When I worked at Google, the ethical AI team was very distributed—we had people in New York, Montreal, Johannesburg, Zurich, and Accra.

That’s really important, because there were points of view and expertise you would never have had without a distributed team. I also didn’t want to uproot people from their communities, because where they’re situated has a lot to do with what knowledge they have and the perspective they offer. Generally speaking, a distributed structure is more robust because you can’t just point to one person or one thing.

In a recent Guardian op-ed, you outline a system where big tech controls philanthropy and influences the government’s agenda. Ultimately you argue that an independent source of funding is needed for this type of research to thrive. Where could it come from?

It could be the National Science Foundation getting more funding for AI research or maybe a separate National Artificial Intelligence Foundation that can fund critical work on AI from many different disciplines. What I caution people about is this: A lot of times, the money that the government gives out goes to the usual suspects who brought us here in the first place.

Your research has been a beacon for marginalized communities often ignored by Silicon Valley. How do you guard against tunnel vision in your work?

One way is through the distributed nature of the institute. We have to make sure to hire people with different points of view. Let me tell you that the most well-meaning people—people I really admire—still think of a white person when it comes down to who they want to hire.

To break out of that, you have to go to different conferences, different communities, and constantly ask yourself who you might be excluding. We have to ask who are the people we don’t really know. There will always be limitations and tunnel vision in some form, but I think you can combat that through self-reflection and taking a proactive approach.

What’s the biggest misconception about artificial intelligence?

For me, the biggest misconception is that it’s discussed in terms of fate—like it’s this external being that we have no control over. People need to remember that AI is something human beings create and something that we can shape in a way that doesn’t destroy society.

Source: https://qz.com/work/2099933/timnit-gebru-on-misconceptions-about-artificial-intelligence/