AI has a Big Tech problem

A handful of companies dominates not only how artificial intelligence is developed but also how it is critiqued. It’s time for that to change.
By Katharine Schwab

Timnit Gebru—a giant in the world of AI and then co-leader of Google’s AI ethics team—was pushed out of her job in December. Gebru had been fighting Google over a research paper that she had co-authored, which explored the risks of the AI models the search giant uses to power its core products, including almost every English query on Google. The paper highlighted the potential biases (racial, gender, and others) of these language models, as well as the outsize carbon emissions required to compute them.

Google wanted the paper retracted; Gebru refused. After the company abruptly announced her departure, Google AI chief Jeff Dean insinuated that her work was not up to snuff—despite Gebru’s credentials. The backlash was immediate. Thousands of Googlers and outside researchers signed a protest letter and called out Google for attempting to marginalize its critics, particularly those from underrepresented backgrounds.

A champion of diversity and equity in AI, Gebru is a Black woman and was one of the few in Google’s research organization. In the aftermath, Alphabet CEO Sundar Pichai pledged an investigation, the results of which have not yet been released. (Google declined to comment for this story.)

To many who work in AI ethics, Gebru’s ouster was a shock but not a surprise, and served as a stark reminder of how Big Tech dominates their field. A handful of giant companies are able to use their money to direct the development of AI and decide who gets to critique it.

At stake is the equitable development of a technology that already underpins many of our most important automated systems.

From credit scoring and criminal sentencing to healthcare access and whether you get a job interview, AI algorithms are making life-altering decisions for people, with no oversight or transparency. The harms these models can cause out in the world are becoming apparent: false convictions based on biased facial recognition technology, discriminatory hiring systems, racist predictive policing dashboards. For AI to work for all members of society, the power dynamics across the industry have to change. The people most likely to be harmed by algorithms—those in marginalized communities—need a say in their development. “If the right people are not at the table, it’s not going to work,” Gebru says. “And in order for the right people to be at the table, they have to have power.”

Big Tech’s influence over AI ethics is near total. It begins with companies’ ability to lure top minds to industry research labs with the promise of prestige, computational resources, in-house data, and cold hard cash. And it extends throughout academia, to an extraordinary degree. A 2020 study of four top universities found that a majority of AI ethics researchers whose funding sources are known have accepted money from a tech giant. Indeed, one of the largest pools of money dedicated to AI ethics is a joint grant funded by the National Science Foundation and Amazon, presenting a classic conflict of interest. “Amazon has a lot to lose from some of the suggestions that are coming out of the ethics in AI community,” says Rediet Abebe, a computer science professor at UC Berkeley who cofounded the organization Black in AI with Gebru to provide support for Black researchers in an overwhelmingly white field. Perhaps unsurprisingly, nine of the first 10 principal investigators awarded grant money from the NSF-Amazon pool are male, and all are white or Asian. (Amazon did not respond to a request for comment.)

Meanwhile, it’s not clear whether in-house AI ethics researchers have any kind of say in what their employers are developing. Large tech companies are typically more focused on shipping products quickly than on understanding the potential impacts of their AI. Many watchdogs believe that Big Tech’s investments in AI ethics—including in-house teams like the one Gebru used to lead—are little more than PR. “This [problem] is bigger than just Timnit,” says Safiya Noble, a professor at UCLA and the cofounder and co-director of the Center for Critical Internet Inquiry. “This is about an industry broadly that is predicated upon extraction and exploitation and that does everything it can to obfuscate that.”

Noble, whose research center does not take money from Big Tech, is part of a growing network of independent scholars and activist organizations trying to hold Big Tech accountable for algorithmic harms, particularly to vulnerable communities. Efforts such as scholar-activist Joy Buolamwini’s Algorithmic Justice League (AJL), Data for Black Lives, the Stop LAPD Spying Coalition, and the Our Data Bodies Project are sharing impacted people’s stories, compelling companies to amend their algorithms, and pushing for AI regulation at all levels of government.

They’ve made some progress. Thanks to the work of AJL and others, several prominent companies, including Amazon, last year issued moratoriums on selling facial recognition algorithms to police, and bans on police use of facial recognition technology have been spreading across the country. The Stop LAPD Spying Coalition successfully sued the police to force the release of information about its predictive policing tactics, revealing how they impact local communities. Cathy O’Neil, data scientist and founder of the algorithmic auditing consultancy ORCAA, credits activists with changing the conversation so that AI bias is seen as “a human problem, rather than some kind of a technical glitch.”

It’s no coincidence that Black women are leading many of the most effective efforts, which focus more on community organizing than abstract white papers. “Black women have dealt with these stereotypes their entire lives and experienced products not working for them,” says Deborah Raji, a fellow at Mozilla who has collaborated with groups such as AJL. “You’re so close to the danger that you feel incredibly motivated and eager to address the issue.”

Still, this work takes resources—typically the largesse of foundations and donors—and time, which those affected by biased AI can’t always spare. Change can also come from within: tech workers have used internal mechanisms to force change at Google, from getting the company to walk away from its drone image-analysis project to ending its practice of forced arbitration in sexual harassment cases.

Individuals within Big Tech firms can not only protest when their products might cause harm; they can also push to ensure that diverse teams are building and auditing these products in the first place—and they can band together.

A month after Gebru was pushed out of Google, hundreds of workers at its parent company, Alphabet, announced that they were unionizing. For Alex Hanna, a sociologist and senior researcher on Gebru’s former team, the Alphabet Workers Union is crucial for building more equitable tech. “The union is a strong counterweight, especially for AI workers who want to speak up,” she says. Hanna knows that building this force will take time. Not everyone is so patient. “These companies have so much power,” says Mia Shah-Dand, a former Google employee turned activist who runs the nonprofit Women in AI Ethics to increase representation in the field. “Someone has to dismantle them, whether it’s inside or outside.”
