For Raising the Standard


Joy Buolamwini – Computer scientist, artist, and founder, Algorithmic Justice League

 

By Amy Farley

Joy Buolamwini got Jeff Bezos to back down.

Amazon announced in June that it was issuing a moratorium on police use of its controversial facial recognition software, called Rekognition, which it had sold to law enforcement for years. The move marked a remarkable retreat for Amazon’s famously stubborn chief executive, and he wasn’t alone. IBM pledged that week to stop developing facial recognition entirely, and Microsoft committed to withholding its system from police until federal regulations were passed.

These decisions occurred amid widespread international protests over systemic racism, sparked by the killing of George Floyd at the hands of Minneapolis police. But the groundwork had been laid four years earlier, when Joy Buolamwini, then a 25-year-old graduate student at MIT’s Media Lab, began looking into the racial, skin type and gender disparities embedded in commercially available facial recognition technologies. Her research culminated in two groundbreaking, peer-reviewed studies, published in 2018 and 2019, that revealed how systems from Amazon, IBM, Microsoft and others were unable to classify darker female faces as accurately as those of white men – effectively shattering the myth of machine neutrality.

Today, Buolamwini is galvanising a growing movement to expose the social consequences of artificial intelligence. Through her nearly four-year-old nonprofit, the Algorithmic Justice League (AJL), she has testified before lawmakers at the federal, state, and local levels about the dangers of using facial recognition technologies with no oversight of how they’re created or deployed. Since George Floyd’s death, she has called for a complete halt to police use of face surveillance; many companies, such as Clearview AI, are still selling facial analysis to police and government agencies. “We already have law enforcement that is imbued with systemic racism,” she says. “The last thing we need is for this presumption of guilt of people of colour, of black people, to be confirmed erroneously through an algorithm.”

Buolamwini was inspired to investigate algorithmic bias when she was coding a project at MIT and the off-the-shelf computer-vision technology she was using had trouble detecting her face. When she put on a white mask, obscuring her features entirely, the computer finally recognised she had a “face.”

The problem was familiar. As an undergraduate at Georgia Tech, Buolamwini had to “borrow” her roommate’s face to teach a robot to play peek-a-boo. Later, she encountered robots at a start-up in Hong Kong that had similar issues. At the time, Buolamwini thought that the tech companies would soon fix the problem. At MIT, she realised they didn’t even know there was a problem.

For her Master’s thesis, she began testing facial-analysis applications with photos of parliamentarians from Europe and Africa, work that became the basis for her “Gender Shades” paper. Co-authored with Timnit Gebru (who is now part of the Ethical Artificial Intelligence Team at Google), the study showed that the error rates for widely used systems, including those of IBM and Microsoft, were significantly higher for darker-skinned female faces than for lighter-skinned male faces – up to 34% higher in IBM’s case. She and researcher Deborah Raji, then an MIT intern and today a fellow at the AI Now Institute, followed up with a study last year that showed some improvement in the algorithms of the companies in “Gender Shades” – and revealed troubling flaws in Amazon Rekognition, which misclassified women as men 19% of the time, and darker-skinned women as men 31% of the time.

Within a day of receiving the “Gender Shades” findings, in 2018, IBM committed to addressing AI bias. Microsoft’s chief scientific officer, Eric Horvitz, says: “It was immediately all hands on deck for us.” Amazon responded by trying to discredit the study, prompting more than 70 AI researchers to publish a letter last April supporting the research and calling on the company to stop selling the software to law enforcement.

With these studies, Buolamwini helped found a new field of academic research. “She was the first person to realise that this problem exists, to talk about it, and do academic work around it until the powers that be took notice,” says Kade Crockford, the director of the Technology for Liberty Program at the ACLU of Massachusetts, who worked with Buolamwini to advocate for Boston’s recent ban on using facial recognition for surveillance (see page 64). At the end of 2019, the Commerce Department’s National Institute of Standards and Technology completed its first audit of commercially available facial recognition technology. It tested the algorithms of nearly 100 companies and found false-positive rates for one-to-one matching of Asian and African American faces that were between 10 and 100 times higher than for those of Caucasian faces.

Buolamwini’s research speaks for itself. But by lending her own voice to the cause, she has given it more urgency. Her 2016 TED Talk, in which she introduced the AJL, has been viewed 1.3 million times, and she is the subject of the new documentary Coded Bias. “Joy has the rare ability to articulate not just the science, but why it matters,” says Shalini Kantayya, who directed Coded Bias. This makes her a powerful advocate. When Buolamwini testified before the House Oversight Committee last spring, she revealed not just the problems embedded in algorithms, but the people and communities who are harmed by them, such as a Muslim college student who was misidentified as a terrorist suspect and a group of Brooklyn tenants whose landlord tried to install a facial recognition entry system in their rent-stabilised buildings.

As the Black Lives Matter protests took hold during the summer, Buolamwini used her platform to call on companies to donate at least $1 million each to organisations such as Data for Black Lives and Black in AI that advance racial justice in the tech sector. The AJL released a white paper exploring the concept of an FDA-like authority to oversee facial recognition technologies, and the organisation is creating a set of tools to help people report harmful AI systems.

Buolamwini draws further attention to AI’s pitfalls through spoken-word pieces. Her 2018 video, “AI, Ain’t I a Woman?”, highlights iconic Black women, from Sojourner Truth to Michelle Obama, who are misclassified by AI. When the EU Global Tech Panel wanted to alert defence ministers in 2019 to the dangers of using image-recognition technology to guide lethal autonomous weapons, it played them the video. “Creating art gets the conversation going in a way that might not be achieved with an hour-long talk or a research paper,” Buolamwini says.

Parris Goebel –  Choreographer

If Rihanna’s groundbreaking Savage X Fenty lingerie fashion show last fall – a music-and-dance extravaganza full of inclusivity and body positivity – made Victoria’s Secret seem like a relic, her secret weapon was choreographer Parris Goebel. Rihanna relied on the New Zealand native to create a spectacle (now streaming on Amazon) in which a diverse, 100-person group of models, activists, and dancers ferociously twerked, strutted, and celebrated across the stage. “There are so many stigmas around who should show themselves in lingerie, that to put women of all shapes and sizes on a world stage in lingerie was a kick to all those brands who have made us feel like we’re not enough,” says Goebel, who has choreographed for the likes of Ariana Grande and Justin Bieber, directed music videos, and worked with Jennifer Lopez on February’s Super Bowl half-time show, which also showcased Goebel’s award-winning dance crew, the Royal Family. Goebel – who will soon direct a film adaptation of her dance production “Murder on the Dance Floor” for Sony – is committed to giving dancers the credit they deserve. “Sometimes we aren’t valued, especially when it comes to pay. So I try to make as much noise as possible within my artistry. People have no choice but to look at what (my dancers) are doing.”

Melanie Bender –  General manager, Versed

A skincare line appeared in Target and a handful of other retailers last year that sold for drugstore prices but contained coveted, high-performance ingredients such as vitamin C, squalane and glycolic acid. The brand, called Versed, says that within nine months it was outselling beauty-aisle mainstays such as Olay and Burt’s Bees. A spinout from Clique Brands (home of the Who What Wear website and clothing line), Versed was spearheaded by retail veteran Melanie Bender, who was determined to buck the industry trend of adding “marketing ingredients” to products – just enough of a key element to list it on a label. “It’s sneaky,” Bender says. “There may not be enough of that ingredient for it to actually work.” She and Clique’s in-house R&D team joined with several labs to create products that contain high doses of these ingredients but still mostly cost less than $20 each. They also avoided more than 1,300 known toxins commonly used in beauty products, allowing Versed to become one of the few mass-market skincare brands certified by the Environmental Working Group, which verifies product safety.

Jeffrey Whitford –  Head of global corporate responsibility, MilliporeSigma

“Sustainability in science is one of the final frontiers,” says Jeffrey Whitford, head of corporate responsibility at the Merck-owned MilliporeSigma, which provides chemicals to industries including pharma, cosmetics and food. In 2019, Whitford’s team launched Cyrene, a safer, greener alternative to two solvents typically used in plastics, fibers and adhesives. Cyrene, made from renewable cellulose, is so promising that the EU awarded the company a grant to build a Cyrene production plant in France. Whitford also led the creation of DOZN, an evaluation system that compares the sustainability of chemicals, including environmental hazards they pose. Last year, the company debuted a free web-based tool called DOZN 2.0 that can be used by any scientist.

Mary D. Nichols –  Chair, California Air Resources Board

 

Shortly after becoming president, Donald Trump offered the Big Three automakers a deal: He’d roll back Obama-era emissions regulations (requiring automakers to meet an average fuel efficiency of 54 miles per gallon by 2025) if they promised to invest in US manufacturing. While most climate advocates lamented this reversal, Mary D. Nichols, an environmental lawyer and chairperson of the California Air Resources Board, understood that car companies selling electric and hybrid vehicles would prefer to support a policy that would mitigate climate change. “They were searching for a way they could show (consumers) they weren’t siding with the Trump rollbacks,” she says. First, she tried to collaborate with the US Environmental Protection Agency on a new emissions standard. When that failed, she took a novel approach. California had already established its own emissions rules in 2012, but to give automakers more time to meet those goals, she struck a deal with them directly. Honda, Volkswagen, Ford, BMW of North America, Rolls-Royce and Volvo all agreed to reduce greenhouse gases by 3.7% each year for five years, giving them an extra year to reach the Obama-era goal. Since then, 13 other states have agreed to uphold these standards. Thanks to Nichols’s efforts, 40% of the US car market is now pushing electric vehicles into the future.
