AI is getting political in Washington, and it’s about time
Amid rage brewing on both sides of the political spectrum over testimony by Christine Blasey Ford and Judge Brett Kavanaugh, it would be understandable if you missed some significant artificial intelligence news in Washington, D.C. in recent weeks.
This week, a group of four senators, two Democrats and two Republicans, put forward the AI in Government Act, which would conduct research on federal AI policy, work across agencies, and form an AI advisory board similar to the one created by the European Union earlier this year.
The bill has the support of Microsoft, Intel, and the Internet Association, an organization whose members represent some of the biggest tech companies in AI, including Amazon, Facebook, and Google.
Last week, Senator Kamala Harris (D-CA) and seven of her colleagues in the House and Senate signed and sent letters to the Federal Bureau of Investigation (FBI), the Federal Trade Commission (FTC), and the Equal Employment Opportunity Commission (EEOC) asking questions about their use of facial recognition software.
The FTC and EEOC were asked, among other things, how they address claims of discrimination that may result from algorithmic bias and whether they have received complaints about facial recognition in the workplace or as part of hiring practices.
The senators also asked the FBI for more information on false positive identifications and performance levels when scanning the faces of people of color, as well as information related to audits.
In 2016, the Government Accountability Office (GAO) recommended the FBI take a series of actions, such as an annual audit to gauge the accuracy of facial recognition systems used by the FBI.
The FBI uses the Next Generation Identification-Interstate Photo System (NGI-IPS), a database of 30 million images, and has access to state driver license photo databases that include 64 million people.
The FTC and EEOC were asked to respond by today, while the FBI was asked to respond by Oct. 1.
What seems to have been overlooked about these letters is that they were signed by Senators Kamala Harris (who also introduced the AI in Government Act), Cory Booker (D-NJ), Elizabeth Warren (D-MA), and Richard Blumenthal (D-CT).
In other words, virtually every signatory to the letters is considered a potential 2020 presidential candidate for the Democratic Party.
When citing potential AI discrimination against women of color, the letters relied heavily on Gender Shades, a research project by Joy Buolamwini of the MIT Media Lab and Microsoft Research’s Timnit Gebru.
Released in February, the study analyzed facial analysis software from leading companies, including Microsoft and Face++, and found disparities in accuracy based on gender and skin tone, with performance particularly lacking for women of color.
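The core of that kind of audit is simple: score the system separately for each demographic subgroup rather than reporting a single overall accuracy number. Below is a minimal sketch of such a disaggregated evaluation; the records and subgroup labels are illustrative placeholders, not data or code from the Gender Shades study.

```python
# Minimal sketch of a disaggregated accuracy audit in the spirit of Gender Shades.
# The records below are illustrative placeholders, not data from the study.
from collections import defaultdict

# Each record: (true_label, predicted_label, subgroup), where subgroup combines
# perceived gender and skin type, the axes Gender Shades evaluated.
records = [
    ("female", "female", "darker-skinned female"),
    ("female", "male",   "darker-skinned female"),
    ("male",   "male",   "darker-skinned male"),
    ("female", "female", "lighter-skinned female"),
    ("male",   "male",   "lighter-skinned male"),
    ("male",   "female", "darker-skinned male"),
]

totals, correct = defaultdict(int), defaultdict(int)
for true_label, predicted, subgroup in records:
    totals[subgroup] += 1
    correct[subgroup] += int(true_label == predicted)

# Reporting accuracy per subgroup makes disparities visible that an overall
# accuracy figure would hide.
for subgroup in sorted(totals):
    accuracy = correct[subgroup] / totals[subgroup]
    print(f"{subgroup:>25}: {accuracy:.0%} ({correct[subgroup]}/{totals[subgroup]})")
```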
Microsoft has since taken steps to improve its Face API, and company president Brad Smith took the unusual step of asking Congress to regulate facial recognition software.
Both Gebru and Buolamwini have been at an assortment of tech and AI conferences this year to discuss their findings and related work.
Speaking at the C2 conference in Montreal in May, Gebru pointed out that failure to address the disproportionate impact of such systems can be dangerous and carry decades-long ramifications.
As an example, she pointed to the auto industry. Because female crash test dummies were only introduced relatively recently, women have been more likely to be injured in car accidents for decades. In 2012, the National Highway Traffic Safety Administration (NHTSA) required automakers to issue safety ratings for both men and women.
Gebru and others from Microsoft Research recommend datasheets, like those common in the electronics industry, to bring standardization and documentation to datasets, pre-trained models, and AI systems.
“Currently, any model can be used for anything by anyone. There are no rules that say if you’re going to use facial recognition by law enforcement, what kind of properties should it have? There are no rules that say whoever is using these automated tools needs to let us know why they’re using it. There are no standards [and] there is no documentation,” she said.
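To make the idea concrete, here is a rough sketch of what that kind of structured documentation could look like in code. The fields and the example values are my own illustration, not the official template from the researchers’ "Datasheets for Datasets" proposal.

```python
# A rough sketch of structured dataset documentation in the spirit of the
# datasheets Gebru and colleagues advocate. Field names and values are
# illustrative, not the official template.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetDatasheet:
    name: str
    intended_uses: List[str]            # what the dataset or model is meant for
    out_of_scope_uses: List[str]        # uses the creators advise against
    collection_process: str             # how the data was gathered
    demographic_composition: str        # who is (and is not) represented
    known_limitations: List[str] = field(default_factory=list)

# Hypothetical example entry.
sheet = DatasetDatasheet(
    name="example-face-dataset",
    intended_uses=["benchmarking facial analysis accuracy across subgroups"],
    out_of_scope_uses=["real-time identification by law enforcement"],
    collection_process="Images of public figures gathered with documented provenance.",
    demographic_composition="Balanced by gender and skin type where possible.",
    known_limitations=["Small sample; not representative of all populations."],
)
print(sheet)
```

The point of such a record is exactly what Gebru describes: whoever uses an automated tool can see what it was built for, what it was tested on, and where it should not be used.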
Underscoring research by Gebru, Buolamwini, and other scholars cited by the senators is analysis by the American Civil Liberties Union (ACLU), released in July, that found Amazon’s Rekognition software misidentified nearly 30 members of Congress as criminals. Among those incorrectly labeled criminals by Amazon, nearly 40 percent were people of color.
For context, about 19 percent of the members of the 115th Congress are African-American, Asian-American, or Hispanic.
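The ACLU ran member photos against a collection of arrest photos using Rekognition’s default confidence threshold. The snippet below is a hedged approximation of that kind of test using the AWS SDK; the collection name, image file, and threshold handling are placeholders, and this is not the ACLU’s actual code.

```python
# A hedged approximation of an ACLU-style test: searching a photo against a
# face collection with Amazon Rekognition. Requires AWS credentials and an
# existing face collection; names below are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def search_face(image_path: str, collection_id: str, threshold: float = 80.0):
    """Return faces in the collection that match the supplied photo."""
    with open(image_path, "rb") as f:
        response = rekognition.search_faces_by_image(
            CollectionId=collection_id,      # e.g. a collection of arrest photos
            Image={"Bytes": f.read()},
            FaceMatchThreshold=threshold,    # lower thresholds yield more spurious matches
            MaxFaces=5,
        )
    return response.get("FaceMatches", [])

# Hypothetical usage: any returned match above the threshold counts as an
# identification, which is why the choice of threshold matters so much in
# policing contexts.
matches = search_face("member_of_congress.jpg", "mugshot-collection")
for match in matches:
    print(f"Matched face {match['Face']['FaceId']} at {match['Similarity']:.1f}% similarity")
```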
The AI in Government Act is the latest, but not the first, piece of legislation introduced in Congress to shape federal policy on artificial intelligence. Regardless of whether it passes or how each of the agencies questioned by the senators responds, it seems clear that bias in AI and federal AI policy are growing issues in Washington.
AI will drive efficiencies in businesses and make Gmail much easier to use, but it’s also going to eliminate a number of jobs and impact matters of justice, equality, and opportunity.
And as policy around the deployment of AI by federal agencies, and around how those agencies respond to claims of discrimination by opaque algorithms, is clarified and implemented, those decisions should not only shape domestic politics but also inform the U.S. government’s stance on AI systems deployed by institutions outside its borders. As Dr. Safiya Noble has said, artificial intelligence will be a major human rights issue of the 21st century.
For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to bookmark our AI Channel.
Thanks for reading,
Khari Johnson
AI Staff Writer
P.S. Please enjoy this World Economic Forum video about how AI may shake up the job market in China.
From VB
Apple tests predictive keyboard that taps AI to read whole documents
Apple doesn’t generally let its employees blog, but its Machine Learning Journal is an exception, spotlighting the quiet labors and current projects of its ML research teams. The latest entry previews potential future improvements to Apple’s QuickType predictive keyboard, based on a major challenge: teaching a machine to use all of a document’s content to guess […]
Pymetrics raises $40 million for AI-powered job recruitment
Pymetrics today announced it has raised $40 million to expand its work using a combination of neuroscience and artificial intelligence to help companies recruit the right job candidates. The funding will allow Pymetrics to continue its product development and expansion in markets outside the United States. More than 60 companies use Pymetrics in their hiring […]
Google’s DeepMind and Unity join forces to create simulated environments for AI training
Google’s DeepMind and Unity Technologies, creator of one of the best-known game development platforms, today announced plans to collaborate in order to create virtual worlds and move AI forward. The Unity engine is currently used by 50 percent of mobile games and 60 percent of AR/VR experiences. The Unity ML Toolkit for training intelligent agents […]
Deep North’s school security system uses AI and cameras to detect threats
Deep North (formerly VMAXX), a Silicon Valley startup with offices in China and Sweden, hopes to leverage artificial intelligence (AI) to prevent violence and “other safety issues” facing schools. Today it announced a program that will offer a select number of institutions the opportunity to field-test its threat-detecting object recognition and computer vision technology. It’s […]
Google uses AI to make AMP Stories, Google Images, and video search better
Artificial intelligence (AI) underlies many, if not most, of Google Search’s features. It’s how Google News personalizes podcast, video, and article recommendations, and how Google’s lightweight Search app for Android Go reads the text of websites in more than two dozen languages. And the Mountain View company is intent on expanding its influence. At an event […]
Microsoft launches Cortana Skills Kit for enterprise developers
Microsoft today introduced the Cortana Skills Kit for Enterprise to give businesses the power to create custom voice apps for their employees, as well as for users. At launch, the skills kit for enterprise will be available by invite only. Invitations for companies and developers will be made available in the future. Also introduced today […]
Beyond VB
Germany launches world’s first autonomous tram in Potsdam
The Guardian goes for a ride on the new AI-driven Combino vehicle developed by Siemens (via The Guardian)
Machine learning confronts the elephant in the room
A visual prank exposes an Achilles’ heel of computer vision systems: Unlike humans, they can’t do a double take. (via Quanta Magazine)
Can the EU become another AI superpower?
Taking on America and China will be hard. (via The Economist)
Google is using AI to issue flood warnings in India
Google is now using AI to detect floods and issue warnings, starting in India, to help people better prepare. (via The Next Web)