Microsoft calls for laws to prevent bias in facial recognition AI
Microsoft Corp. called for new legislation to govern artificial intelligence software for recognizing faces, advocating for human review and oversight of the technology in critical cases.
“This includes where decisions may create a risk of bodily or emotional harm to a consumer, where there may be implications on human or fundamental rights, or where a consumer’s personal freedom or privacy may be impinged,” Microsoft President and Chief Legal Officer Brad Smith wrote in a blog post published in conjunction with a speech on the topic at the Brookings Institution think tank. Sellers of the technology must “recognize that they are not absolved of their obligation to comply with laws prohibiting discrimination against individual consumers or groups of consumers,” he added.
Smith also wants laws to require sellers of the products to explain what they do clearly and open up their services to testing by outside parties for accuracy and bias. Earlier Thursday, advocacy group AI Now called for greater regulation and regular audits of AI tools used by governments.
There’s rising concern over the use of facial recognition software by law enforcement, border security, the military and beyond. Research has shown that some of the most popular products make mistakes and perform worse on people with darker skin. Microsoft, Amazon.com Inc. and Alphabet Inc.’s Google have also faced protests from employees and advocacy groups over the idea of selling AI software to government agencies or the police. Smith in July called on lawmakers to regulate facial recognition software, without offering specific laws he’d like to see.
“The facial recognition genie, so to speak, is just emerging from the bottle,” Smith said. “Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.”
Microsoft thinks current facial recognition technology increases the risk of biased outcomes, can lead to new intrusions into privacy and can encroach on democratic freedoms if used for mass government surveillance, the executive added.
He wants initial legislation now, even if it doesn’t answer all questions, rather than waiting years for more comprehensive legal approaches. Smith said in an interview that he’s spoken to state and federal legislators and “there is strong interest.” He hopes laws can be in place sometime next year.
Microsoft is also asking for laws that require companies that use facial recognition technology in public areas to place “conspicuous notice” to make customers and passersby aware, Smith said.
In calling for third-party audits of AI products, Smith took a swipe at rivals, without naming them. “When important advocacy organizations have tried to perform tests, they’ve almost immediately been met by rejections and criticism by some providers who claim that the testing is deficient,” he said.
In July, Amazon responded to an ACLU critique of its Rekognition service by saying the advocacy group used incorrect settings for the test.
Microsoft plans to clearly document what its AI software can and cannot do, while explicitly prohibiting use of the products for unlawful discrimination. The company also will not provide the technology to customers that it believes could undermine democracies, Smith said.
“We must ensure that the year 2024 doesn’t look like a page from the novel 1984,” Smith wrote.