This article originally appeared in Computing on 9th June 2020
IBM is quitting the controversial facial recognition software market over concerns that the technology could be used to promote racial injustice and discrimination.
In a letter to members of the US Congress, IBM CEO Arvind Krishna said that the company would no longer sell general-purpose facial recognition software and would also oppose the use of such technology for racial profiling, mass surveillance, violations of basic human rights or any purpose “which is not consistent with our values and principles of trust and transparency”.
IBM’s decision to quit the facial recognition market comes at a time when the US faces countrywide demonstrations over the death of George Floyd, a black man who died while in police custody in Minneapolis.
Several lawmakers and government officials in the US have called on the government to introduce reforms to address police brutality and racial injustice.
“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” Krishna said.
He added that vendors and users of AI-based systems have a collective responsibility to ensure that such systems are “tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported”.
Since its inception, facial recognition technology has faced intense criticism from lawmakers and privacy advocates in different countries. Critics cite multiple studies finding that the technology can suffer from bias along lines of race, age and ethnicity, potentially resulting in abuses of human rights. They further argue that the technology has the potential to become an invasive form of surveillance.
Earlier this year, Clearview AI came under heavy scrutiny after it emerged that its facial recognition tool, with over 3 billion images compiled from scraping social networking websites, was being used by a number of private firms and law enforcement agencies.
Clearview has since faced multiple privacy lawsuits in the US.
In January, Facebook was also ordered to pay $550 million to settle a class-action lawsuit over its unauthorised use of facial recognition technology.
In March, the Metropolitan Police’s facial recognition deployment in Oxford Circus led to the wrongful apprehension of seven innocent members of the public who were incorrectly identified by the system.
Last year, the UK Information Commissioner’s Office (ICO) issued a warning to police over the use of live facial recognition and also called for a statutory code of practice to be introduced to govern police use of live facial recognition.