IBM to kill its own facial recognition technology


Keumars Afifi-Sabet

9 Jun, 2020

IBM has decided to "sunset" its general-purpose facial recognition and analysis software suite over ethical concerns following a fortnight of Black Lives Matter protests.

Despite putting significant effort into developing its AI-powered tools, the cloud giant will no longer distribute these systems for fear they could be used for purposes that go against the company's principles of trust and transparency.

Specifically, there are concerns the technology could be used for mass surveillance, racial profiling and violations of basic human rights and freedoms. The company also now deplores, in principle, the use of facial recognition for such purposes, including by rival vendors.

"We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies," CEO Arvind Krishna wrote in a letter to the US Congress.

"Artificial intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported."

The announcement represents a major shift, given the company previously ploughed considerable money and effort into building out these capabilities, occasionally courting controversy in the process.

In March 2019, for example, IBM was called out for using almost a million photos from photo-sharing site Flickr to train its facial recognition algorithms without the consent of the subjects. Those in the pictures weren’t advised the firm was going to use their images to help determine gender, race and other identifiable features, such as hair colour.

Several months before that, the company was found to have been secretly using video footage collected by the New York Police Department (NYPD) to develop software that could identify individuals based on distinguishable characteristics.

IBM had created a system that allowed officers to search for potential criminals based upon tags, including facial features, clothing colour, facial hair, skin colour, age, gender and more. Overall, it could identify more than 16,000 data points, rendering it extremely accurate in recognising faces.
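As a rough illustration of what such a tag-based search involves, the sketch below filters detection records by attribute tags. The data model, tag names and matching logic here are assumptions made purely for illustration; they are not IBM's actual schema or software.

    # Hypothetical sketch of tag-based search over detection records.
    # Tag names and the data model are illustrative assumptions, not IBM's schema.
    from dataclasses import dataclass, field

    @dataclass
    class Detection:
        """One person spotted in camera footage, annotated with attribute tags."""
        camera_id: str
        timestamp: float
        tags: dict = field(default_factory=dict)

    def search(detections, **wanted):
        """Return detections whose tags match every requested attribute."""
        return [d for d in detections
                if all(d.tags.get(k) == v for k, v in wanted.items())]

    footage = [
        Detection("cam_01", 0.0, {"clothing_colour": "red", "facial_hair": "beard"}),
        Detection("cam_02", 60.0, {"clothing_colour": "blue", "facial_hair": "none"}),
    ]

    # Example query: red clothing and a beard.
    matches = search(footage, clothing_colour="red", facial_hair="beard")
    print(len(matches))  # 1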

While the use of facial recognition in law enforcement is not uncommon, it has run into legal blockades, with jurisdictions such as San Francisco banning its use altogether.

Police forces in the UK, meanwhile, have been trialling such systems, but the Information Commissioner's Office (ICO) has effectively neutered these plans by urging forces to assess data protection risks and ensure there's no bias in the software being used.

In addition to permanently withdrawing its facial recognition technology, IBM has called for a national policy that encourages the use of technology to bring greater transparency and accountability to policing, such as body cameras and data analytics techniques.

Until now, a number of other major companies have been much in step with IBM, developing their own AI-powered facial recognition capabilities, which have often courted controversy of their own.

AWS has come under fire over alleged racial and gender bias in its highly sophisticated Rekognition technology. In May 2019, for example, the company's shareholders voted down an internal revolt over the sale of Rekognition to police by an overwhelming 97% majority.

The claims were based on MIT research which found that Rekognition mistakenly identified pictures of women as men, with the error rate climbing to 31% for pictures of darker-skinned women. This compared with an error rate of 1.5% for Microsoft's software.
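To illustrate what such a bias audit measures, the sketch below computes a misclassification rate overall and then per subgroup. The records are invented for illustration; this is not the MIT study's data or methodology.

    # Minimal sketch of a per-group error-rate calculation in a bias audit.
    # The records below are made-up examples, not real study data.
    def error_rate(records):
        """Fraction of records where the predicted gender is wrong."""
        wrong = sum(1 for r in records if r["predicted"] != r["actual"])
        return wrong / len(records)

    results = [
        {"actual": "female", "predicted": "male",   "skin_type": "darker"},
        {"actual": "female", "predicted": "male",   "skin_type": "darker"},
        {"actual": "female", "predicted": "female", "skin_type": "darker"},
        {"actual": "female", "predicted": "female", "skin_type": "lighter"},
    ]

    print("overall:", error_rate(results))
    for group in ("darker", "lighter"):
        subset = [r for r in results if r["skin_type"] == group]
        print(group + ":", error_rate(subset))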