Coders Are Fighting Bias in Facial Recognition Software


It turns out automated facial recognition software has some built-in bias. As use of and reliance on this technology expand, developers will need to find ways to combat that prejudice.

Via Wired:

Research released last month found that facial-analysis services offered by Microsoft and IBM were at least 95 percent accurate at recognizing the gender of lighter-skinned women, but erred at least 10 times more frequently when examining photos of dark-skinned women.

The danger of bias in AI systems is drawing growing attention from both corporate and academic researchers. Machine learning shows promise for diverse uses such as enhancing consumer products and making companies more efficient. But evidence is accumulating that this supposedly smart software can pick up or reinforce social biases.

The fix that finally made Gfycat’s facial recognition system safe for general consumption was to build in a kind of Asian-detector. When a new photo comes in that the system determines is similar to the cluster of Asian faces in its database, it flips into a more sensitive mode, applying a stricter threshold before declaring a match. “Saying it out loud sounds a bit like prejudice, but that was the only way to get it to not mark every Asian person as Jackie Chan or something,” Gan says. The company says the system is now 98 percent accurate for white people, and 93 percent accurate for Asians.
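The underlying idea is simple: if an incoming face falls near a cluster where the model is known to be less reliable, require a higher similarity score before declaring a match. Here is a minimal sketch of that cluster-conditional thresholding, assuming face embeddings compared by cosine similarity; the function names, thresholds, and radius are hypothetical illustrations, not Gfycat's actual code.

import numpy as np

DEFAULT_THRESHOLD = 0.60   # hypothetical baseline match threshold
STRICT_THRESHOLD = 0.75    # hypothetical stricter threshold for the sensitive cluster
CLUSTER_RADIUS = 0.50      # hypothetical similarity to the cluster centroid
                           # at which strict mode kicks in

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(query: np.ndarray,
             candidate: np.ndarray,
             sensitive_centroid: np.ndarray) -> bool:
    """Declare a match only if similarity clears a threshold that is
    raised when the query sits near the sensitive cluster."""
    near_cluster = cosine_similarity(query, sensitive_centroid) >= CLUSTER_RADIUS
    threshold = STRICT_THRESHOLD if near_cluster else DEFAULT_THRESHOLD
    return cosine_similarity(query, candidate) >= threshold

The design trade-off is that strict mode reduces false matches within the poorly modeled cluster at the cost of more missed matches there, which is consistent with the accuracy gap the company reports.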

Read more!