There are real bias & privacy dangers that come along with the increased use of facial recognition
The use of artificial intelligence and biometrics is becoming more common in our society, and computers are making more autonomous decisions that affect us all. It is not really surprising that computer software is proving to be just as biased as its human programmers.
MIT researcher Joy Buolamwini wrote that artificial intelligence can "reinforce bias and exclusion, even when it's used in the most well-intended ways."
Her research found: "On the simple task of guessing the gender of a face, all companies' technology performed better on male faces than on female faces. They especially struggled on the faces of dark-skinned African women."
Around the world facial recognition is being used to pay for products and services as well as identifying shoplifters
Although we haven't yet implemented facial recognition technology like the 'smile & pay' system being used at KFC in China, our largest supermarket company has started using facial recognition software to identify shoplifters.
Foodstuffs, which runs the New World, Pak'nSave and Four Square brands, has admitted to using the technology in some North Island stores, though it has declined to name which ones.
Most of us have tried the fingerprint scanners or iris identification on our smartphones and laptops, but this is only the start of our accepted use of biometrics. As it becomes more common in society, we need to be aware that there are real bias and privacy dangers that shouldn't be ignored.
Insight EDS can help keep your people, assets & property safe