While the pandemic has shifted the focus of people and authorities to fighting the coronavirus, some technology companies are using the situation as a pretext to push unproven artificial intelligence (AI) tools into workplaces and schools, according to a report in the journal Nature. Amid serious debate over the potential for misuse of these technologies, several emotion-reading tools are being marketed for the remote surveillance of children and workers, claiming to predict their emotions and performance. Vendors say these tools can capture emotions in real time and give organisations and schools a better understanding of their employees and students, respectively.

For example, one of the tools decodes facial expressions, and places them in categories such as happiness, sadness, anger, disgust, surprise and fear.

The program, called 4 Little Trees, was developed in Hong Kong and claims to assess children’s emotions while they do their classwork. Kate Crawford, an academic researcher and the author of the book ‘The Atlas of AI’, writes in Nature that such technology needs to be regulated for better policymaking and public trust.

A cautionary precedent for AI is the polygraph, commonly known as the “lie detector test”, which was invented in the 1920s. The FBI and the US military used the method for decades until it was eventually banned.

Any use of AI for the random surveillance of the general public should be preceded by credible regulatory oversight. “It could also help in establishing norms to counter over-reach by corporations and governments,” Crawford writes.

The report also cites a system based on the work of psychologist Paul Ekman, which standardised six human emotions for use in computer vision. After the 9/11 attacks in 2001, Ekman sold his system to US authorities to identify airline passengers showing fear or stress and probe them for involvement in terrorist acts. The system was severely criticised for being racially biased and lacking credibility.

Allowing these technologies without independently auditing their effectiveness would be unfair: job applicants could be judged because their facial expressions do not match those of existing employees, and students could be flagged at school because a machine found them angry. Crawford calls for legislative protection from unproven uses of these tools.
