The rapid adoption of artificial intelligence has major implications for privacy. A new study by two researchers shows that people find it increasingly difficult to distinguish a face created by AI from a real one. More startlingly, participants in the study judged the fake faces to be more trustworthy than the real ones. The researchers are now calling for safeguards against “deep fakes”, warning that AI-synthesized text, audio, images, and video have already been used for fraud, propaganda, and “revenge porn”.

The researchers asked participants to distinguish faces created with the state-of-the-art StyleGAN2 generator from photographs of real people, and also asked how much trust each face evoked. The results were striking: the synthetically generated faces were so photo-realistic that participants could not reliably tell them apart from real faces, and they judged the synthetic faces to be more trustworthy as well.
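For readers curious what “creating a face with StyleGAN2” looks like in practice, the sketch below follows the sampling recipe documented in NVIDIA's stylegan2-ada-pytorch repository. It is an illustration under assumptions: “ffhq.pkl” stands in for a downloaded pretrained face checkpoint, the repository's dnnlib and torch_utils modules must be importable for unpickling, and a CUDA-capable GPU is assumed.

    # Minimal sketch: sample one synthetic face from a pretrained StyleGAN2
    # generator, following NVlabs/stylegan2-ada-pytorch usage notes.
    # 'ffhq.pkl' is a placeholder for a pretrained FFHQ face checkpoint.
    import pickle
    import torch
    import PIL.Image

    with open('ffhq.pkl', 'rb') as f:
        G = pickle.load(f)['G_ema'].cuda()  # trained generator (torch.nn.Module)

    z = torch.randn([1, G.z_dim]).cuda()    # random latent code
    img = G(z, None)                        # NCHW float32 in [-1, +1]; no class label

    # Rescale to 8-bit RGB and save to disk.
    img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
    PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save('fake_face.png')

Each random latent vector z maps to a different, entirely fictitious face, which is how studies like this one can assemble large sets of photo-realistic faces that belong to no real person.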

In the initial experiment, participants' accuracy was only 48 percent, close to the 50 percent expected from guessing at random in a two-way choice. In a second experiment, accuracy improved only marginally, to 59 percent, despite the training participants received from the first round. The researchers then conducted a third round, with the same set of images, to gauge trustworthiness: on average, the fake faces were rated 7.7 percent more trustworthy than the real ones.

AI-synthesized text, audio, images, and video are being “weaponized” for non-consensual intimate imagery, financial fraud, and disinformation campaigns, the researchers said in the study, published in Proceedings of the National Academy of Sciences (PNAS). “Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable — and more trustworthy — than real faces,” they added.

The study's authors, Sophie Nightingale of Lancaster University and Hany Farid of the University of California, Berkeley, also warned about a world in which people can no longer identify AI-generated images: “Perhaps most pernicious is the consequence that, in a digital world in which an image or video can be faked, the authenticity of any inconvenient or unwelcome recording can be called into question.”

To guard against deep fakes, the researchers propose safeguards such as incorporating robust watermarks into image- and video-synthesis networks, giving downstream tools a reliable way to identify synthetic content.
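The paper does not tie that recommendation to one specific algorithm. As a toy illustration of the general idea, the Python sketch below hides a bit string in the least significant bits of an image's pixel values; everything in it (the function names, the random stand-in image, the 128-bit mark) is hypothetical and for illustration only.

    # Toy watermark demo: hide and recover a bit string in the least
    # significant bit of each pixel value. This is NOT the robust scheme
    # the researchers call for; a fragile LSB mark like this is destroyed
    # by re-compression or resizing.
    import numpy as np

    def embed(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
        """Overwrite the LSB of the first bits.size values with the mark."""
        flat = pixels.flatten().copy()
        flat[:bits.size] = (flat[:bits.size] & ~np.uint8(1)) | bits
        return flat.reshape(pixels.shape)

    def extract(pixels: np.ndarray, n_bits: int) -> np.ndarray:
        """Read the mark back out of the least significant bits."""
        return pixels.flatten()[:n_bits] & 1

    image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in image
    mark = np.random.randint(0, 2, 128, dtype=np.uint8)             # 128-bit watermark
    marked = embed(image, mark)
    assert np.array_equal(extract(marked, mark.size), mark)

The fragility of such simple marks is one reason the researchers suggest building the watermark into the synthesis network itself, so that every generated image carries it from the moment it is created.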

