It's quiz time! Can you spot the AI-generated faces?
AI image-generation tools have advanced so far that they can now create hyper-realistic images of people who don't even exist.
Sam Altman’s OpenAI sparked a revolution in generative AI in November 2022, launching ChatGPT and prompting tech giants like Meta, Google, and Microsoft to develop chatbots of their own, namely Meta AI, Gemini, and Copilot. These AI bots, based on large language models, allow users to refine and guide conversations toward a specific length, format, style, detail level, and language.
Users' enthusiasm also pushed tech companies to create sophisticated versions that can produce images and videos based on prompts.
OpenAI has remained at the forefront of this industry, launching tools such as Dall-E, its image-generation model, among other innovations. Similar tools, including Midjourney and StyleGAN2, have emerged, producing high-quality images that continue to fuel user interest.
Researchers have also found that AI-generated faces of white people were perceived as more realistic than actual photos of white individuals — a phenomenon called hyperrealism.
A 2023 Australian National University (ANU) study found that AI can create faces that look more "real" than photos of actual human faces — a concerning development with the rise of deepfakes.
"AI achieved hyperrealism only when it generated white faces; AI-generated faces of colour still fell into the uncanny valley. This could impact not only how these tools are developed but also how people of colour are perceived online," said Amy Dawel, the study's senior author and director of ANU's Emotions and Faces Lab.
AI or real?
Digital rights expert Nighat Dad, also a member of the UN secretary-general's AI advisory body, endorsed the ANU findings and told Geo.tv, "When we talk about inclusivity in AI, the imbalance in the training datasets is extremely relevant; in fact, pertinent."
Dad, who is also the founder of the Digital Rights Foundation, noted that the feasibility of adding context to deep-learning models such as StyleGAN2 showed how easily tech giants could address inclusivity issues in their models, though they often chose not to.
"Including local skin tones within a predominantly white dataset is an essential step toward building inclusive AI models."
Expressing concern about AI's accelerated progress, she said the creation of ultra-realistic, high-definition images raised serious concerns about potential misuse.
In Pakistan, where generative AI is still new to many and has already been used to manipulate online narratives, its potential to sow confusion through scams or fake identities cannot be ignored.
Echoing Dad, India-based AI consultant Divyendra Singh Jadoun said the root of the issue was the lack of diverse skin tones and features in training data.
"AI struggles to create authentic, realistic images of people of colour, underscoring a broader issue with AI bias — if real-world diversity isn’t captured in the data, AI cannot represent it accurately."
Abbas Mustafa, senior animator at a digital agency in Karachi, said, "AI tools like Midjourney, Adobe Firefly, Flux, Krea.ai, Leonardo.ai, Dall-E, and DeepAI excel in producing images of white individuals, yet they sometimes struggle to represent brown faces accurately."
"This often results in overly generic or inaccurate facial features, skin tones, or cultural elements. I often spend extra time adjusting prompts or editing outputs to achieve a more accurate portrayal of diverse groups."