London — The European Parliament passed the world's first comprehensive law regulating the use of artificial intelligence on Wednesday, as controversy swirled around an edited photo of Catherine, the Princess of Wales, that experts say illustrates how even the awareness of new AI technologies is affecting society.
"The reaction to this image, if it were released before, pre-the big AI boom we've seen over the last few years, probably would be: 'This is a really bad job with editing or Photoshop,'" Henry Ajder, an expert on AI and deepfakes, told CBS News. "But because of the conversation about Kate Middleton being absent in the public eye and the kind of conspiratorial thinking that that's encouraged, when that combines with this new, broader awareness of AI-generated images… the conversation is very, very different."
Princess Kate, as she's most often known, admitted to "editing" the photo of herself and her three children that was posted to her official social media accounts on Sunday. Neither she nor Kensington Palace provided any details of what she had altered in the photo, but one royal watcher told CBS News it could have been a composite image created from a number of photographs.
Ajder voiced concern about AI's effect on people's sense of shared reality, noting that growing public awareness of the technology's capabilities is itself eroding that shared understanding. He said both companies and individuals must work to address the challenge.
Exploring the EU's Latest AI Act
The European Union's new AI Act takes a risk-based approach to the technology. Lower-risk AI systems, such as spam filters, can adhere to voluntary codes of conduct, while higher-risk applications, such as AI in medical devices, will face stricter regulations.
Certain uses of AI, like facial recognition by police in public spaces, will be prohibited except in exceptional circumstances. The EU aims for the law to safeguard the safety and rights of individuals and businesses in relation to AI.
Challenges of Trust in Digital Content
With the widespread consumption of digital content on various devices, detecting AI-generated manipulations or inconsistencies, especially on small screens, can be extremely challenging.
"It demonstrates our susceptibility to content and how we construct our realities," expressed Ramak Molavi Vasse'i, a digital rights attorney and senior researcher at the Mozilla Foundation. "If we can't rely on what we see, it's a significant issue. Not only do we already have a decline in trust in institutions, but also in media, big tech companies, and politicians. This lack of trust is particularly detrimental to democracies and can lead to destabilization."
Vasse'i collaborated on a recent study that examined the efficacy of various methods for identifying content generated by AI. She mentioned that there are several potential solutions, such as educating consumers and technologists, as well as watermarking and labeling images, but none of these methods are foolproof.
"I am concerned that the pace of technological advancement is too rapid. We are unable to fully understand and regulate a technology that is not the root cause of the issue, but is exacerbating and spreading it," Vasse'i remarked in an interview with CBS News.
"I believe that we need to reconsider the entire information ecosystem that we currently have," she added. "Societies are founded on trust at both personal and democratic levels. We must rebuild our trust in content."
How Can I Verify the Authenticity of What I See?
Ajder said that, beyond the broader goal of building transparency about AI into our technologies and information systems, it is difficult for any individual to determine whether AI has been used to alter or create a piece of media.
Amid increasing distrust of media, he said, it is crucial for consumers to seek out sources with clear quality standards.
"In a landscape where traditional media is often dismissed, it is important to remember that they are more likely to provide reliable information compared to random social media posts or amateur videos," Ajder emphasized. "Trained investigative journalism is better equipped and more dependable in such scenarios."
Ajder also highlighted the challenge of identifying AI-generated content, mentioning that techniques like monitoring the frequency of blinks in a video can quickly become outdated due to rapid technological advancements.
His advice: "Acknowledge the limitations of your own knowledge and abilities. Practicing humility when consuming information is crucial in today's environment."