When we look at a painting, we rarely expect it to depict reality with fidelity; we look for meaning, aesthetic value, and possibly a message or a lasting feeling. With photographs, we still expect a snapshot of reality taken at a given place and time. This is changing, and fast.
Conflicts are great motivators for the manipulation of information. Genuine photos have been labelled as fakes after false positives in artificial intelligence (AI) image-detection tools. Fake photos with clear AI telltale signs (e.g., hands with seven fingers) have been disseminated as genuine. Fake photography is now easily accessible and of increasing quality, making it easy to generate images like the ones below, produced in a couple of minutes on a smartphone with DALL·E 3 (OpenAI's AI image generator driven by text prompts). Soon such images, here a fake tornado event and Eve tempted by the forbidden fruit, will become indistinguishable from real photography and may help fuel misinformation.
Left, a tornado in a desert. Right, a person holding an apple.
Credit: Carlos Baquero (using DALL·E 3)
We are quite able to distinguish fiction from reality when we watch a movie or play with a VR headset. But the mix of real and fake images in news and advertising will erode our trust in digital information. Recently, photographer Boris Eldagsen won, and then declined, a Sony World Photography Award with an AI-generated image. The line between promptography and traditional photography is becoming increasingly blurred.
What can be done?
The first step is to rewire our perception and assume by default that photographic content (video as well) is not necessarily grounded in reality. Let us draw a parallel with official documents. Anyone can produce a PDF that looks like a Ph.D. certificate, but that document has no value unless signed by a reputable institution that can issue doctorate degrees. The value depends not only on the content but on who signs for it. Likewise, the same scientific paper can have a very different value in a preprint archive than in a good journal with quality peer review. Since photography used to depict reality, or at least a close approximation of it, we have been neglecting the certification component.
Watermarking and detection
In a statement from July 2023, the Biden Administration reported on voluntary commitments from major AI companies on various aspects of responsible AI. These include “developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system.” Watermarking technology can annotate images without apparent degradation of quality. Google’s SynthID tool uses neural networks to add and check watermarks, which Google reports remain detectable even after intermediate screenshots, image rotations, and resizing.
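To make the workflow concrete, here is a deliberately naive sketch of invisible watermarking. SynthID's neural approach is proprietary and far more robust; this toy least-significant-bit (LSB) scheme, with all names hypothetical, would not survive screenshots, rotation, or resizing, but it illustrates the basic embed-then-detect cycle that any watermarking system must support.

```python
# Toy LSB watermark: hide a known bit pattern in the pixels'
# least-significant bits, then check for it on detection.
# Illustrative only; robust schemes (like SynthID) embed signals
# designed to survive screenshots, rotation, and resizing.
import numpy as np

BITS = np.unpackbits(np.frombuffer(b"AI-GENERATED", dtype=np.uint8))

def embed(pixels: np.ndarray) -> np.ndarray:
    """Write the watermark bits into the LSBs of the first pixels."""
    flat = pixels.flatten().copy()
    flat[:BITS.size] = (flat[:BITS.size] & 0xFE) | BITS
    return flat.reshape(pixels.shape)

def detect(pixels: np.ndarray) -> bool:
    """Read the LSBs back and compare with the known pattern."""
    return np.array_equal(pixels.flatten()[:BITS.size] & 1, BITS)

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(detect(image), detect(embed(image)))  # False True
```

The change is visually imperceptible, since each affected pixel value shifts by at most one, yet the slightest re-encoding destroys it; this is why production systems embed watermarks in learned, transformation-robust representations instead.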
Watermarking, once adopted by the major providers of AI images, will help pinpoint images generated at the request of average users. However, sophisticated users can run their own AI imaging platforms and avoid watermarking. Moreover, disinformation campaigns often involve adversarial state actors that can easily run their own high-grade platforms.
In the absence of watermarks, it is tempting to resort to tools that try to separate AI content from genuine content. However, these tools are prone to misclassification, producing both false positives and false negatives. Lock manufacturers and lockpickers have been in an arms race for centuries; we can expect a similar arms race between AI generation and detection algorithms.
Content provenance and authenticity
An alternative to the quest to identify artificial content is to provide tools for certifying genuine images and videos. The idea is to have dedicated software and hardware in the camera that cryptographically signs the image content and its context information. Further signatures can be added as the image is edited and finally published. Each step adds another link to the chain of provenance. The Coalition for Content Provenance and Authenticity (C2PA) provides tools and technical standards for certifying the source and history of media content. This includes an icon on the image that, when inspected, provides provenance metadata to the final consumer.
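The core mechanism is ordinary digital signing. The sketch below, using Python's cryptography library, shows one link of such a chain in spirit only: the real C2PA standard defines its own manifest format, certificate chains, and hardware-backed keys, and the device name and metadata fields here are hypothetical.

```python
# One link in a provenance chain, sketched with Ed25519 signatures.
# In practice, C2PA manifests, certified keys, and secure hardware
# replace the ad-hoc structures used here.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()  # would live in secure camera hardware

image_bytes = b"...raw sensor data..."
context = json.dumps({"device": "ExampleCam", "time": "2023-10-12T14:03Z"}).encode()

# The camera signs content plus context at capture time.
signature = camera_key.sign(image_bytes + context)

# Any later consumer verifies the link with the camera's public key.
try:
    camera_key.public_key().verify(signature, image_bytes + context)
    print("provenance link verified")
except InvalidSignature:
    print("content or context was altered")
```

Each editing tool that later processes the image would append its own signed record, so the final viewer can audit the full history from capture to publication.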
Long before the advent of AI imaging, disinformation tactics frequently involved misattributing authentic images from one event to a different time and place. Provenance tracking can counteract such traditional manipulations as well. Users already verify digital certificates when they interact with secure websites and validate signatures in PDF documents. While adopting new practices can be challenging, as this technology becomes more prevalent, users will have a potent tool to verify the authenticity of content as they consume it.
As always, the most essential tool is not technological but the readers’ exercise of their own critical thinking. Before accepting reports of recent events, it is crucial to corroborate them across multiple sources. As Carl Sagan aptly said, “Extraordinary claims require extraordinary evidence.”
Acknowledgements
I would like to thank António Coelho for comments on improving this text.
Carlos Baquero is a professor in the Department of Informatics Engineering within the Faculty of Engineering at Portugal’s Porto University, and is also affiliated with INESC TEC. His research focuses on distributed systems and algorithms.