Santiago Lyon calls his work over the last four years with the Adobe-led Content Authenticity Initiative “an extension of my life’s work in journalism, just viewed through a slightly different prism.” Lyon, Adobe’s head of advocacy and education, has a background in photojournalism. During a decade covering international wars, Lyon was injured in Bosnia, taken prisoner in Iraq, and lost friends and colleagues to violent deaths, he tells “Professional Photographer” podcast host Pat Miller on the episode “Content Authenticity with Santiago Lyon.” After a Nieman Fellowship at Harvard University, Lyon served as director of photography at the Associated Press for 15 years before Adobe asked him to oversee its project to establish protocols around the provenance of digital files. “It’s really a pleasure to be working on this initiative now because I think we live in a time where it’s more important than ever,” Lyon says.
Consumers have few ways to determine whether an image or video they see online is real, Lyon says. “… The great disruptor over the last couple of years has been the advent of generative AI, where it’s possible to create images and video with a few clicks of the keyboard, and that produces what some people refer to as the ‘liar’s dividend,’” he explains.
“In other words, the dividend for misleading somebody is that it calls everything into question. … It’s really important to understand the origins of what we’re looking at, to understand how things might have been manipulated, and with that information, make a better-informed decision about whether to trust things or not.”
Enter the Content Authenticity Initiative, a movement started by Adobe in 2019, along with The New York Times Company and what was then Twitter, focused on providing transparency around the origins and manipulations of digital photos and videos. In 2020, about 150 leaders in technology, human rights, academia, and other fields assembled to discuss the role of digital manipulation in the rise of mis- and disinformation, Lyon says. That summit turned into the Coalition for Content Provenance and Authenticity (C2PA), a nonprofit that established an open global technical standard for digital images and video. Today, roughly 4,500 organizations in various industries have pledged to abide by this standard in their products and software, including Canon, Getty Images, Gannett, Panasonic, Nikon, and Leica.
The key to the standard is embedded “credentials” for images and videos that users can click on to learn the “lifecycle” of the content, including its creation, editing, and any AI modifications. Miller and Lyon liken the concept to a digital nutrition label. “So, in the same way that you would go into a supermarket and … figure out how much sugar or salt or calories or whatever are in [a can of beans or a bottle of juice], here you’ll be able to do the same thing [with an image],” Lyon says, adding that consumers already have this ability with many digital files. “It tells you perhaps what device was used to generate the file, what edit changes were made to it, perhaps who took the image or who distributed the image.”
With this information, the viewer can form educated opinions. “… If there’s an image or a piece of digital content online that you question or you’re not sure about or you would like to be certain about,” he says, “this is the kind of technology that will give you that certainty, that will help make a better decision for you as to whether to trust it or not.”
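For technically minded readers, the “nutrition label” Lyon describes can also be read programmatically. Below is a minimal sketch using the initiative’s open-source Python bindings (published as c2pa-python); the function signature and JSON layout shown here are assumptions drawn from the CAI’s public tooling and may differ between releases.

```python
# Minimal sketch: reading a file's Content Credentials ("nutrition label").
# Assumes the CAI's open-source Python bindings (pip install c2pa-python);
# the call signature and JSON field names below are assumptions based on
# the project's public tooling and may differ between releases.
import json

import c2pa


def describe_credentials(path: str) -> None:
    """Print the provenance lifecycle embedded in an image or video."""
    try:
        # Assumed call: returns the embedded manifest store as a JSON string;
        # the second argument is a directory for extracted resources
        # such as thumbnails.
        manifest_json = c2pa.read_file(path, "c2pa_resources")
    except Exception as err:
        print(f"No readable Content Credentials: {err}")
        return

    store = json.loads(manifest_json)
    active = store["manifests"][store["active_manifest"]]

    # The claim generator is the app or device that signed the credential.
    print("Signed by:", active.get("claim_generator"))

    # Assertions record the lifecycle: capture device, edits, AI actions.
    for assertion in active.get("assertions", []):
        print("Lifecycle entry:", assertion.get("label"))


describe_credentials("photo.jpg")
```

Non-developers can surface the same information through the inspection tool linked from contentauthenticity.org, the site Lyon points to later in the piece.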
The concept sounds simple, yet its execution spans multiple touchpoints, according to Lyon: the technology itself, education, and policy.
Lyon draws on his journalism background to educate legislators and policy-makers about digital provenance and the importance of content credentials. “I’m not a technologist by training, but I am a storyteller by training,” he says. “My task is to take this very complex technical detail and interpret it or translate it into a more accessible language, into a vernacular that is understandable by the people who are making legal decisions, so that the decisions they are making are accurate and make sense in the technology field.
“People need to understand what’s happening,” he says about generative AI. “In a few countries, media literacy is taught well and consistently to young children and onwards through their education. But in most countries, it’s not. And children are left to their own devices.” The technology, the education, and the policy discussions are all equally important to fight mis- and disinformation, he says.
The initiative has also made progress in product lines, Lyon says.
Many people do not understand artificial intelligence and are therefore nervous and afraid of it, Lyon explains. “They want to avoid this technology, and I think it’s fair to say that this technology is here to stay. It has many positive things that can be done with it. It also has some very scary things that we should be concerned about.” His suggestion? “Get out there and get your hands wet. Play with it. Understand what it is,” he says. “… When people start to do that, they will then even more deeply understand and rationalize the value of the need for transparency and the need for things like content credentials.”

An easy way to learn and become an advocate is to visit contentauthenticity.org, which includes information about events, newsletters, and an awareness campaign. It’s also a community of companies that care, Lyon says. “You have companies and even individuals who normally would be fierce business competitors, yet we’re all pulling together toward this goal of transparency, which is really, if you want to simplify, a safety issue. We’re trying to make the internet a safer place by having a guaranteed level of transparency baked in.”
Melanie Lasoff Levs is director of publications.