Image attribution is a problem that Adobe, Twitter and The New York Times set out to fix late last year.
Dubbed the Content Authenticity Initiative (CAI), the trio's effort aims to create an industry-wide standard for content attribution, in the hope that internet users will be able to see whether an image has been altered in any way.
We bring this up today, however, because Adobe, Twitter and The New York Times have published a 28-page white paper which looks sort of like a SWOT analysis.
The white paper takes a look at how content attribution can help to rebuild trust between the media and its consumers.
“We are witnessing extraordinary challenges to trust in media. As social platforms amplify the reach and influence of certain content via ever more complex and opaque algorithms, mis-attributed and mis-contextualized content spreads quickly. Whether inadvertent misinformation or deliberate deception via disinformation, collectively inauthentic content is on the rise,” reads an excerpt from the paper.
We also get a look at how the trio of firms is looking to do its content attribution.
“We will provide a layer of robust, tamper-evident attribution and history data built upon XMP, Schema.org and other metadata standards that goes far beyond common uses today. This attribution information will be bound to the assets it describes, which will in turn reduce friction for creators sharing the attribution data and enable intuitive experiences for consumers who use the information to help them decide what to trust,” the trio explains.
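The white paper doesn't spell out implementation details at this level, but the core idea of tamper-evident attribution bound to an asset can be sketched in a few lines. The sketch below is purely illustrative, not the CAI's actual design: it uses an HMAC over the image's hash and its attribution claims as a stand-in for the real cryptographic signatures and XMP packaging a CAI implementation would use, and `sign_attribution`, `verify_attribution` and the demo key are hypothetical names.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # stand-in for a real signing key/certificate


def sign_attribution(asset_bytes: bytes, claims: dict) -> dict:
    """Bind attribution claims to an asset: changing either the image
    bytes or the claims invalidates the signature."""
    payload = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "claims": claims,
    }
    serialized = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, serialized, hashlib.sha256).hexdigest()
    return payload


def verify_attribution(asset_bytes: bytes, record: dict) -> bool:
    """Recompute the hash and signature; False means something was tampered with."""
    unsigned = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "claims": record["claims"],
    }
    serialized = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, serialized, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])


image = b"\x89PNG...raw image bytes..."
record = sign_attribution(image, {"creator": "Jane Doe", "tool": "ExampleCam"})
assert verify_attribution(image, record)             # untouched asset: passes
assert not verify_attribution(image + b"x", record)  # edited asset: fails
```

The point of the design is the binding the paper describes: because the signature covers both the asset hash and the claims, an edited image or rewritten attribution record can be detected by anyone who can verify the signature, which is what "tamper-evident" means in practice.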
The white paper details how content attribution would integrate into various workflows, and many professionals would need to manually add metadata to images.
Mind you, some data is captured automatically with the content, but a lot of information must still be added after the fact.
And here is where we see the chink in this armour.
Many in the media make use of user-generated content for news. Take the recent incident in Beirut, Lebanon for instance.
The video above was compiled by The Guardian using footage from people on the ground in Beirut, more than likely citizens who posted video to social media.
Asking a newsroom to follow standards is one thing, but expecting people in the street to understand the importance of that ask, and to do it themselves, is a big issue.
While the Content Authenticity Initiative is a great idea, it has to be simple enough for anybody to use and, more importantly, to check. We can’t have folks relying on Adobe products (which are expensive) to check the authenticity of images.
Thankfully, Adobe, Twitter and The New York Times are cognisant of this fact.
“The scenarios presented above assume wide adoption of CAI standards. We would be remiss not to acknowledge that in the early phases of adoption, many steps in many workflows will not be CAI-enabled. For example, in the photojournalist case the newsroom may not be able to enforce CAI compliant capture due to software/hardware availability and legacy systems. Here, the lack of end-to-end CAI compliance could be addressed by having the newsroom itself vouch for the legitimacy of assets and add time and location as post-hoc CAI assertions,” reads the white paper.
Make no mistake, we think this is a good basis from which the Content Authenticity Initiative can be built, and some aspects of the initiative are very interesting. The trio of firms also highlights that user education is just as important as establishing a standard, and both will be vital to the ultimate success or failure of this initiative.
In fact, the trio acknowledges that the hard work begins now: leaders in technology, media, academia, advocacy, human rights and other spheres of influence must collaborate to address the issue at hand through this initiative.
We highly recommend reading through the white paper (which can be found here) if you are in the creative field, are part of the media or just want to know more about this initiative.