TikTok Joins Adobe-founded Content Authenticity Initiative (CAI) To Help Combat Misinformation

Edited by B&T Magazine



This article is written by Dana Rao, general counsel and chief trust officer at Adobe.

When we set out years ago to develop a provenance tool to combat misinformation online, we knew that for it to work we would need a three-part approach: provenance, policy, and education, all working together to create a chain of trust from creation to consumption.

Today marks a major milestone in establishing provenance everywhere: TikTok is joining the Adobe-founded Content Authenticity Initiative (CAI) and the Coalition for Content Provenance and Authenticity (C2PA). Beginning today, the company will label AI-generated content uploaded to its platform with Content Credentials, with additional support for Content Credentials coming in the future. TikTok is the first social media platform to support Content Credentials, and with over 170 million users in the United States alone, its platform and its vast community of creators and users are an essential piece of that chain of trust needed to increase transparency online.

Content can be created in many ways. You can use AI to generate original content, you can use AI to edit or combine content, and, of course, you can create or edit content without AI at all. The core premise of the CAI, a global coalition we started in 2019 that now boasts over 3,000 members, is to give good actors a way to show how their digital content was made, so they can be trusted in a world where altered and misleading images are not only plentiful but becoming indistinguishable from real ones. Transparency is the key to trustworthy content, which is why Content Credentials (what we refer to as a “digital nutrition label”) are critical. Content Credentials include metadata that can provide information such as a creator’s name, the device used to create the content, the date and time of creation, access to the original content, and the edit history, including whether AI was used. Providing any or all of this information allows creators to establish levels of authenticity with their viewers, giving them a way to be trusted in this digital age.
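To make that idea concrete, here is a minimal sketch in plain Python of the kinds of fields a Content Credential can carry and how a viewer-facing summary might be derived from them. The field names and structure are illustrative assumptions for this article only; they are not the actual C2PA manifest schema.

    # Illustrative sketch only: field names are hypothetical and do not
    # follow the real C2PA manifest schema.
    content_credential = {
        "creator": "Jane Doe",                       # creator's name
        "capture_device": "Example Camera Model X",  # device used to create the content
        "created_at": "2024-05-09T10:30:00Z",        # date and time of creation
        "original_asset": "https://example.com/original.jpg",  # access to the original
        "edit_history": [
            {"action": "cropped", "used_ai": False},
            {"action": "generative_fill", "used_ai": True},  # whether AI was used
        ],
    }

    def summarize(credential):
        """Build a short, nutrition-label-style summary a viewer could read."""
        ai_used = any(step.get("used_ai") for step in credential["edit_history"])
        return (
            f"Created by {credential['creator']} on {credential['created_at']}; "
            f"AI used in edits: {'yes' if ai_used else 'no'}."
        )

    print(summarize(content_credential))

The point of the “digital nutrition label” is exactly this pairing: the viewer does not have to judge the pixels alone, because a provenance record travels with the content and can be summarised in plain language.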

It is equally clear that education has become a critical step in this solution. According to our recent Future of Trust Study, which surveyed over 6,000 consumers across the U.S., U.K., France and Germany, consumers have a strong desire for tools to verify the trustworthiness of digital content and there is an urgent need for proactive measures to address misinformation’s potential impact on election integrity. In particular, a strong majority of people (70 percent U.S., 76 percent U.K., 73 percent France, 70 percent Germany) said it is becoming difficult to verify whether the content they are consuming online is trustworthy, and most respondents (76 percent U.S., 82 percent U.K., 77 percent France, 74 percent Germany) also say it’s important to know if the content they see online is AI-generated.

There are two parts needed for an effective digital literacy campaign addressing the dangers of deepfakes. First, people must understand that AI-generated content can be used to deceive them. As with any other educational campaign about the implications of new technologies, the public needs to understand that powerful AI can create realistic synthetic content, and that they should be skeptical when viewing any digital content. Second, we need to educate the public about tools like Content Credentials, so they know there are ways to identify content that can be trusted. It is time for all stakeholders, including government, industry, academia, and civil society, to come together and work on creating digital literacy about AI-generated content.

Finally, along with provenance technology and education campaigns, we will need policy solutions to ensure we are collectively restoring trust in content online. It is encouraging to see governments around the world adopting policies on content authenticity, and we need to ensure that provenance metadata is an option wherever content is created and can be carried with the content wherever it goes. With the right approach to technology, policy, and education, we will have the tools in place to help win the fight against deepfakes. Today’s announcement with TikTok is an exciting moment in that journey, and we look forward to continuing to work with all stakeholders on building what is to come.



