Issue 1, Article 3

The Potential for Decentralized Technology To Rebuild Digital Trust

Sofia Yan & Matthew Ryan

In April 2024, a 30-second video featuring Bollywood star Aamir Khan circulated widely across the Indian internet. In it, Khan appeared to criticize Indian Prime Minister Narendra Modi for unfulfilled campaign promises and for neglecting critical economic issues during his two terms. The video concluded with the election symbol of Congress, Modi’s rival party, and the slogan: “Vote for Justice, Vote for Congress.” Khan’s immense popularity in India and the video’s release during the general election period likely contributed to its explosive distribution.

The video was entirely artificial, generated by AI, yet a sizeable share of the electorate deemed it authentic. It was representative of a surge of deepfake content created to mislead the Indian public in the lead-up to the national general election, a concern intensified by indications that the disinformation originated with the country's major political parties.

Disinformation, particularly in the form of AI-generated deepfakes, is a growing global crisis in 2024. This year, 40 national elections are scheduled or have already taken place, affecting 41% of the world's population. Counterintelligence agencies and news media across these regions are now evaluating measures to mitigate the impact of disinformation on the foundations of their democracies.

Disinformation in the age of generative AI

Fears of disinformation campaigns are not new. In the aftermath of the 2016 U.S. election, both the Senate Intelligence Committee and the U.S. intelligence community concluded that the Russian government had used disinformation attacks to undermine the public’s faith in the democratic process and to influence the outcome of the U.S. presidential election.

What has changed, however, is the complexity, sophistication, and quantity of the attacks. By lowering the costs and increasing the effectiveness of information warfare initiatives, generative artificial intelligence has exacerbated an already serious problem to extremes that counterintelligence agencies were not prepared for.

Using generative AI tools, nefarious actors can not only create fake images, voices, and videos that most people cannot distinguish from reality, but they can do so at a scale that is difficult for authorities to curb.

The consequences of this deluge of disinformation include not just a significant shift in the political landscape, but also a collapse of public trust in the media. A survey conducted by the American Press Institute and the Associated Press-NORC Center for Public Affairs Research found that 42% of Americans worry about news outlets using generative AI to fabricate stories. Additionally, 53% of Americans expressed serious concern about the possibility of news organizations reporting inaccuracies or misinformation during elections.

In part, this loss of trust can be attributed to the proliferation of fake news on less reputable sites that lack the rigorous checks and standards of the mainstream press. These sources have managed to convince readers that their unreliable news is equally credible. Media institutions that were once widely trusted are being put on an equal footing with hobbyist bloggers and sites that have no qualms about publishing glaringly fabricated stories.

Why trust is eroding from institutions

The breakdown in trust stems from the inability to determine where a piece of media comes from: a lack of provenance. Provenance refers to the origin and history of a piece of content, a term often used in the context of art or antiques. For digital content, provenance means data about the origin and history of the content, including the location and date of creation and any changes made throughout its existence.
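To make the definition concrete, a digital provenance record can be sketched as a small data structure. The Python below is illustrative only, and the field names are hypothetical, chosen to mirror the elements just described:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Edit:
    timestamp: str    # when the change was made (ISO 8601)
    description: str  # what was changed

@dataclass
class ProvenanceRecord:
    creator: str      # who produced the content (hypothetical field names)
    created_at: str   # date and time of creation
    location: str     # where it was created
    edits: List[Edit] = field(default_factory=list)  # full change history

record = ProvenanceRecord(
    creator="Jane Reporter",
    created_at="2024-01-13T09:30:00+08:00",
    location="Taipei, Taiwan",
)
record.edits.append(Edit("2024-01-13T10:02:00+08:00", "Cropped to 16:9"))
```

A record like this only becomes useful for trust once it is bound to the content and made tamper-evident, which is what the approaches below attempt.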

By knowing the origin and history of a piece of media, one can verify whether it was authentically created or artificially generated. A recent study by the Center for an Informed Public (CIP) highlighted the importance of provenance in understanding media accuracy and trust. When users were shown provenance information for a piece of media, they were better able to calibrate their trust in its authenticity and their perception of its accuracy. When provenance was disclosed for deceptive content, users successfully identified it as less trustworthy and less accurate.

How we can add provenance to digital media

One way to add provenance is to watermark content, as the C2PA (Coalition for Content Provenance and Authenticity) proposes. This method involves inserting a watermark with a unique identifier into the digital content of an image and then recording that identifier. However, ensuring that malicious actors cannot strip these watermarks remains a challenge (as noted by MIT Technology Review).
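To see why stripping is a concern, consider a minimal Python sketch using the Pillow imaging library. It is a deliberately simplified stand-in, tagging a PNG's metadata rather than implementing C2PA's signed manifests or a robust pixel-level watermark, and photo.png is a placeholder file:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo
import uuid

# Embed a unique identifier in the image's metadata, then record it.
identifier = str(uuid.uuid4())
info = PngInfo()
info.add_text("content_id", identifier)

img = Image.open("photo.png")               # placeholder input file
img.save("photo_marked.png", pnginfo=info)

# Verification: read the identifier back and check it against a registry.
marked = Image.open("photo_marked.png")
assert marked.text.get("content_id") == identifier

# The attack: re-saving without the metadata silently strips the mark.
Image.open("photo_marked.png").save("photo_stripped.png")
print(Image.open("photo_stripped.png").text)  # {} -- the identifier is gone
```

Robust watermarks hide the identifier in the pixel data itself, which raises the bar, but re-encoding and cropping attacks pose the same fundamental problem.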

Another approach is to leverage blockchains: record hashes of content and data on a distributed ledger to create a verifiable log anyone can inspect. Numbers Protocol uses this method in its approach to trust on the web, allowing data to be stored securely while maintaining its integrity. Content history records on the blockchain are immutable and persist in perpetuity. The permanent accessibility of these records makes any tampering evident, and because blockchains are public, anyone, whether an individual or an organization, can verify the origin of a piece of content.
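The core mechanism is straightforward to sketch. The Python below is an illustrative toy, not Numbers Protocol's actual implementation: each ledger entry commits to the SHA-256 hash of a piece of content and to the hash of the previous entry, so altering any record breaks every link that follows it:

```python
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    """Toy append-only ledger: each entry commits to the one before it."""

    def __init__(self):
        self.entries = []

    def register(self, content: bytes, creator: str) -> dict:
        entry = {
            "content_hash": sha256(content),
            "creator": creator,
            "timestamp": time.time(),
            "prev": self.entries[-1]["entry_hash"] if self.entries else None,
        }
        # The entry's own hash seals its fields and its link to the past.
        entry["entry_hash"] = sha256(json.dumps(entry, sort_keys=True).encode())
        self.entries.append(entry)
        return entry

    def verify(self, content: bytes) -> bool:
        """Check whether content matches a registered record."""
        h = sha256(content)
        return any(e["content_hash"] == h for e in self.entries)

ledger = ProvenanceLedger()
photo = b"...raw image bytes..."
ledger.register(photo, creator="partnered-news-outlet")
assert ledger.verify(photo)                # an authentic copy checks out
assert not ledger.verify(photo + b"edit")  # any alteration fails
```

Unlike a watermark, the proof lives outside the file: stripping or re-encoding the content cannot erase the record; it can only make the content fail verification.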

Case study: 2024 Taiwan presidential election

The adoption of blockchains to solve these problems isn't a distant prospect; it's already happening.

Before the Taiwanese presidential election in January 2024, there was widespread societal concern about the threat of disinformation campaigns. Given Taiwan's difficult political situation, attacks designed to create confusion and sow distrust were expected. Numbers Protocol collaborated with the Starling Lab, Taiwanese news outlets, and journalists to show how decentralized technologies can be used to rebuild media trust.

We launched a pilot study using the Capture Cam app, which allows content recorded by partnered media and civil society groups to be marked and logged as having been authenticated by those groups. The media captured by users was registered on the blockchain, generating a traceable and secure record of the election's vital moments. In one example, a civil society member used Capture Cam to record the vote-counting process at a polling station, countering disinformation alleging widespread counting fraud and leaving a permanent record of the process on the blockchain.
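The shape of such a pipeline can be approximated in a few lines. The sketch below (using the Python cryptography package) is an assumption about the general workflow, not Capture Cam's actual code: the capturing group signs the hash of the footage with its private key, so anyone holding the group's published public key can confirm who vouched for it:

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical attestation flow: a partnered group signs the hash of
# captured media before the record is registered on-chain.
group_key = Ed25519PrivateKey.generate()  # held by the media group
group_pub = group_key.public_key()        # published for verifiers

video = b"...bytes of the vote-counting footage..."
content_hash = hashlib.sha256(video).digest()
signature = group_key.sign(content_hash)  # "this group vouches for this hash"

def attested(data: bytes, sig: bytes) -> bool:
    """Any reader can check the attestation with the public key."""
    try:
        group_pub.verify(sig, hashlib.sha256(data).digest())
        return True
    except InvalidSignature:
        return False

assert attested(video, signature)
assert not attested(video + b"tampered", signature)
```

Registering the signed hash on-chain then timestamps the attestation, so the claim that a given group authenticated given footage at a given moment cannot be quietly rewritten later.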

Numbers Protocol created a digital provenance trail, and all of the metadata and assets were stored securely on the Filecoin network, with multiple copies distributed globally and cryptographic proofs submitted continually to attest to the ongoing integrity of the data.
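Filecoin's actual guarantees come from proof-of-replication and proof-of-spacetime, which are far more involved; as a rough intuition for the property they establish, the naive check below recomputes each stored copy's hash against the anchored value (Filecoin proves this continuously and without downloading the data):

```python
import hashlib

def audit(anchored_hash: str, replicas: dict) -> dict:
    """Naive integrity audit: compare each replica's hash to the hash
    anchored on-chain. Real storage proofs achieve the same assurance
    cryptographically, without fetching the data."""
    return {
        location: hashlib.sha256(data).hexdigest() == anchored_hash
        for location, data in replicas.items()
    }

original = b"...archived election footage..."
anchored = hashlib.sha256(original).hexdigest()

replicas = {                          # hypothetical storage locations
    "us-east": original,
    "eu-west": original,
    "ap-east": original[:-1] + b"X",  # one silently corrupted copy
}
print(audit(anchored, replicas))
# {'us-east': True, 'eu-west': True, 'ap-east': False}
```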

Conclusion

The key to regaining trust lies in empowering audiences to authenticate and verify the genuineness of their news. For provenance, blockchain-based records outperform watermarking technologies because their immutability makes any tampering with content history evident.

Provenance provides a root of trust by allowing readers to verify that what they read is authentic and dependable. They can know with certainty that an article was written by a particular person at a specific date and time, rather than generated by AI. This is the foundation needed to restore public trust in the media as the reliable fourth estate a functioning democracy requires.