Many security analysts argue that one of the chief threats of our time is the use of internet misinformation as an asymmetric tactic in information warfare. Free internet services such as Facebook, Twitter, and Gmail, which collect user data and sell targeted ads to make money, can be exploited by bad actors: individuals, corporations, and nation states can create bogus ads and “fake” news targeted at specific audiences, enabling influence operations that achieve outsized effects, up to and including mass violence or altered election outcomes, at little expense. (Digital pioneer Jaron Lanier explains this emerging threat in detail here.)
The rise of this threat raises the question: would more widespread use of a chain of proof for internet information help counter deliberate misinformation campaigns? Wikipedia already alerts users to dubious information through a system of trusted human content creators. Could every article online carry an associated evidence chain (a source, or lineage) verified by human beings? Would blockchain be of use here?
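One way to picture such an evidence chain, independent of any particular blockchain, is a hash-linked list of provenance records, where each entry names a source, a human verifier, and the hash of the entry before it. The sketch below is purely illustrative; the record fields, verifier names, and URLs are hypothetical, and a real system would add cryptographic signatures rather than bare names.

```python
import hashlib
import json

def record_hash(record):
    """Deterministically hash a provenance record (a dict)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain, source, verifier):
    """Append an entry whose back-link is the hash of the previous record."""
    prev = record_hash(chain[-1]) if chain else None
    record = {"source": source, "verified_by": verifier, "prev_hash": prev}
    chain.append(record)
    return record

def chain_is_valid(chain):
    """Check that every record's back-link matches its predecessor's hash."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != record_hash(chain[i - 1]):
            return False
    return True

# A lineage for one article: original report, then a syndicated copy.
chain = []
append_record(chain, "example-wire.com/original-report", verifier="editor_a")
append_record(chain, "example-news.com/syndicated-copy", verifier="editor_b")
assert chain_is_valid(chain)

# Tampering with an earlier link breaks every later back-link.
chain[0]["source"] = "fabricated.example/planted-story"
assert not chain_is_valid(chain)
```

The point of the back-links is that no single entry can be quietly swapped out: altering the source of any record changes its hash, which invalidates the record that cites it.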
Many of us who grew up on the internet enjoyed, celebrated, and defended the privacy and anonymity of communicating online. The rise of centralized systems and their abuse, however, suggests that digital information should be tied to real human beings who can be held accountable for what they say online, or, at the very least, that real human beings should be on hand to judge the veracity of digital content, whatever its source.
YouTube videos or Facebook articles, for example, could carry a “B.S.” meter reflecting the level of verified misinformation in the content. Users who pass tests of expertise or earn reputation points would gain the standing to judge content as “verified human experts”. Something like this already works in content-valuation systems such as Stack Exchange’s reputation system.
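A minimal sketch of such a meter, assuming a reputation scheme loosely modeled on Stack Exchange: only reviewers above a reputation threshold count as verified experts, and their flags are weighted by reputation. The function name, threshold, and weighting rule are all assumptions for illustration, not any platform's actual algorithm.

```python
def bs_meter(votes, expert_threshold=100):
    """
    Hypothetical misinformation score for one piece of content.

    votes: list of (reputation, flagged) pairs, one per reviewer, where
    flagged is True if the reviewer judged the content misinformation.
    Reviewers below expert_threshold are ignored; qualifying reviewers'
    votes are weighted by reputation. Returns the fraction (0.0 to 1.0)
    of expert weight that flagged the item.
    """
    expert_votes = [(rep, flagged) for rep, flagged in votes
                    if rep >= expert_threshold]
    total = sum(rep for rep, _ in expert_votes)
    if total == 0:
        return 0.0  # no qualified experts have reviewed it yet
    flagged_weight = sum(rep for rep, flagged in expert_votes if flagged)
    return flagged_weight / total

# Three experts and one low-reputation account review a video; the
# low-reputation flag (10 points) is ignored.
votes = [(500, True), (300, False), (200, True), (10, True)]
score = bs_meter(votes)  # 700 of 1000 expert points flag it: 0.7
assert abs(score - 0.7) < 1e-9
```

Weighting by reputation is only one design choice; a platform might instead cap any single reviewer's influence to limit the damage a compromised high-reputation account could do.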
Below, the author likens current tactics for combating digital information warfare to the Maginot Line, arguing that they are outpaced by the tempo of bad-actor innovation: we are reacting when we should be on the offensive. The proposed remedies are for governments to incentivize platforms to build in detection of emerging tactics, and for platforms, in cooperation with government law enforcement agencies, to take consequential direct action against those who misuse their systems. The author stops short of saying precisely how this would be done, but the points are well worth investigating.
There is a war happening. We are immersed in an evolving, ongoing conflict: an Information World War in which state actors, terrorists, and ideological extremists leverage the social infrastructure underpinning everyday life to sow discord and erode shared reality.