Disinformation Isn’t Just a Tech Problem. It’s a Social One, Too.

Mis- and disinformation are often viewed as a cause of society’s ills. But a new report from the Aspen Institute’s Commission on Information Disorder, which studied the global “crisis of trust and truth,” offers a different way to think about the proliferation of conspiracy theories and bogus info: The rise of disinformation is the product of longstanding social problems, including income inequality, racism, and corruption, that are easily exploited to spread false information online.

“Saying that the disinformation is the problem—rather than a way in which the underlying problem shows itself—misses the point entirely,” the report quotes Mike Masnick, founder of Techdirt, as saying.

Disinformation, as the report’s authors explain, comes from “corporate, state actor, and political persuasion techniques employed to maintain power and profit, create harm, and/or advance political or ideological goals” and exacerbates “long-standing inequalities and undermines lived experiences for historically targeted communities, particularly Black/African American communities.” The study of disinformation is a nascent, fast-moving field, and Aspen’s report, released Monday, offers a point of view that departs from the conventional wisdom that information problems stem largely from tech and social media platforms.

The Aspen commission, which was co-chaired by Katie Couric, Color of Change president Rashad Robinson, and former Cybersecurity and Infrastructure Security Agency director Chris Krebs, spent six months examining the causes of and solutions to disinformation. Members of the commission included Joan Donovan, research director at Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy; Nathaniel Gleicher, head of security policy at Meta (formerly Facebook); and Evelyn Douek, a lecturer at Harvard Law School.

The report outlines goals and recommendations for addressing disinformation on social media, including amendments to Section 230, a frequently debated and often misunderstood part of the 1996 Communications Decency Act that gives tech companies legal immunity for user-generated content posted on their platforms. The report proposes “withdraw[ing] platform immunity for content that is promoted through paid advertising and post promotion” and “remov[ing] immunity as it relates to the implementation of product features, recommendation engines, and design.” In other words, paid and algorithmically boosted content would no longer be legally protected.

The authors also urged the executive branch to take broad action on disinformation, something that’s been controversial even among disinformation researchers and thinkers. They recommended the White House create a “comprehensive strategic approach to countering disinformation and the spread of misinformation” that includes “a centralized national response strategy.”

Despite the breadth of its recommendations, the report’s authors still noted the limits of any effort to stop disinformation. “To be clear, information disorder is a problem that cannot be completely solved,” they write. “Its eradication is not the end goal. Instead, the Commission’s goal is to mitigate misinformation’s worst harms with prioritization for the most vulnerable segments of our society.”
