Content moderation is topical. Whether the subject is vaccines or election integrity, disinformation is thriving, and harassment has become a defining feature of the online world. The Government of Canada published a technical discussion paper about these challenges in July 2021. In September, federal election platforms promised legislation to combat “harmful online content” and to ensure social media platforms are “held accountable for the content they host.” Shortly after the election, whistleblower Frances Haugen leaked a series of internal Facebook documents to the Wall Street Journal and spent three hours before the United States Senate discussing those documents and the impact of social media platforms on polarization, misinformation, and young people’s mental health. The resounding message is that the content we post, share, and consume online has real-world impacts. In many cases, it creates real-world harms.
Yet for many tech policy enthusiasts, how to moderate online content well has been front of mind for some time (even within the Parliament of Canada, investigations into the impact of social media platforms on misinformation, election interference, and other online harms date back to 2018). In the days following the Facebook whistleblower leak, these tech policy enthusiasts came together online to move the conversation on content moderation forward. Tech Policy Press held Reconciling Social Media and Democracy on October 7, and on October 8, the Toronto Public Library and McMaster University held Silicon Values: Free Speech, Democracy, and Big Tech. This blog showcases some of the approaches to content moderation that were discussed at these events, as well as their critiques.
Legislation or Government-Led Content Moderation
- Centralized, top-down
- Responsibility for content moderation can fall with the public and/or private sector
- Can encourage or discourage competition, depending on the approach
Legislation is the most heavy-handed policy response a government can take. It is centralized and top-down, defining what must (or must not) be done and by whom. In this way, legislation draws boundaries between legal and illegal action. A pro of legislation is that it is firm; it is (hopefully) applied to all actors equally; and, when enforced effectively, it is very good at changing behaviour. A con is that legislation can be difficult to build nuance into, and it often must be formally amended (a slow-moving process) to adapt to social or technological change.
Government legislation for content moderation is difficult to define because it can vary so much. Governments could make users accountable for the content they post online and make certain content a punishable offence; establish an independent regulatory agency to review and flag online content; or make social media companies responsible for removing illegal content from their platforms. Each one of these approaches would place the responsibility for content moderation in the hands of a different actor and impact competition differently.
One of the first pieces of legislation to directly moderate online content was the Network Enforcement Act. Passed by Germany in 2017 and updated in 2021, the Act makes social media companies responsible for unlawful content on their platforms. It requires platforms with two million or more users to remove unlawful content within 24 hours, create easy ways for users to report content, and deliver twice-yearly transparency reports; platforms that fail to comply face fines of up to 50 million euros. Critics of the Network Enforcement Act say it places censorship in the hands of private sector companies; incentivizes unaccountable, overly broad censorship; and sets a precedent that enables authoritarian governments to erase online dissent. A fourth critique is that many social media companies are based in the US and may not have a nuanced enough understanding of local German culture, language, and law to moderate content effectively.
During the recent Canadian election, the Liberal Party promised to introduce legislation within the first 100 days of Parliament to moderate online content: specifically, hate speech, terrorist content, content that incites violence, child sexual abuse material, and the non-consensual distribution of intimate images. In July, the Government of Canada proposed a legislative framework that closely resembles Germany’s approach.
Social Network-Led Content Moderation
- Centralized, top-down
- Responsibility for content moderation lies with the private sector
- May limit competition
Social network-led content moderation may happen as a result of legislation, as in the case of the Network Enforcement Act, but it may also happen in the absence of legislation: many social media companies censor online content because of intense social and political pressure to do so. In any event, social media companies do moderate content online, often using a combination of algorithms and outsourced human moderators. A pro of social network-led content moderation is that companies can tailor moderation to the unique characteristics of their platforms. A con is that social media companies are driven by private sector interests and may engage in “ethics-washing,” a process whereby ethics are “increasingly identified with technology companies’ self-regulatory efforts and with shallow appearances of ethical behaviour.”
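To make the “algorithms plus human moderators” combination a little more concrete, here is a minimal, hypothetical sketch of a triage pipeline: an automated scorer rates each post, clear-cut cases are removed automatically, borderline cases are queued for human review, and everything else is published. The keyword scores and thresholds are invented for illustration; real platforms rely on far more sophisticated, usually machine-learned, classifiers.

```python
# Hypothetical sketch of combined algorithmic + human moderation.
# The keyword-based scorer and thresholds are illustrative assumptions only.
AUTO_REMOVE_THRESHOLD = 0.9
HUMAN_REVIEW_THRESHOLD = 0.5

FLAGGED_TERMS = {"slur_example": 0.95, "threat_example": 0.9, "insult_example": 0.6}

def score_post(text: str) -> float:
    """Return a crude 0-1 'likely violates policy' score based on flagged terms."""
    words = text.lower().split()
    return max((FLAGGED_TERMS.get(w, 0.0) for w in words), default=0.0)

def triage(text: str) -> str:
    """Route a post to auto-removal, human review, or publication."""
    score = score_post(text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove"        # removed automatically by the algorithm
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # queued for an outsourced human moderator
    return "allow"             # published without intervention

print(triage("this is an insult_example"))  # -> human_review
```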
Critics of this type of content moderation say that private sector companies should not be in charge of a task that so profoundly impacts public discourse, human rights, and democracy. Such critics see social media companies as too centralized (with decision-making power in the hands of a few board members and C-suite executives) and too heavily influenced by private sector motives (such as increasing revenues or appeasing shareholders). For example, companies may invest too few resources in content moderation, choosing instead to prioritize profit. Finally, without transparency, there is no way to ensure due process or enable users to properly appeal moderation decisions.
A second critique is that, because just a few social networks host nearly all online users, social network-led content moderation is highly monopolized and, in turn, lacks competition. Competition benefits consumers by keeping the quality and choice of products and services high; conversely, “when there is limited competition and consumer choice, businesses can dictate their terms.” Many speakers at the Reconciling Social Media and Democracy event felt that third-party content moderation apps (discussed below) could demonopolize private sector content moderation and give users more control over what they consume online.
Third-Party Apps
- Decentralized
- Responsibility for content moderation lies with the private sector, but also with individuals
- Encourages competition
The “app store model” is a broader tech sector trend whereby companies enable (and sometimes foster) a community of third-party developers to build on top of their core products and services. In this way, third-party apps can improve a company’s product or service offering without the company needing to take on new projects or staff. Interoperability, transparency, and cooperation are foundational to the app store model, and Shopify is a perfect example of a Canadian tech company that swears by this approach.
Third-party apps (also referred to as online intermediaries) were a core topic of discussion at the Reconciling Social Media and Democracy event. Speakers felt that by fostering a community of third-party moderation tools (that integrate with existing social media platforms), the tech industry can give users more choice, agency, and control over their online experiences. Some speakers envisioned out-of-the-box apps serving different content niches, while others envisioned users choosing exactly what kind of content they block or see.
Some speakers noted possible challenges: third-party developers would be subject to the same privacy and technical feasibility concerns that social networks face today; the funding and/or business model for third-party apps is unclear; and social media companies may not allow their use. But despite these challenges, third-party moderation tools do exist today. Block Party is a Twitter app that lets users filter out harassment and other unwanted content. The company wants to prevent groups that are more likely to experience harassment (women, racialized groups, public figures) from leaving social media and, in turn, create a more diverse online world. At the Reconciling Social Media and Democracy event, Block Party founder Tracy Chou praised Twitter for enabling third-party apps to exist on its platform, as other platforms have not exhibited such an open approach.
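To illustrate the idea of user-controlled filtering that these speakers described, here is a minimal, hypothetical sketch of the kind of logic a third-party moderation app might apply on top of a platform’s feed. The Post and UserFilterSettings types, the keyword lists, and the author names are invented for illustration; they are not Block Party’s product or any platform’s actual API.

```python
# Hypothetical sketch: each user configures their own filters, and a
# third-party app applies them to the feed before the user sees it.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str

@dataclass
class UserFilterSettings:
    blocked_keywords: set[str] = field(default_factory=set)
    muted_authors: set[str] = field(default_factory=set)

def filter_feed(posts: list[Post], settings: UserFilterSettings) -> list[Post]:
    """Return only the posts this particular user has chosen to see."""
    visible = []
    for post in posts:
        if post.author in settings.muted_authors:
            continue
        if any(kw.lower() in post.text.lower() for kw in settings.blocked_keywords):
            continue
        visible.append(post)
    return visible

# Usage: two users of the same platform could see very different feeds,
# because moderation choices live with the user rather than the platform.
settings = UserFilterSettings(blocked_keywords={"spoiler"}, muted_authors={"troll42"})
feed = [Post("alice", "Great match today!"), Post("troll42", "spoiler: they lose")]
print(filter_feed(feed, settings))  # -> [Post(author='alice', text='Great match today!')]
```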
Community-Led Moderation
- Decentralized, bottom-up
- Responsibility for content moderation lies with the public
- Encourages competition
If social network-led moderation is monopolized by a handful of social media companies, community-led moderation might be described as pluralistic (a system in which two or more sources of authority exist). Community-led moderation takes place when online communities are given the tools and authority to moderate themselves. Communities establish a shared set of rules, and then appoint moderators to enforce those rules.
Reddit is the sixth most popular social network in Canada and was one of the first social networks to popularize community-led moderation. The company of course has its own master content policy, but subreddits are free to establish any number of additional rules (even down to the specific formatting required for posts). While many subreddits rely on human moderators to enforce rules, Reddit’s API (application programming interface) also supports task automation through the creation of moderation bots.
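To give a concrete sense of what bot-based, community-led moderation can look like, below is a minimal sketch of a rule-enforcing bot written with PRAW, the Python Reddit API Wrapper. The subreddit name, the formatting rule, and the credentials are placeholders; a real bot would need a registered Reddit app, moderator permissions on the subreddit, and much more careful error handling.

```python
# Minimal sketch of a subreddit rule-enforcement bot using PRAW.
# All credentials and the subreddit name below are placeholders.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_MOD_ACCOUNT",
    password="YOUR_PASSWORD",
    user_agent="example-moderation-bot/0.1",
)

subreddit = reddit.subreddit("examplesubreddit")  # hypothetical community

# Example community rule: post titles must begin with a bracketed tag, e.g. "[Discussion]".
for submission in subreddit.stream.submissions(skip_existing=True):
    if not submission.title.startswith("["):
        # Leave a removal reason for the author, then remove the post
        # (removal requires the bot account to have moderator permissions).
        submission.reply(
            "Your post was removed: titles must begin with a tag such as "
            "[Discussion]. Please repost with the correct format."
        )
        submission.mod.remove()
```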
A pro of community-led moderation is that it decentralizes online content moderation by giving users the tools and authority to establish their own rules: because platforms like Reddit host many diverse subreddits (or communities), users have a healthy degree of choice as to which communities they join and engage with. A con is that community-led moderation takes place on a not-for-profit, volunteer basis, so volunteers may not have the time or resources to moderate content effectively. As Nadia Eghbal writes, “Without effective support for coders’ work on publicly available projects, not only will their labour go uncompensated, but the digital world risks security breaches, interruptions in service, and slowed innovation.”
Each of the above approaches places the responsibility for content moderation with a different actor and has a unique impact on competition and centralization. Speakers at the Reconciling Social Media and Democracy and Silicon Values: Free Speech, Democracy, and Big Tech events expressed hope that future approaches would be decentralized and foster competition, with many cautioning against top-down, heavy-handed approaches that give private sector entities too strong an influence on public discourse and democracy. Looking forward, it will be interesting to see what approach Canada takes with its content moderation legislation: who ends up being responsible for moderating online content, how centralized that approach is, and whether it fosters competition.