
Meta Makes Major Moves to Advance Free Expression on Its Platforms

David Inserra

Meta just announced that it will be making a series of significant changes to its content moderation regime in service of greater expression on its platforms. These changes should be largely celebrated, as they contribute to a far more vibrant culture of free expression online.

But it’s also worth examining the talking points to more clearly understand what Meta is doing next and how impactful these changes will be. Meta CEO Mark Zuckerberg laid out six ways that Meta will be changing to better favor free expression:

Replacing fact-checkers with community notes. Zuckerberg didn’t shy away from the hard truth here. While fact-checkers might have been conceptualized as a well-meaning way to give users better information and combat the threat of misinformation, the reality, argued by many and finally affirmed by Zuckerberg, is that the fact-checkers have been “too politically biased and have destroyed more trust than they’ve created.”

Zuckerberg is exactly right. Meta’s fact-checking system turned fact-checking decisions into labels and demotions of content. The age-old problem, though, is “who watches the watchers?” Fact-checkers were composed almost entirely of left-leaning academics and media who felt it was appropriate to moderate away false or misleading content. Free speech advocates and conservative voices did not join the program, and since arbitrating misinformation is often a subjective process, fact-checkers’ own biases were an inherent part of the process. Furthermore, the fact-checks couldn’t meaningfully be appealed, meaning that fact-checkers were effectively unaccountable private censors. Rather than building trust, such a process only undermined faith in expertise and created more problems for Meta.

In place of top-down fact-checking, Meta has proposed adopting the Community Notes program used by X that harnesses the deliberative power of users across the political spectrum to identify helpful information and context. It’s not a perfect system, but it is far more likely to minimize bias and build trust with users. And perhaps Meta can improve the system.
 

Simplify and reduce content moderation of contentious topics. Meta also announced its plans to streamline its content policies that govern some of the most contentious topics in our society, such as immigration and sexuality. As someone who used to be one of Meta’s policy experts, I can say that these appear to be some significant changes. For example, saying that a man shouldn’t be in a boxing competition against a woman was previously a violation of Meta’s policies, but now the policy clearly allows for users to call for sex-based restrictions in “spaces commonly limited by sex or gender, such as restrooms, sports and sports leagues.” Or, previously, it was considered a hate speech violation to state that one group of people had less education than another, but it appears that line is gone. You might expect these types of statements to come up when discussing various social issues. Now, we can’t fully understand the scope of these changes since some internal policies are not publicly available, but the public changes indicate that some of Meta’s rules are becoming more permissive.
 

Reduce errors in enforcement. The sheer amount of content that is posted online means that when any online platform moderates content, it often makes use of automated systems. Previously, Meta’s automated and proactive systems were scanning nearly every piece of content for every policy violation and removing or demoting content accordingly. But even if only a tiny fraction of content is actioned in error, that still means millions of mistakes are made. Meta announced it would only be using those automated systems to find illegal and the highest-severity types of content. This will dramatically reduce the amount of content that is being automatically removed or demoted for lower-severity violations, instead depending more on users to report such violations. This decision represents a trade-off Meta is making. More content that violates a low-severity policy will be left online until it is reported, but in turn, less nonviolating content will be mistakenly suppressed.
 

Restoring and improving users’ ability to access political and civic content. Over the past few years, Meta has demoted what it calls civic content, as many users seemingly didn’t want to see political or social content that might be divisive. But on the other hand, other users want to see such content. Meta announced that users who want to see such content will be able to better personalize their feeds. Meta’s prior one-size-fits-all approach meant that certain users were always unhappy. But giving users greater choice and control over their news feed is a wise way to avoid this moderation dilemma. Meta might consider giving users even greater control of their experience to further avoid moderation challenges.
 

Moving content moderation teams to Texas and elsewhere. Meta is moving content moderation and trust and safety teams out of California to other points in the United States, notably Texas, ostensibly to reduce worries about bias in the content moderation teams. While the teams within Meta certainly have a strong left-wing bias, as Zuckerberg himself has previously testified, it is worth noting that Meta already had a significant presence in Texas for its content moderation and policy teams, not to mention many other places around the world. While it might be nice to have a geographically diverse workforce outside of California, locating staff in Texas or elsewhere seems like mostly an attempt to signal to conservatives that Meta will be more understanding and accepting of conservative beliefs. Without a deeper and long-standing commitment to ideological diversity, however, this action on its own will likely have little to no impact on free expression. 
 

Pushing back against foreign censors. While the prior actions are all efforts by a private company to improve its product by choosing to suppress less content, the last action Meta is taking is to partner with the Trump administration to fight back against foreign regulators that are censoring increasing amounts of speech. Specifically, Zuckerberg put European and Brazilian censors on notice. 

The EU, with its Digital Services Act, member states’ hate speech and misinformation laws, and other regulations, has put significant rules around online speech in Europe. Indeed, rather than new businesses, regulation has been the main technology export of the EU, and it has significant impacts on how technology companies, primarily American ones, operate. This is known as the Brussels effect, in which the EU’s rules may spread and be adopted beyond the EU, impacting Americans’ speech. One of the most egregious recent examples was when an EU commissioner, Thierry Breton, threatened Elon Musk and X for having the audacity to host a conversation with President Trump live on X in the lead-up to the 2024 US election. While Breton is now out of a job, nothing in the EU’s laws or structures prevents this type of blatant abuse of power in the future. 

And Zuckerberg also alluded to Brazil’s secret court orders that censor important political content with little to no legal justification. Under the guise of “protecting democracy,” Brazil’s judiciary has acted with impunity to censor and jail ordinary citizens, political opponents, and those who dare to criticize the court’s authoritarian power grab. When X attempted to resist these secret orders, the courts banned X and froze the assets of other American companies as punishment. Whatever form the censorship takes, Meta is right to join the fight against governments that are forcing online platforms to silence users.

As a private company, Meta has the right to set up its rules as it wants, and I believe that most of these changes to its rules and programs should be applauded. But if Meta wants to more fundamentally embrace free expression, its institutions need to change as well. Here’s what Meta should change:

Create an internal institution dedicated to free expression. Much of the discussion about these moves, from the right and the left, is that they are being done opportunistically to avoid the wrath of the incoming Trump administration. And these changes are likely at least somewhat a calculated business move. Zuckerberg, though, has been vocal on the importance of free expression in the past, giving a speech at Georgetown in 2019 on just this topic. That said, the impulse to suppress speech was strong over the past few years, and Meta made its prior policy decisions to suppress more speech in a less speech-friendly political climate. Whether explicit censorship abroad, or more subtle censorial pressures here at home, online platforms were not prepared or willing to resist. 

To help better resist arguments and logic for suppressing speech coming from both sides of the political aisle, Meta should create an internal organization dedicated to researching and advocating for free expression. Regardless of the political party in charge or the social pressures of the moment, this organization would exist to make the case for why in the long run, Meta is better served by consistently advancing freer expression. Meta already has plenty of organizations that are politically and institutionally focused on limiting certain kinds of expression. One would expect, for example, Meta’s trust and safety specialists to argue for taking down more content they believe could make some people unsafe. It makes sense that risk-averse communications or legal teams might prefer taking down certain types of controversial speech. And even though Meta has internal teams dedicated to advocating for human rights and civil rights, those organizations embrace international, European, or modern progressive views of expressive rights. 

A free expression policy team should unabashedly focus on advocating for greater expression in all situations for all users. This team would have access to Meta’s data to show through original research and stories how expression has helped users. It would build partnerships with free expression organizations around the world to better inform policymaking. It would provide a devil’s advocate in escalation situations in which there is a push to remove speech and no one within the company is willing to say “hold on a minute.” The billions of dollars Meta spends on its broader Trust and Safety initiatives are focused on removing content—and there is a lot of content that should and will be removed. But given Meta’s products are essentially about giving people a voice, expression deserves its own advocate within Meta. 

Mark Zuckerberg called the recent election a cultural tipping point for free expression. While some may decry this as a craven appeal to win approval from the incoming Trump administration, many of the changes being made by Meta support norms of greater expression and resist clear government censorship. As such, they should be lauded. That said, Meta should go even further in giving users greater control over their online experiences and create internal institutions that are proud champions of free expression regardless of the cultural moment or the political winds. 
