Content moderation as language policy

Connecting commercial content moderation policies, regulations, and language policy


  • Mandy Lau, York University



Keywords: language policy, commercial content moderation, social media regulation, platform moderation


Commercial content moderation removes harassment, abuse, hate speech, and other material deemed harmful or offensive from user-generated content platforms. A platform’s content policy and related government regulations are forms of explicit language policy: they dictate the classification of harmful language and aim to change users’ language practices by force. The de facto language policy, however, is the actual practice of language moderation by algorithms and humans. Algorithmic and human moderators determine which words (and thereby which content) can be shared, revealing normative values around hateful, offensive, and free speech and shaping how users adapt and create new language practices. This paper introduces the process and challenges of commercial content moderation, outlines Canada’s proposed Bill C-36 and its complementary regulatory framework, and briefly discusses the implications for language practices.


Abid, A., Farooqi, M., & Zou, J. (2021). Large language models associate Muslims with violence. Nature Machine Intelligence, 3(6), 461–463.

Andrews, L. (2020, August 31). Paedophiles are using cheese and pizza emojis to communicate secretly on Instagram. Daily Mail Online.

Bayer, J., & Bárd, P. (2020). Hate speech and hate crime in the EU and the evaluation of online content regulation approaches (PE655.135 - July 2020). European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs.

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623.

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity.

Bickert, M., & Fishman, B. (2018). Hard questions: What are we doing to stay ahead of terrorists? Meta.

Bill C-36: An Act to amend the Criminal Code and the Canadian Human Rights Act and to make related amendments to another Act (hate propaganda, hate crimes and hate speech). (2021). 1st Reading June 23, 2021, 43rd Parliament, 2nd Session. Retrieved from the Parliament of Canada website.

Cameron, D. (1995). Verbal hygiene. Routledge.

Canales, K. (2021, March 25). Mark Zuckerberg said content moderation requires “nuances” that consider the intent behind a post, but also highlighted Facebook’s reliance on AI to do that job. Business Insider.

Cao, S., Jiang, W., Yang, B., Zhang, A. L., & Robinson, J. M. (2020). How to talk when a machine is listening: Corporate disclosure in the age of AI. NBER Working Paper Series.

Cobbe, J. (2020). Algorithmic censorship by social platforms: Power and resistance. Philosophy and Technology.

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Curry, B. (2021, October 1). Liberals’ Parliamentary agenda lists three internet regulation bills as early priorities. The Globe and Mail.

Dias Oliva, T., Antonialli, D. M., & Gomes, A. (2021). Fighting hate speech, silencing drag queens? Artificial Intelligence in content moderation and risks to LGBTQ voices online. Sexuality and Culture, 25(2), 700–732.

Douek, E. (2021, June 2). More content moderation is not always better. Wired.

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. Picador.

European Commission. (2021, October 21). The digital services act package. Shaping Europe’s Digital Future.

Gerrard, Y. (2018). Beyond the hashtag: Circumventing content moderation on social media. New Media and Society, 20(12), 4492–4511.

Ghaffary, S. (2021, August 15). The algorithms that detect hate speech online are biased against black people. Vox.

Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.

Gonen, H., & Goldberg, Y. (2019). Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 609–614.

Google Canada. (2021, November 5). Our shared responsibility: YouTube’s response to the Government’s proposal to address harmful content online. Official Google Canada Blog.

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data and Society, 7(1).

Government of Canada. (2021a). Consultation closed: The Government’s proposed approach to address harmful content online.

Government of Canada. (2021b). Discussion guide.

Government of Canada. (2021c). Technical paper.

Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Houghton Mifflin Harcourt.

Harris, R. (1987). The language machine. Duckworth.

Horwitz, J. (2021). Facebook says its rules apply to all. Company documents reveal a secret elite that’s exempt. The Wall Street Journal.

Jenik, C. (2021). Technology: What happens every minute on the Internet? World Economic Forum.

Jeong, S. (2018, April 13). AI is an excuse for Facebook to keep messing up. The Verge.

Jones, R. H. (2021). The text is reading you: Teaching language in the age of the algorithm. Linguistics and Education, 62.

Kaye, D. (2019). Speech police: The global struggle to govern the Internet. Columbia Global Reports.

Khoo, C., Gill, L., & Parsons, C. (2021, September 28). Comments on the Federal Government’s proposed approach to address harmful content online. The Citizen Lab.

Koetsier, J. (2020, June 9). Report: Facebook makes 300,000 content moderation mistakes every day. Forbes.

Langvardt, K. (2018). Regulating online content moderation. Georgetown Law Journal, 106, 1353–1388.

Liberal Party of Canada. (2021). Protecting Canadians from online harms.

Lomas, N. (2020, June 19). Germany tightens online hate speech rules to make platforms send reports straight to the feds. TechCrunch.

Manzini, T., Lim, Y. C., Tsvetkov, Y., & Black, A. W. (2019). Black is to criminal as Caucasian is to police: Detecting and removing multiclass bias in word embeddings. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 1, 615–621.

Meta. (2021, November 3). Reviewing high-impact content accurately via our cross-check system. Meta Transparency Center.

Nazari, J. (2021). Don’t say ‘Taliban’: Facebook suppresses Afghan activists and artists leading the resistance. The Toronto Star.

Newton, C. (2019, February 25). The secret lives of Facebook moderators in America. The Verge.

Noble, S. (2018). Algorithms of oppression. New York University Press.

Oversight Board. (2021, November). Board decisions. Oversight Board.

Patel, F., & Hecht-Felella, L. (2021, February 22). Facebook’s content moderation rules are a mess. Brennan Center for Justice.

Pelley, S. (2021, October 4). Whistleblower: Facebook is misleading the public on progress against hate speech, violence, misinformation. 60 Minutes.

Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.

Seetharaman, D., Horwitz, J., & Scheck, J. (2021, October 17). Facebook says AI will clean up the platform. Its own engineers have doubts. The Wall Street Journal.

Shenkman, C., Thakur, D., & Llansó, E. (2021). Do you see what I see? Capabilities and limits of automated multimedia content analysis. Center for Democracy & Technology.

Shohamy, E. (2006). Language policy: Hidden agendas and new approaches. Routledge.

Simonite, T. (2021, October 25). Facebook is everywhere; its moderation is nowhere close. Wired.

Spolsky, B. (2012). The Cambridge handbook of language policy. Cambridge University Press.

Statista. (2021). Media usage in an internet minute as of August 2021. Statista.

Stober, E. (2021, November 5). Google warns Canada’s plan to fight online hate is ‘vulnerable to abuse’. Global News.

Street, B. (2005). Understanding and defining literacy. Paper commissioned for the Education for All Global Monitoring Report 2006.

The Wall Street Journal. (2021). The Facebook files: The Wall Street Journal investigation. The Wall Street Journal.

Wardle, C. (2018). Information disorder: The essential glossary. First Draft Footnotes.

Wigglesworth, R. (2020, December 5). Robo-surveillance shifts tone of CEO earnings calls. Financial Times.

Woolf, M. (2021, November 21). Wrangling over language may slow online harm bill, anti-hate groups say. CBC News.

Yang, J. (2017, February 27). Why hate crimes are hard to prosecute. The Toronto Star.

Zakrzewski, C., De Vynck, G., Masih, N., & Mahtani, S. (2021, October 24). How Facebook neglected the rest of the world, fueling hate speech and violence in India. The Washington Post.

Zuckerberg, M. (2018). A blueprint for content governance and enforcement. Facebook.



2022-01-10 — Updated on 2022-03-04


How to Cite

Mandy Lau. (2022). Content moderation as language policy: Connecting commercial content moderation policies, regulations, and language policy. Working Papers in Applied Linguistics and Linguistics at York, 2, 1–12. (Original work published January 10, 2022)



