Content moderation as language policy

Connecting commercial content moderation policies, regulations, and language policy

Authors

  • Mandy Lau, York University

DOI:

https://doi.org/10.25071/2564-2855.11

Keywords:

language policy, commercial content moderation, social media regulation, platform moderation

Abstract

Commercial content moderation removes harassment, abuse, hate, and other material deemed harmful or offensive from user-generated content platforms. A platform’s content policy and related government regulations are forms of explicit language policy: they dictate how harmful language is classified and aim to change users’ language practices through enforcement. The de facto language policy, however, is the actual practice of language moderation by algorithms and human moderators. In enforcing which words (and thereby which content) can be shared, algorithms and human moderators reveal normative assumptions about what counts as hateful, offensive, or free speech, and shape how users adapt and create new language practices. This paper introduces the process and challenges of commercial content moderation, outlines Canada’s proposed Bill C-36 and its complementary regulatory framework, and briefly discusses the implications for language practices.

References

Abid, A., Farooqi, M., & Zou, J. (2021). Large language models associate Muslims with violence. Nature Machine Intelligence, 3(6), 461–463. https://doi.org/10.1038/s42256-021-00359-2

Andrews, L. (2020, August 31). Paedophiles are using cheese and pizza emojis to communicate secretly on Instagram. Daily Mail Online. https://www.dailymail.co.uk/news/article-8681535/Paedophiles-using-cheese-pizza-emojis-communicate-secretly-Instagram.html

Bayer, J., & Bárd, P. (2020). Hate speech and hate crime in the EU and the evaluation of online content regulation approaches (PE655.135 - July 2020). European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs. https://www.europarl.europa.eu/RegData/etudes/STUD/2020/655135/IPOL_STU(2020)655135_EN.pdf

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity.

Bickert, M., & Fishman, B. (2018). Hard questions: What are we doing to stay ahead of terrorists? Meta. https://about.fb.com/news/2018/11/staying-ahead-of-terrorists/

Bill C-36: An Act to amend the Criminal Code and the Canadian Human Rights Act and to make related amendments to another Act (hate propaganda, hate crimes and hate speech). (2021). 1st Reading June 23, 2021, 43rd Parliament, 2nd session. Retrieved from the Parliament of Canada website: https://www.parl.ca/DocumentViewer/en/43-2/bill/C-36/first-reading

Cameron, D. (1995). Verbal hygiene. Routledge.

Canales, K. (2021, March 25). Mark Zuckerberg said content moderation requires “nuances” that consider the intent behind a post, but also highlighted Facebook’s reliance on AI to do that job. Business Insider. https://www.businessinsider.com/zuckerberg-nuances-content-moderation-ai-misinformation-hearing-2021-3

Cao, S., Jiang, W., Yang, B., Zhang, A. L., & Robinson, J. M. (2020). How to talk when a machine is listening: Corporate disclosure in the age of AI. NBER Working Paper Series. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3683802

Cobbe, J. (2020). Algorithmic censorship by social platforms: Power and resistance. Philosophy and Technology. https://doi.org/10.1007/s13347-020-00429-0

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Curry, B. (2021, October 1). Liberals’ Parliamentary agenda lists three internet regulation bills as early priorities. The Globe and Mail. https://www.theglobeandmail.com/politics/article-liberals-parliamentary-agenda-lists-three-internet-regulation-bills-as/

Dias Oliva, T., Antonialli, D. M., & Gomes, A. (2021). Fighting hate speech, silencing drag queens? Artificial Intelligence in content moderation and risks to LGBTQ voices online. Sexuality and Culture, 25(2), 700–732. https://doi.org/10.1007/s12119-020-09790-w

Douek, E. (2021, June 2). More content moderation is not always better. Wired. https://www.wired.com/story/more-content-moderation-not-always-better/

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. Picador.

European Commission. (2021, October 21). The digital services act package. Shaping Europe’s Digital Future. https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package

Gerrard, Y. (2018). Beyond the hashtag: Circumventing content moderation on social media. New Media and Society, 20(12), 4492–4511. https://doi.org/10.1177/1461444818776611

Ghaffary, S. (2019, August 15). The algorithms that detect hate speech online are biased against black people. Vox. https://www.vox.com/recode/2019/8/15/20806384/social-media-hate-speech-bias-black-african-american-facebook-twitter

Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.

Gonen, H., & Goldberg, Y. (2019). Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 609–614. https://doi.org/10.18653/V1/N19-1061

Google Canada. (2021, November 5). Our shared responsibility: YouTube’s response to the Government’s proposal to address harmful content online. Official Google Canada Blog. https://canada.googleblog.com/2021/11/our-shared-responsibility-youtubes.html

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data and Society, 7(1). https://doi.org/10.1177/2053951719897945

Government of Canada. (2021a). Consultation closed: The Government’s proposed approach to address harmful content online. https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content.html

Government of Canada. (2021b). Discussion guide. https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content/discussion-guide.html

Government of Canada. (2021c). Technical paper. https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content/technical-paper.html

Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Houghton Mifflin Harcourt.

Harris, R. (1987). The language machine. Duckworth.

Horwitz, J. (2021, September 13). Facebook says its rules apply to all. Company documents reveal a secret elite that’s exempt. The Wall Street Journal. https://www.wsj.com/articles/facebook-files-xcheck-zuckerberg-elite-rules-11631541353?mod=hp_lead_pos7

Jenik, C. (2021). Technology: What happens every minute on the Internet? World Economic Forum. https://www.weforum.org/agenda/2021/08/one-minute-internet-web-social-media-technology-online/

Jeong, S. (2018, April 13). AI is an excuse for Facebook to keep messing up. The Verge. https://www.theverge.com/2018/4/13/17235042/facebook-mark-zuckerberg-ai-artificial-intelligence-excuse-congress-hearings

Jones, R. H. (2021). The text is reading you: Teaching language in the age of the algorithm. Linguistics and Education, 62. https://doi.org/10.1016/j.linged.2019.100750

Kaye, D. (2019). Speech police: The global struggle to govern the Internet. Columbia Global Reports.

Khoo, C., Gill, L., & Parsons, C. (2021, September 28). Comments on the Federal Government’s proposed approach to address harmful content online. The Citizen Lab. https://citizenlab.ca/2021/09/comments-on-the-federal-governments-proposed-approach-to-address-harmful-content-online/

Koetsier, J. (2020, June 9). Report: Facebook makes 300,000 content moderation mistakes every day. Forbes. https://www.forbes.com/sites/johnkoetsier/2020/06/09/300000-facebook-content-moderation-mistakes-daily-report-says/?sh=77b8b0da54d0

Langvardt, K. (2018). Regulating online content moderation. Georgetown Law Journal, 106, 1353–1388. https://perma.cc/Z48T-H3K3

Liberal Party of Canada. (2021). Protecting Canadians from online harms. https://liberal.ca/our-platform/protecting-canadians-from-online-harms/

Lomas, N. (2020, June 19). Germany tightens online hate speech rules to make platforms send reports straight to the feds. TechCrunch. https://techcrunch.com/2020/06/19/germany-tightens-online-hate-speech-rules-to-make-platforms-send-reports-straight-to-the-feds/

Manzini, T., Lim, Y. C., Tsvetkov, Y., & Black, A. W. (2019). Black is to criminal as Caucasian is to police: Detecting and removing multiclass bias in word embeddings. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 1, 615–621. https://doi.org/10.18653/V1/N19-1062

Meta. (2021, November 3). Reviewing high-impact content accurately via our cross-check system. Meta Transparency Center. https://transparency.fb.com/enforcement/detecting-violations/reviewing-high-visibility-content-accurately/

Nazari, J. (2021, October 3). Don’t say ‘Taliban’: Facebook suppresses Afghan activists and artists leading the resistance. The Toronto Star. https://www.thestar.com/news/world/2021/10/03/dont-say-taliban-facebook-suppresses-afghan-activists-and-artists-leading-the-resistance.html

Newton, C. (2019, February 25). The secret lives of Facebook moderators in America. The Verge. https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona

Noble, S. (2018). Algorithms of oppression. New York University Press.

Oversight Board. (2021, November). Board decisions. Oversight Board. https://oversightboard.com/?page=decision

Patel, F., & Hecht-Felella, L. (2021, February 22). Facebook’s content moderation rules are a mess. Brennan Center for Justice. https://www.brennancenter.org/our-work/analysis-opinion/facebooks-content-moderation-rules-are-mess

Pelley, S. (2021, October 4). Whistleblower: Facebook is misleading the public on progress against hate speech, violence, misinformation. 60 Minutes. https://www.cbsnews.com/news/facebook-whistleblower-frances-haugen-misinformation-public-60-minutes-2021-10-03/

Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.

Seetharaman, D., Horwitz, J., & Scheck, J. (2021, October 17). Facebook says AI will clean up the platform. Its own engineers have doubts. The Wall Street Journal. https://www.wsj.com/articles/facebook-ai-enforce-rules-engineers-doubtful-artificial-intelligence-11634338184

Shenkman, C., Thakur, D., & Llansó, E. (2021). Do you see what I see? Capabilities and limits of automated multimedia content analysis. Center for Democracy & Technology. https://cdt.org/wp-content/uploads/2021/05/2021-05-18-Do-You-See-What-I-See-Capabilities-Limits-of-Automated-Multimedia-Content-Analysis-Full-Report-2033-FINAL.pdf

Shohamy, E. (2006). Language policy: Hidden agendas and new approaches. Routledge.

Simonite, T. (2021, October 25). Facebook is everywhere; its moderation is nowhere close. Wired. https://www.wired.com/story/facebooks-global-reach-exceeds-linguistic-grasp/

Spolsky, B. (2012). The Cambridge handbook of language policy. Cambridge University Press.

Statista. (2021). Media usage in an internet minute as of August 2021. Statista. https://www.statista.com/statistics/195140/new-user-generated-content-uploaded-by-users-per-minute/

Stober, E. (2021, November 5). Google warns Canada’s plan to fight online hate is ‘vulnerable to abuse’. Global News. https://globalnews.ca/news/8353819/google-canada-online-hate-plan/

Street, B. (2005). Understanding and defining literacy. Paper commissioned for the Education for All Global Monitoring Report 2006. UNESCO.

The Wall Street Journal. (2021). The Facebook files: The Wall Street Journal investigation. The Wall Street Journal. https://www.wsj.com/articles/the-facebook-files-11631713039

Wardle, C. (2018). Information disorder: The essential glossary. First Draft Footnotes. https://medium.com/1st-draft/information-disorder-part-1-the-essential-glossary-19953c544fe3

Wigglesworth, R. (2020, December 5). Robo-surveillance shifts tone of CEO earnings calls. Financial Times. https://www.ft.com/content/ca086139-8a0f-4d36-a39d-409339227832

Woolf, M. (2021, November 21). Wrangling over language may slow online harm bill, anti-hate groups say. CBC News. https://www.cbc.ca/news/politics/anti-hate-online-legislation-1.6257338

Yang, J. (2017, February 27). Why hate crimes are hard to prosecute. The Toronto Star. https://www.thestar.com/news/gta/2017/02/27/why-hate-crimes-are-hard-to-prosecute.html

Zakrzewski, C., De Vynck, G., Masih, N., & Mahtani, S. (2021, October 24). How Facebook neglected the rest of the world, fueling hate speech and violence in India. The Washington Post. https://www.washingtonpost.com/technology/2021/10/24/india-facebook-misinformation-hate-speech/

Zuckerberg, M. (2018). A blueprint for content governance and enforcement. Facebook. https://www.facebook.com/notes/751449002072082/

Published

2022-01-10 — Updated on 2022-03-04

How to Cite

Mandy Lau. (2022). Content moderation as language policy: Connecting commercial content moderation policies, regulations, and language policy. Working Papers in Applied Linguistics and Linguistics at York, 2. https://doi.org/10.25071/2564-2855.11 (Original work published January 10, 2022)

Section

Articles