Content Regulation

As the number of users and online platforms has grown in recent years, regulators in different jurisdictions have become concerned about online harms and have sought regulatory ways to protect users. These initiatives generally focus on specific content and conduct that lawmakers consider harmful because of the risks they pose to users. In general, these initiatives are well-intentioned, but they can also have unintended consequences. For instance, many of these regulations give platforms short periods of time to review content or conduct and make a decision, under threat of substantial fines, which can result in over-removal. This section provides an overview of the main regulatory developments in different jurisdictions with regard to key topics. The list of issues addressed is not exhaustive, but it covers the most common and relevant topic areas T&S professionals deal with in their daily work.

Hate Speech

The United Nations’ Strategy and Plan of Action on Hate Speech defines hate speech as “any kind of communication in speech, writing or behavior that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, color, descent, gender, or other identity factor.” International law does not prohibit hate speech as such; however, it does prohibit incitement to discrimination, hostility, and violence.

There is no universally accepted definition of hate speech or hateful behavior, and each platform categorizes and defines these terms differently. However, in order to ensure a safe online environment in which users can interact and express themselves, such content and conduct are now banned and removed by almost every online platform.

In some jurisdictions, hate speech is illegal, and platforms must abide by these local laws whether or not they have community policies against hate speech and hateful behavior. In this context, a few jurisdictions have recently passed laws that specifically oblige platforms to remove illegal content, including hate speech and hateful behavior, within short periods of time. For instance, under the German Network Enforcement Act (NetzDG), platforms with over 2 million users must remove “clearly illegal” content within 24 hours and all other illegal content within 7 days of receiving a complaint; otherwise, they can face fines of up to 50 million euros.

In April 2021, the Austrian Communication Platform Act (KoPl-G) came into force. The Act is part of a larger legislative package targeting “Hass im Netz” (online hate). It applies to domestic and foreign platforms that either have more than 100,000 average registered users in Austria or generate more than 500,000 euros in turnover in Austria (with a few exceptions). Under the KoPl-G, platforms must remove or disable access to content within 24 hours of a complaint being filed when it is “already evident to a legal layperson” that the content is illegal; if the assessment requires a more detailed examination, the content must be removed within 7 days. Platforms that fail to comply can face fines of up to 10 million euros. They must also publish reports on takedowns and appoint a responsible point of contact through whom courts and users can reach them.
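
To make the interplay of regimes, categories, and deadlines concrete, the sketch below encodes the removal windows described above as a small lookup table and computes a deadline from the time a complaint is received. It is a minimal illustration only; the regime names, category labels, and figures simply mirror the text above and are not a substitute for legal guidance.

```python
from datetime import datetime, timedelta

# Hypothetical encoding of the statutory removal windows described above.
# Regime and category names are illustrative; real compliance logic
# requires legal review.
REMOVAL_WINDOWS = {
    ("NetzDG", "clearly_illegal"): timedelta(hours=24),
    ("NetzDG", "illegal"): timedelta(days=7),
    ("KoPl-G", "evident_to_layperson"): timedelta(hours=24),
    ("KoPl-G", "detailed_examination"): timedelta(days=7),
}

def removal_deadline(regime: str, category: str, complaint_received: datetime) -> datetime:
    """Latest time by which the reported content should be actioned."""
    return complaint_received + REMOVAL_WINDOWS[(regime, category)]

# Example: a KoPl-G complaint whose illegality is evident to a legal layperson.
print(removal_deadline("KoPl-G", "evident_to_layperson", datetime(2023, 5, 1, 9, 0)))
# -> 2023-05-02 09:00:00
```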

Disinformation and Misinformation

There is no comprehensive, agreed-upon definition of these terms. Each platform defines disinformation in its own way, but definitions usually cover the following elements:

“(i) involve false information; 

(ii) that is intentionally designed to be false or misleading;

(iii) that is distributed online in some coordinated manner; and 

(iv) often has some political, social, or economic goal, including undermining trust in democratic institutions or other harms.”

Other related terms have slightly different meanings. For instance, “misinformation” can be described as “inaccurate information created or shared without an intent to mislead or cause harm and can include genuine mistakes of fact,” whereas “malinformation” refers to “accurate information presented in a misleading context.” Moreover, the term “fake news” has been defined as “fabricated information that mimics news media content in form but not in organizational process or intent.”

Platforms struggle to prevent disinformation and misinformation from spreading and being amplified. In order to identify false information and disinformation and take action, platforms need to clearly understand the context in which the content is shared. Many platforms have added to their policies alternative actions to content removal that provide users with relevant context to better assess the reliability of the information they encounter. For instance, some platforms have started labeling content, prompting users when they engage with misleading content, and reducing the distribution and visibility of specific content. Some platforms also work with third-party fact-checkers to identify disinformation and misinformation. The effectiveness of these interventions remains uncertain at the time of writing.
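
As an illustration of how such graduated responses might be wired into an enforcement pipeline, the sketch below maps a hypothetical fact-check verdict and harm assessment to one of the alternative actions described above. The verdict labels, the harm flag, and the action names are invented for illustration and do not reflect any particular platform's policy.

```python
from dataclasses import dataclass

# Illustrative only: a simplified decision helper mapping a fact-check verdict
# and a harm assessment to one of the alternative actions described above.
@dataclass
class Assessment:
    verdict: str          # e.g. "false", "partly_false", "missing_context", "accurate"
    imminent_harm: bool   # e.g. acute health or election-integrity risk

def choose_action(assessment: Assessment) -> str:
    if assessment.verdict == "false" and assessment.imminent_harm:
        return "remove"
    if assessment.verdict in ("false", "partly_false"):
        return "label_and_reduce_distribution"
    if assessment.verdict == "missing_context":
        return "add_context_label"
    return "no_action"

print(choose_action(Assessment("partly_false", imminent_harm=False)))
# -> label_and_reduce_distribution
```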

These policies have been reinforced during periods of crisis. During the COVID-19 pandemic in particular, when levels of online disinformation and misinformation surged, platforms had to strengthen their policies on this issue or even develop specific rules for posting false and misleading information about the pandemic. Protecting users from disinformation and misinformation while also protecting and promoting freedom of expression has been a challenge for online platforms.

In 2018, representatives of online platforms, leading tech companies, and players in the advertising industry agreed on a self-regulatory Code of Practice to address the spread of online disinformation. This was the first time the industry agreed, on a voluntary basis, to self-regulatory standards to fight disinformation. In 2022, a more diverse group of actors–including major online platforms, emerging and specialized platforms, players in the advertising industry, fact-checkers, and research and civil society organizations–published the Strengthened Code of Practice on Disinformation, which builds on the 2018 Code and sets more ambitious commitments and measures to counter online disinformation. Commitments include demonetizing the dissemination of disinformation, guaranteeing transparency of political advertising, enhancing cooperation with fact-checkers, and facilitating researchers’ access to data.

Recent laws and bills have addressed online disinformation and misinformation. In 2018, the French Parliament passed a law that empowers judges to order the immediate removal of “fake news” during election campaigns. The law applies during the three months preceding a national election to platforms with more than 5 million visitors per month or platforms that receive 100 euros (excluding tax) per advertising campaign for each publication containing information related to a debate of general interest. The false character of the piece of news must be obvious, and it must be massively and artificially disseminated and likely to disturb public order or the integrity of an election.

Moreover, the German NetzDG allows criminally punishable fake news and other unlawful content to be removed from social media (for instance, insult, malicious gossip, defamation, public incitement to crime, incitement to hatred, dissemination of portrayals of violence, and threatening the commission of a felony). Bills have also been introduced in Brazil and the U.S., for instance, but they have not been passed. In January 2023, the Indian government proposed an amendment to the Information Technology Rules, 2021, that would bar social media platforms from hosting any information that the authorities identify as false.

Child Sexual Abuse Material (CSAM)

Child sexual exploitation (CSE) is a particularly egregious and sensitive online T&S issue, and online crimes against children are increasing in volume year over year at the time of this writing. Child sexual abuse material (CSAM), defined as a visual depiction (such as an image or video) of child sexual abuse and exploitation, has become a particularly prevalent child exploitation issue for user-generated content platforms to manage. In most countries worldwide, CSAM is considered to be illegal material, carrying strict penalties for creation and distribution. Liability risks may exist not only for individuals who intentionally engage with such material, publish, or distribute it, but also for intermediaries like online content platforms that may host illegal material intentionally or unintentionally, and risk contributing to its distribution as a result.

As outlined in 18 U.S. Code, Chapter 110, on sexual exploitation and other abuse of children, if a platform becomes aware that it is hosting CSAM (described as having “actual knowledge” of the material), it is required to take certain actions in response. In practice, a platform team is likely to become aware of hosted CSAM through notification from law enforcement, through user reporting, or through its own internal automated or manual detection or investigation processes. Once aware of the material, platforms must minimize access to the content and report it to the designated authority, the National Center for Missing and Exploited Children (NCMEC), an organization dedicated to the reduction and prevention of harms against children. NCMEC is congressionally mandated and holds a designated legal status allowing it to hold and maintain CSAM and work with platforms to handle it. NCMEC then assesses platform reports and routes them to the appropriate law enforcement entity, including non-U.S. agencies if relevant to a foreign investigation. While not specifically outlined as a provider-specific requirement within the law, given that the publication or distribution of CSAM is illegal, platforms also expeditiously remove the content and are required to preserve the contents of the NCMEC report.

If a U.S.-based platform does not act according to these legal guidelines once aware of hosting CSAM, it may face criminal charges; however, assuming the platform did not have knowledge of the material, or otherwise did not act “with malice” once made aware, 18 U.S. Code § 2258B grants providers limited liability. Within U.S. case law, the majority of suits against platforms related to CSAM have been dismissed due to the broader protections from liability afforded to platforms by Section 230. Regardless, given the widespread awareness of the illegality of CSAM in most jurisdictions worldwide, the clear harm to child victims, and the potential liability and brand risks associated with hosting CSAM, platforms tend to adopt clear and robust internal and external policies and operational processes around the prevention, detection, and efficient removal not only of CSAM but also of other interactions related to child exploitation.

Current laws governing platforms’ legal responsibilities for handling child exploitation and CSAM typically establish any secondary liability risks, “safe harbor” frameworks, or other protections from liability; detail how platforms may be notified of illegal content and the steps they must take in response, including operating any applicable notice-and-takedown system; and set out any mandatory reporting requirements. Many current child exploitation-related laws, such as those in the U.S., generally do not mandate monitoring for CSAM, prescribe particular methods platforms must use to detect and remove illegal content, or require the prevention of re-uploads of previously identified illegal content.

While many software tools can help with the automatic detection and removal of CSAM, including re-upload detection based on image hashes, some industry experts and platform teams consider it infeasible to require all online platforms to use automated detection to scalably and effectively remove illegal content, particularly given the constraints of smaller platforms with fewer resources. However, more recent laws have begun to introduce automated CSAM detection requirements, notably India’s IT Rules (2021), Part II, Rule 4(4), which establishes that significant social media intermediaries (with 5 million or more users in India) should “endeavour to deploy technology-based measures, including automated tools or other mechanisms” to proactively identify content and conduct related to child sexual abuse and rape, encompassing also the prevention of re-uploads, with a defined operational strategy approved by the Indian government.

In practice, many platforms incorporate proactive measures into their T&S systems to detect and remove CSAM more swiftly, even when not legally subject to mandatory monitoring. Tools that can detect nudity within images may enable some degree of CSAM detection, while filtering text based on keywords can help with the detection of CSAM-related discussions, as well as of online acts of child exploitation or signals such as grooming behaviors.
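
A minimal sketch of the keyword-based text filtering mentioned above might look like the following. The patterns are placeholders rather than real term lists, and in practice matches would be combined with ML classifiers and routed to trained human reviewers rather than acted on automatically.

```python
import re

# Illustrative only: a naive keyword flagger of the kind described above.
# The patterns are placeholders, not real term lists.
REVIEW_PATTERNS = [
    re.compile(r"\bplaceholder_risk_term_1\b", re.IGNORECASE),
    re.compile(r"\bplaceholder_risk_term_2\b", re.IGNORECASE),
]

def flag_for_review(text: str) -> bool:
    """Return True if the text matches any pattern and should be queued for human review."""
    return any(pattern.search(text) for pattern in REVIEW_PATTERNS)
```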

One of the most noteworthy industry-wide child safety efforts platforms can engage in to proactively detect CSAM is hash matching against databases of known CSAM images: a hash, or “digital fingerprint,” of each image is created, and when that hash is recognized, the image can be prevented from being uploaded or swiftly and automatically removed. One of the most widely known and used hash-matching tools in the child safety industry is PhotoDNA, designed by Microsoft in 2009 and later donated to NCMEC, which is made available for free to platforms and organizations that apply for participation. Various other CSAM hash databases and detection tools have been developed through collaborations between NGOs and tech platforms, and are maintained on an ongoing basis by a variety of contributing organizations, including NCMEC, the Internet Watch Foundation (IWF), INHOPE, INTERPOL, and the Tech Coalition, with efforts being made to streamline databases and establish partnerships for industry-wide usage.
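
A highly simplified sketch of the upload-time hash check described above follows. Production systems rely on perceptual hashes such as PhotoDNA, which tolerate resizing and re-encoding, and on vetted industry hash lists; the cryptographic hash used here for brevity only catches byte-identical copies.

```python
import hashlib

# Simplified illustration of hash matching at upload time. Real deployments use
# perceptual hashes supplied through vetted industry hash lists; plain SHA-256
# is used here only to keep the sketch self-contained.
known_csam_hashes: set = set()  # populated from an authorized industry hash list

def image_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def allow_upload(image_bytes: bytes) -> bool:
    """Block the upload if its hash appears in the known-hash set."""
    if image_hash(image_bytes) in known_csam_hashes:
        # In practice: block the upload, preserve evidence, and trigger
        # the reporting workflow described in this section.
        return False
    return True
```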

Additionally, some organizations and platforms have collaborated to develop tools that allow victims to proactively submit images or videos that have been used to victimize them, so that the content can be removed, either manually or automatically (via detection strategies such as hash matching), when it is uploaded to a platform. Examples include NCMEC and Meta’s “Take It Down” program and the “Report Remove” tool developed by Childline and the IWF.

Laws that mandate the reporting of detected child sexual abuse material by platforms have historically been limited to a small number of countries. 18 U.S. Code § 2258A is one example, requiring that if a platform obtains knowledge of CSAM hosted on its platform, it must report it to NCMEC. However, as awareness of the widespread nature of online child exploitation has increased, defined legal requirements for reporting to law enforcement are becoming more commonplace. Canada’s child sexual abuse material law has reporting requirements similar to those in the U.S., mandating the reporting of CSAM to Cybertip.ca.

At the time of this writing, the EU is also considering the creation of an EU-specific organization, similar to NCMEC, to receive and process CSAM and child exploitation reports from EU-based platforms. To tackle scalability issues with increasingly large volumes of child safety reports, platforms and organizations such as NCMEC are collaborating to build tools that facilitate more automated and secure reporting–for example, developing APIs and operational tools that enable an image to be deleted and sent from an internal trust and safety admin tool directly to NCMEC in a few clicks.
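
A sketch of what such an automated reporting hook might look like from the platform side is shown below. NCMEC does operate a reporting interface for registered providers, but the endpoint, payload fields, and authentication used here are hypothetical placeholders, not the real API.

```python
import json
import urllib.request

# Hypothetical sketch of wiring an internal admin tool to a reporting endpoint.
# The URL, payload shape, and auth scheme are placeholders, not a real interface.
REPORTING_ENDPOINT = "https://reporting.example.org/api/reports"  # placeholder

def submit_report(incident: dict, api_token: str) -> int:
    """POST an incident report assembled in the admin tool; returns the HTTP status."""
    request = urllib.request.Request(
        REPORTING_ENDPOINT,
        data=json.dumps(incident).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# The tool would call submit_report() only after the content has been removed from
# public access and preserved for the legally required retention period.
```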

In addition to carrying greater liability and brand risks for platforms, child exploitation is constantly increasing in scale, and there is growing awareness, aided by increased research on the topic, of the risks of harm to victims. Platforms have a clear stake in curbing the proliferation of illegal material to the greatest extent possible, not only on the platforms they individually manage but across the internet at large. Given that most nations agree that child exploitation is a prohibited and particularly egregious illegal act, platforms and organizations have been able to make more inroads with collaboration on child exploitation prevention and harm reduction than on many other online content and conduct issues. This includes forming coalitions and collaborative projects, investing more resources into tool development, and making software for widespread industry usage more affordable or free.

Many online platform teams—particularly those that host image- and video-based content, and those with more resources to invest in industry projects and partnerships—regularly contribute to industry-wide efforts to prevent child sexual abuse and protect victims from re-victimization through the distribution of illegal material. Examples include incorporating evolving industry best practices and detection tools into their moderation processes, participating in hash-sharing and knowledge-sharing partnerships, and partnering directly with NCMEC, INHOPE, and other organizations on projects to collaboratively refine operational processes, facilitate swifter reporting, and publish preventative user education.

Terrorism

Government bodies regulating content, as well as online platforms defining policies around content and conduct, define “terrorist content” in different ways, without a universally accepted definition. Some laws and policies limit their definitions to content that consists of direct threats to commit violent acts against others, as well as incitement of others to commit such acts. Various laws (with the recent EU regulation on terrorist content as one example) and platform policies may also include in their definitions soliciting or recruiting others to terrorist groups, endorsing or glorifying terrorist acts, presenting any affiliation with or support of such groups (such as sharing their symbols or flags), disseminating related materials (including disinformation or propaganda), or financing terrorist activities.

Government entities, stakeholder groups such as NGOs, and platforms generally define policies related to the activities of international terrorist groups–groups that have an international scope, or are considered “international” relative to the primary jurisdiction of concern–as well as more localized domestic groups, whose activities may be more of a concern in specific jurisdictions and whose ideological agendas may overlap with other policy areas of concern, such as hate speech. Government bodies and NGOs frequently maintain lists of designated terrorist organizations or sanctioned groups (such as the EU Terrorist List and the U.S. Foreign Terrorist Organizations list), which platforms may use to shape their policies and operational practices. Within their policies, platforms may establish their own lists of designated organizations that are prohibited from the platform, and/or may formulate their terrorism policies in relation to groups’ specific ideologies and narratives, specific types of content, or specific types of conduct.

Terrorist content online, and specifically its adoption as a recruitment and amplification tool for terrorist organizations, has been an ongoing concern for online platforms since their inception. However, regulators and the public are increasingly scrutinizing online platforms’ role in disseminating terrorist content, driven by terrorist attacks which were livestreamed and widely shared (such as the 2019 Christchurch mosque attacks), terrorists sharing manifestos online, and a more recent spate of terror attacks that have been carried out in countries in the EU. New legislation has been passed in recent years introducing additional mandates for platforms to compel them to remove terrorist content. 

In response to the 2019 Christchurch attacks, Australia and New Zealand passed new laws criminalizing video content depicting violent acts and introducing knowledge-based liability regimes for platforms that do not remove such content, with each law effectively establishing a notice-and-takedown system. In 2022, Regulation (EU) 2021/784 came into application across EU member states; it mandates that platforms remove terrorist content within one hour of being notified by a designated competent authority, report on terrorist content removals in their transparency reporting, and maintain a complaint mechanism users can engage when their content is removed.
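
For platforms subject to such rules, compliance work largely reduces to tracking each removal order against its statutory window and aggregating the outcomes for transparency reports. The sketch below illustrates that bookkeeping under the one-hour window described above; the data model and summary fields are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Illustrative bookkeeping for removal orders under a one-hour window.
# Field names and the summary format are hypothetical.
ONE_HOUR = timedelta(hours=1)

@dataclass
class RemovalOrder:
    order_id: str
    received_at: datetime
    actioned_at: Optional[datetime] = None

def within_deadline(order: RemovalOrder) -> bool:
    return (order.actioned_at is not None
            and order.actioned_at - order.received_at <= ONE_HOUR)

def transparency_summary(orders: list) -> dict:
    actioned = [o for o in orders if o.actioned_at is not None]
    return {
        "orders_received": len(orders),
        "orders_actioned": len(actioned),
        "actioned_within_one_hour": sum(within_deadline(o) for o in actioned),
    }
```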

Platforms presently face considerable challenges in their approach and response to terrorist content. Within their policies, they must first set their own definitions of terrorist affiliation, content, and conduct, amidst the lack of industry consensus. Detection and response to terrorist content—particularly where a platform is notified by a government entity and where expeditious takedown and response is required by law—may also pose a challenge, particularly for platforms with fewer resources to set up the necessary monitoring. Terrorist content may come to a platform’s attention through reporting (including government notices or user reports) or through the platform’s own proactive detection systems or investigation efforts (which may include text, image, or video filtering, matching against hash-sharing databases of terrorist content, and/or filtering tools that prevent the re-upload of terrorist content based on hashes).

Depending on their perceived risk and the scale of terrorist content they deal with, platforms may also need to put significant resources toward proactive investigation efforts, such as conducting social network analyses to identify users involved in terrorist activities. Effective assessment of and action on terrorist content generally requires a strong understanding of how terrorist groups use online platforms to further their aims in order to set operational strategies in response. Ongoing monitoring of current events related to terrorist activities in jurisdictions of concern will also generally help a platform respond to terrorist recruitment strategies and anticipate any needed rapid response to terrorist activities or attacks. Growing platforms are likely to require increasingly robust language moderation support sufficient to review and detect content from a variety of international terrorist groups (which may consist of a combination of human reviewers and automated detection tools), as well as a strong awareness of trends and localized context amongst internal stakeholders and moderator teams.
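
As a simple illustration of the kind of social network analysis mentioned above, the sketch below runs a breadth-first search over an interaction graph to surface accounts within a small number of hops of accounts already confirmed as affiliated with a designated group. The graph, account identifiers, and hop cutoff are hypothetical, and real investigations weigh many more signals before any action is taken.

```python
from collections import deque

# Illustrative network-proximity query: find accounts within k hops of accounts
# already confirmed as affiliated with a designated group.
def accounts_within_k_hops(graph: dict, seeds: set, k: int) -> dict:
    """Breadth-first search returning hop distances (<= k) for non-seed accounts."""
    distance = {seed: 0 for seed in seeds}
    queue = deque(seeds)
    while queue:
        node = queue.popleft()
        if distance[node] == k:
            continue
        for neighbor in graph.get(node, set()):
            if neighbor not in distance:
                distance[neighbor] = distance[node] + 1
                queue.append(neighbor)
    return {acct: d for acct, d in distance.items() if acct not in seeds}

# Example: accounts one or two interactions away from a known account.
graph = {"known_account": {"a", "b"}, "a": {"c"}, "c": {"d"}}
print(accounts_within_k_hops(graph, {"known_account"}, 2))
# e.g. {'a': 1, 'b': 1, 'c': 2} (dict ordering may vary)
```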

To mitigate the particular challenges that come with tackling terrorist content, and to further global online counter-terrorism efforts, tech companies have formed collaborative industry groups (some of which have developed into more independent NGOs over time) focused on supporting online platforms in their counter-terrorism policy and moderation efforts. These groups’ activities include knowledge-sharing related to terrorist strategies and activities; guiding platforms on unified approaches to policy, detection, and transparency; and helping smaller platforms with their strategies. Notably, the Global Internet Forum to Counter Terrorism (GIFCT), initially founded by four large tech companies in 2017 and now an NGO, supports its member platforms by sharing awareness of terrorist activities, large-scale acts of violence, and trends in violent extremism, and by providing access to hash-sharing databases and other tools. Other organizations that aim to unite platforms, facilitate knowledge-sharing, encourage more effective strategies to counter terrorist content, and help more resource-constrained platforms include Tech Against Terrorism (TAT), supported by the United Nations Counter-Terrorism Committee Executive Directorate, and the Christchurch Call.