Key Principles, Laws & Regulations

This section briefly touches upon key issues and trends in the field that, at the time of writing, are shaping business decisions and the specific setup of operations and workflows alike. It provides an overview of the types of liability platforms could face, the key laws and regulations that address these liability frameworks, their similarities and differences, and how these frameworks have developed in recent years. It then describes some common requirements found in key laws and regulations, as well as the potential risks involved.

Intermediary Liability

An intermediary is “any entity that enables the communication of information from one party to another.” When a platform is considered an intermediary of information and/or data, this means that the platform provider is generally not legally responsible for the content/conduct on its platform or service. 

The term “intermediary liability” refers to “the allocation of legal responsibility to content providers of all kinds for regulated categories of content.” Depending on the type of content and the role of the platform in providing that content, as well as the responses taken by platforms––actions or omissions––obligations may change. On the basis of these characteristics, countries around the world have designed different responsibility regimes.

Taxonomy of Intermediary Liability

Christoph Schmon and Haley Pedersen of the Electronic Frontier Foundation—expanding upon an intermediary liability guide developed by industry expert Daphne Keller—distilled the intermediary liability frameworks defined in global legislation into the following descriptive categories, arranged along a scale according to the degree of liability risk placed on the platform:

  • Strict liability: A platform can be found liable for a user’s content or conduct even absent any wrongdoing by the platform itself or any knowledge of the user’s illegal action;
  • Fault-based: A platform can be found liable if it fails to meet certain specified legal obligations, which might include monitoring requirements or takedowns of illegal content it has a legal obligation to remove;
  • Knowledge-based: A platform may be liable where it had knowledge of a user’s illegal content or conduct but did not take action (note that the concept of having “knowledge” varies from law to law, and case law outcomes may vary—for example, a platform may be liable if notified by a user, law enforcement, or a government entity, or if it became aware through its own monitoring or operational procedures);
  • Court-adjudicated: Platforms are liable only if they are notified by a court or other government entity of illegal activity and subsequently fail to take action;
  • Immunity: Platforms are effectively shielded from potential liability in relation to user content and conduct in almost all scenarios, regardless of their knowledge or actions.

Section 230 and Platform Immunity

The United States’ platform liability regime has had a resonant impact on the global development of liability frameworks and the discussions surrounding them. The U.S. concept of intermediary liability evolved out of several differing case outcomes in the 1990s in which platforms had been sued over user-generated content. These cases evidenced a need to define, from a legal perspective, whether platforms were publishers of content, which historically were held liable for the material they publish, or more like distributors of content, which were granted more limited liability. This led to the passage of Section 230 of the U.S. Communications Decency Act (1996), which established that platforms were intermediaries rather than publishers of the content users shared on their sites, and granted platforms explicit protection from liability where they remove user-generated content they deem “objectionable.”

Section 230 does not extend its liability shield to platforms when it comes to federal criminal law, violations of the Electronic Communications Privacy Act of 1986, intellectual property infringement (although platforms have a separate “safe harbor” liability shield under a different statute, the Digital Millennium Copyright Act), and—following the passage of the FOSTA and SESTA laws in 2018—content related to sex trafficking. Section 230 empowers platforms to remove content however they see fit—including non-illegal content, through the enforcement of their own policies and guidelines—while imposing only minimal illegal content removal requirements on them.

Global Trend Towards More Stringent Liability

Section 230’s generous liability shield is unique among global platform liability frameworks, but for decades many other countries’ laws followed the U.S. example by offering platforms limited liability conditioned on good faith or knowledge-based action, or by not strictly defining their liability. More recently, however, several jurisdictions (notably India, the EU, and some individual EU member states) have begun drafting and passing legislation that amends earlier laws with stricter liability regimes. These laws generally establish new knowledge-based or fault-based liability regimes—in some jurisdictions added as a layer on top of existing regimes.

Generally, these laws introduce operationally more complex content moderation, legal operations, and other requirements that platforms must meet to maintain protection from primary and/or secondary liability. This regulatory shift has often been motivated by a number of factors, including but not limited to:

  • A general need to update laws to account for new technologies and for decades of learnings about online content;
  • Concerns over tech platforms’ content moderation practices having real-world negative impact—for example, “under-moderation” allowing terrorism-related content, hate speech, disinformation, and other illegal or toxic categories of content to increase; as well as platforms’ potential “over-moderation” negatively impacting free expression or leading to discriminatory practices;
  • Motivations to try to mitigate real-world harms connected to the dissemination of online content––for example, the 2019 Christchurch mosque mass shooting, which was livestreamed on social platforms, informed the passage of laws in several countries requiring platforms to swiftly take down reported terrorism-related content.

Key examples of legislation following this regulatory trend include:

  • The European Union: In the EU, the E-Commerce Directive (2000) has historically granted platforms a safe harbor from liability where they lack knowledge of illegal content or remove it expeditiously upon obtaining such knowledge. The Digital Services Act (DSA) was approved by member states in 2022; it came into effect for larger tech platforms (those with at least 45 million EU users, designated “Very Large Online Platforms” (VLOPs)) starting in 2023 and will apply to smaller platforms in 2024. The DSA introduces a large number of requirements for platforms across illegal content handling and removal (including user-reported illegal content), transparency reporting and practices, child safety, privacy, and marketplace practices.
  • India: India’s Information Technology Rules (replacing the previous 2011 Intermediary Guidelines Rules) came into effect in 2021 and require platforms to comply with a variety of content moderation, reporting, and other requirements under a fault-based regime, with more burdensome requirements for “significant social media intermediaries” with 5 million or more registered users in India. Requirements for platforms include establishing local representatives, including a “Grievance Officer”; receiving and responding to user grievances, including content and account appeals; complying with government requests within certain timelines (generally, 36 hours for takedowns and 72 hours for user data requests; there may be additional provisions depending on the type of content); monthly transparency reporting on grievances and content blocking; and the proactive removal of CSAM and rape-related content.
  • Germany: The Network Enforcement Act (NetzDG), which came into effect in 2017, introduced a fault-based regime for platforms with at least 2 million registered users in Germany, mandating that they establish both notice-and-takedown systems and redressal mechanisms available to users, requiring that they remove manifestly illegal content (including hate speech) within 24 hours of notification by users or government officials (and other illegal content generally within seven days), and introducing transparency requirements related to content actions. Amendments to NetzDG in 2020 and 2021 introduced an additional proactive reporting requirement for illegal content, as well as more specific guidance on the redressal mechanism. NetzDG is considered by some experts to have inspired similar takedown-mandating legislation in at least thirteen other jurisdictions.
  • The United Kingdom: At the time of writing, the U.K. is considering a draft Online Safety Bill that would impose various requirements on platforms within a fault-based liability regime. These include illegal content removal requirements, user reporting mechanisms, restrictions on certain types of content for users who are minors, other child safety requirements, and additional powers for the regulator Ofcom to oversee platforms’ adherence to the law.
  • The United States: There is increasing regulatory and public interest in amending Section 230, in part due to widespread backlash against the negative impact and influence of “big tech” companies. While some lawmakers and industry experts believe Section 230 could be constructively amended, there is not yet widespread consensus on amendments or replacement laws, and many experts have concerns about potential negative consequences for free expression and platform innovation.

Several intermediary liability laws, particularly the more recent fault-based ones (such as India’s IT Rules and the EU Digital Services Act), have some or all of the following requirements in common (note that this list is not exhaustive):

  • Publication of terms and policies: Platforms may be required to publish their terms of service, community policies, privacy policy, and/or other defined policies; to ensure their policies are in plain, user-friendly language; and/or to update their policies on a regular cadence. 
  • Clear notice to users: Where content is removed, demoted, or otherwise has its visibility affected, or where a user’s access to features is restricted (e.g., account suspension), the platform may be required to give the affected user(s) clear notice (e.g., through a notification) of what content was affected, why action was taken, and other details.
  • Redressal or appeal mechanism: The platform may be required to offer users specific channels for dispute resolution, including redressal mechanisms, processes for addressing disputes without going to court, or other out-of-court dispute resolution methods. If a redressal mechanism is required, the platform must designate a channel through which users can raise grievances about aspects of their experience; this may include, but is not limited to, an appeals system allowing users to appeal actions taken on content or accounts, and there is generally a defined timeline for how quickly the platform must respond to grievances. Some laws, such as the DSA and Austria’s KoPl-G, define additional layers of protocol where a dispute cannot be resolved between a user and the platform (such as where a user’s appeal was denied and the user continues to contest the outcome)—for example, the platform may be required to participate in “alternative” dispute resolution out of court involving a user-selected, state-approved mediator.
  • Removing illegal content upon notice: Managing a “notice-and-takedown” system for removing illegal content when it is reported (with “illegal content” generally defined according to the laws that apply within the relevant jurisdiction(s), including any national, regional, or locality-specific rules). Some laws require only court-adjudicated removals, meaning platforms must be notified by a government entity or law enforcement before they are compelled to remove content; others, such as Germany’s NetzDG and the EU Digital Services Act, additionally place a burden on platforms to remove illegal content reported by users (e.g., through a designated reporting or redressal channel). The scope of the removal requirement also varies by law: removal may be mandated only within a particular jurisdiction, so that an item of content need only be restricted where the law applies and can remain available elsewhere, or it may apply globally, meaning the content must not be available to any users in any jurisdiction. Some laws (the EU Copyright Directive being one example) also include “staydown” requirements, meaning that once illegal or prohibited content is removed, the platform must maintain tools (generally involving hash-matching) to detect and remove the content if it is re-uploaded; a minimal sketch of such a hash-matching check appears after this list.
  • Timely actions on notices: Laws frequently specify that content must be removed swiftly or “expeditiously” in response to a notice. Many laws identify specific timelines for removal (examples include 24 hours, 36 hours, 72 hours, or 7 days), which may apply to specific categories of illegal content (such as child sexual abuse material (CSAM), sexual violence-related content, or terrorism-related content).
  • Processing legal requests: Platforms may be required to comply with requests from law enforcement and government officials for the disclosure of user data.
  • Proactive reporting: Some laws may require the proactive reporting of illegal content to law enforcement (examples include the U.S.’s CSAM reporting law and the Digital Services Act).
  • Child safety requirements: Many laws require the proactive removal of CSAM, and may include specific privacy or advertising requirements to ensure the safety of children, or other child safety requirements.
  • Transparency requirements: Platforms may be required to disclose metrics related to content actions or operational actions taken (such as the number of monthly grievances received from users, or the number of items of content removed, categorized by reason). To meet these disclosure requirements, platforms may be required to publish transparency reports on a specific cadence, such as monthly or annually. Platforms may also be required to disclose how their content moderation or other operational systems that affect access to content work, such as automated actions taken on the platform or how recommender systems work, explained either in policies or in transparency reporting. Platforms may additionally be required to publish other details about their operations; for example, the EU DSA requires online platforms to publish their average number of monthly active users in the EU so that the EU Commission can categorize each platform by size (e.g., as a Very Large Online Platform (“VLOP”) with 45+ million EU users) and determine whether the upper tier of DSA requirements applies to them.
  • Appointing local representatives or designated representatives: Some laws, such as India’s IT Rules, require the platform to establish a local office and/or appoint local employees or other designated representatives, particularly to receive and answer notices from law enforcement or government officials, or from users.
  • Monitoring requirements: Some laws require the monitoring and proactive removal of certain types of content—which may include the prevention of re-uploading of removed content. Specific requirements may apply to CSAM, sexual exploitation-related content, or intellectual property violations.
  • Privacy, advertising, marketplace, and other requirements: Many laws, particularly those with broader e-commerce regulation objectives (a key example being the DSA), introduce a variety of other requirements for platforms in addition to, or as part of, content moderation requirements. These may relate to the processing of user data; how users receive notices and/or advertisements while interacting on the platform (including clear labels for this type of content, or other requirements intended to protect users from misleading information or interactions); and the regulation of sales of goods or services on marketplace platforms.
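To illustrate the “staydown” concept referenced in the notice-and-takedown item above, the following is a minimal, hypothetical Python sketch of an exact-match hash check against previously removed content. The function names and the use of SHA-256 are illustrative assumptions only; no law prescribes this mechanism, and production systems generally rely on perceptual hashing so that re-encoded or edited copies are also caught.

```python
import hashlib

# Registry of fingerprints for content already removed under a takedown notice.
# A set stands in for what would be a persistent store in production.
removed_content_hashes: set[str] = set()


def fingerprint(content: bytes) -> str:
    """Return an exact-match fingerprint of the uploaded bytes.

    A cryptographic hash only catches byte-identical re-uploads; real staydown
    systems typically use perceptual hashes (e.g., PhotoDNA-style image hashes)
    so that re-encoded or lightly edited copies are also matched.
    """
    return hashlib.sha256(content).hexdigest()


def record_takedown(content: bytes) -> None:
    """Register removed content so future re-uploads can be detected."""
    removed_content_hashes.add(fingerprint(content))


def violates_staydown(upload: bytes) -> bool:
    """Check a new upload against the registry of previously removed content."""
    return fingerprint(upload) in removed_content_hashes


# Example: once an item has been removed following a notice, an identical
# re-upload is flagged before it is published again.
offending_bytes = b"<bytes of the removed file>"
record_takedown(offending_bytes)
print(violates_staydown(offending_bytes))   # True
print(violates_staydown(b"<unrelated file>"))  # False
```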

Proportionate Response

More recent laws frequently incorporate a concept of “proportionality”: larger online platforms, which handle a greater volume of users and content and thus a greater scale of potential harm, face more requirements and, as a result, greater risk of liability where user content or conduct leads to harm. Some laws define proportionality according to platform size, including the DSA (which imposes more intensive requirements, including risk accountability measures and data-sharing requirements, on “Very Large Online Platforms” with at least 45 million monthly users in the EU); India’s IT Rules (which impose additional requirements on “significant social media intermediaries” with at least 5 million users in India); and Germany’s NetzDG (which applies to platforms with at least 2 million registered German users). Some emergent laws, including the U.K.’s Online Safety Bill and Australia’s Online Safety Act (each not yet in effect at the time of writing), instead determine the extent to which legal requirements apply to a platform based on the outcomes of mandated risk assessments, weighing factors such as the types of content and features the platform offers. Australia’s law, for example, defines platforms’ risk levels (those with larger audiences, or offering “higher risk” content like livestreams, are considered higher risk) and holds higher-risk platforms to additional requirements.
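To make the size-based threshold logic concrete, below is a minimal, hypothetical Python sketch of how a compliance team might flag which tiers could apply to a platform, using the user thresholds cited above (45 million EU users for the DSA’s VLOP tier, 5 million Indian users for India’s “significant social media intermediary” tier, 2 million German users for NetzDG). The function name and structure are illustrative assumptions, not part of any law; formal designation (for example, as a DSA VLOP) is made by regulators, and each law defines “user” differently.

```python
# Size thresholds cited in the laws discussed above (approximate; each law
# counts "users" differently, e.g., average monthly active recipients for the DSA).
DSA_VLOP_THRESHOLD = 45_000_000    # EU users
INDIA_SSMI_THRESHOLD = 5_000_000   # registered users in India
NETZDG_THRESHOLD = 2_000_000       # registered users in Germany


def applicable_size_tiers(eu_users: int, india_users: int, germany_users: int) -> list[str]:
    """Return which size-based regimes a platform's user counts suggest may apply.

    Purely illustrative: the actual designation is made by regulators,
    not by the platform's own arithmetic.
    """
    tiers = []
    if eu_users >= DSA_VLOP_THRESHOLD:
        tiers.append("DSA: Very Large Online Platform obligations")
    if india_users >= INDIA_SSMI_THRESHOLD:
        tiers.append("India IT Rules: significant social media intermediary obligations")
    if germany_users >= NETZDG_THRESHOLD:
        tiers.append("Germany NetzDG: in scope")
    return tiers


# Example: a platform with 50M EU users, 3M Indian users, and 4M German users.
print(applicable_size_tiers(50_000_000, 3_000_000, 4_000_000))
```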

Risks with Current Legislation

Although initiatives that create new regulatory requirements generally intend to reduce the risk of harm to users, they may also have unintended consequences that could impact people’s collective democratic freedoms.

For example:

  • Limiting free expression on online platforms, as a result of platforms’ teams automating more of their content moderation or over-removing content manually;
  • The potential for governments to submit discriminatory orders to platforms to remove content they purport to be illegal but that may not actually violate applicable laws within that jurisdiction (for example, dissenting content speaking out against a political regime), which may additionally result in “chilling effects” on free expression;
  • Potentially introducing or intensifying unintentional discriminatory practices related to content moderation—for example, a platform may over-remove content in a specific language due to automated or ineffective translation practices; 
  • User privacy risks, such as transparency reporting requirements that may inadvertently reveal user data, or legal requirements for data collection or monitoring that conflict with privacy laws that are also in force;
  • Greater operational burdens and demands on platforms attempting to align their policies and practices with numerous global laws, which may leave platforms with fewer resources to innovate and improve other aspects of the user experience; and,
  • Inconsistent interpretation and application of laws across platforms due to vague, broad, or subjectively judged requirements. For example, some laws require takedowns “expeditiously” without specifics of how quickly this action must occur, while others define platforms’ limited liability based on “making best efforts” to act, or have more vague knowledge-based liability language.

Key Intermediary Liability Laws Summary

The following table summarizes examples of currently active or anticipated laws related to intermediary liability, the broad requirements of each law, and the current status of each law as of September 2023. (Download the PDF here.)

A table summarizing the key intermediary liability laws described in the sections above. Download the PDF to access the links in the Resources / Further Reading column. Last updated September 2023.