Digital platforms, services, and products across industries engage in a wide spectrum of activities that affect their users and clients as well as the societies in which they and their direct stakeholders participate. Governments and other regulators, in turn, aim to influence, create, or shut down markets and activities depending on whether they deem them desirable or harmful to their aims or mandates. The U.S. tech industry benefited from the light-touch regulation era created by Section 230 of the U.S. Communications Decency Act of 1996 (CDA). Section 230 aimed to foster growth and, in effect, shielded the industry from the tort-law approach traditionally used to regulate other parts of the U.S. economy (more on Section 230 of the CDA in the section on “Key principles, laws, and regulations”). That environment has since been transformed by the arrival of additional influential regulators around the world, which now shape the industry and have changed the legal requirements facing individual businesses.
Four Regulators to Note
While there are too many regulators to cover within the limited scope of this chapter, the following four have had a significant impact on the experiences and priorities of businesses with T&S functions:
- The United States still houses the bulk of companies in the tech industry thanks to targeted public and private technology investment, the growth fostered by Section 230, and, historically, the largest accessible market.
- The European Union emerged in the late 2010s as the default global regulator for the industry due to its ability to create and continuously regulate a large, unusually unified, wealthy, and accessible market for consumer products and services; to export its consumer-facing regulations efficiently into other jurisdictions through trade agreements and carefully designed market-access conditions; and to leverage the economics of T&S as a business cost center. Businesses with global T&S reach usually serve sizable numbers of EU consumers and therefore have to fund ongoing compliance with the high EU regulatory standards. They confront two main considerations in this regard: first, it can be too expensive to run a separate T&S infrastructure for non-EU users, and second, offering different T&S policies and practices to non-EU/EEA customers can create brand risk if those offerings are perceived as entailing fewer user rights and services than those available to EU/EEA-based peers.
- India tends to build on emerging EU regulatory ideas, adapting them significantly to its own evolving needs and aims at the national and state levels, while growing into the world’s most populous consumer market.
- China has taken a distinct path to controlling its online industry since at least 2006 and offers a substantial, well-developed market. However, due to its highly restrictive approach to regulation and market access, China has not developed the global regulatory clout historically wielded by the U.S. and, since the late 2010s, most prominently and effectively by the EU. China is therefore a significant domestic regulator but will not be reviewed in more depth in this broader overview chapter.
Additionally, public policy makers are taking note of differences in business model purposes, services provided, and company sizes, and of how these manifest in distinct approaches to T&S and in the different requirements with which entities must comply. This tends to lead to tiered regulatory frameworks, which are worth careful consideration when making investment decisions or when evaluating companies to join as a T&S professional entering the field. For example, the EU’s Digital Markets Act (DMA), which regulates competition in the digital economy, targets the largest online platforms (designated “gatekeepers” providing core platform services) with added obligations due to their outsized market impact on consumers and smaller competitors. The act also distinguishes by purpose, explicitly excluding “services which act in a non-commercial purpose capacity such as collaborative projects”, thereby separating out very large online platforms dedicated solely to the public interest (most notably Wikipedia and its sister projects, hosted by the same global non-profit organization).
For a smaller provider or a professional new to the field, one relatively easy way to start navigating business decisions about T&S in the shifting legal and regulatory landscape is to borrow frameworks developed in the more mature field of cybersecurity. In such a framework, compliance with the laws and regulations, in full or in part, of the markets one might wish to serve can be framed as a risk management exercise comparable to managing IT risk: map costs, opportunity costs, and other relevant factors so that decisions can be made in a fairly formalized way.
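As a minimal sketch of what such a cybersecurity-style framing might look like in practice (the markets, obligations, scoring scale, and cost figures below are hypothetical placeholders, not drawn from any actual framework or company), one can treat each market’s obligations as entries in a small risk register that scores the likelihood and impact of non-compliance and weighs them against estimated compliance cost:

```python
from dataclasses import dataclass


@dataclass
class ComplianceRisk:
    """One market/obligation pair in a simple T&S compliance risk register."""
    market: str             # jurisdiction the obligation applies to
    obligation: str         # short label for the legal/regulatory requirement
    likelihood: int         # 1-5: chance of enforcement or harm if non-compliant
    impact: int             # 1-5: severity of fines, market loss, or brand damage
    compliance_cost: float  # rough estimated annual cost of complying (USD)

    @property
    def risk_score(self) -> int:
        # Classic likelihood x impact scoring, borrowed from IT risk frameworks.
        return self.likelihood * self.impact


def prioritize(register: list[ComplianceRisk]) -> list[ComplianceRisk]:
    """Order entries so the highest-risk, cheapest-to-address obligations come first."""
    return sorted(register, key=lambda r: (-r.risk_score, r.compliance_cost))


if __name__ == "__main__":
    # Illustrative placeholder entries and figures only, not real estimates.
    register = [
        ComplianceRisk("EU/EEA", "DSA notice-and-action process", 4, 5, 250_000),
        ComplianceRisk("India", "IT Rules grievance officer", 3, 4, 90_000),
        ComplianceRisk("US", "COPPA age-screening review", 2, 4, 60_000),
    ]
    for item in prioritize(register):
        print(f"{item.market:7} {item.obligation:32} "
              f"score={item.risk_score:2d} cost=${item.compliance_cost:,.0f}")
```

Sorting by risk score first and compliance cost second mirrors the common IT-risk practice of addressing the most severe, most tractable exposures before the rest.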
Business Model Considerations
Within the broad frameworks of the legal and regulatory considerations that shape the industry and the opportunities and risks for companies and individuals working in it, there is another layer touching on business model decisions and, relatedly, organizational structures.
Companies sometimes tend to underestimate (or initially not pay attention to) the costs of legal and regulatory compliance down the road, especially in early scaling phases, or while their services and products are not yet tailored to different jurisdictions even though their user base already spans a range of countries. This dynamic tends to stabilize as organizations mature and regulators start paying closer attention, thereby expanding cost centers that were previously underfunded.
Equally, there are business decisions tied to the structure of the organization itself. While the Legal department is often a unified structure in organizations large and small, database and services architectures tend to vary, as does the distribution of T&S services and of the cost centers mitigating related issues. If a company runs its service and product offerings as separate organizational structures or subsidiaries, it tends to replicate redundant compliance work through separate T&S structures attached to those distinct offerings. This organizational arrangement often drives up costs and makes the overall impact of the organization on consumers harder for senior leadership and regulators to comprehend. For example, consider tech companies that are conglomerates of large, diversified services (for example, social networking, multimedia platforms, online marketplaces, and cloud computing). This kind of setup might incentivize the conglomerate to host the first three lines of business (social networking, multimedia platforms, and online marketplaces) in-house on the fourth division’s infrastructure (cloud computing) and sometimes even to share costs for shared internal services (for example, legal or accounting). However, T&S infrastructure tends to be built, partially redundantly, around the distinct nature of each offering, without realizing synergies or effectively addressing cross-service abuse. The holding company therefore ultimately also holds enterprise risk and growing regulatory risk across all four divisions.
Laws and Regulations v. Community Policies
Over the years, online platforms have created policies (often called “community standards” or “community guidelines”) aimed at creating a safe environment and protecting users from harmful behavior and content. These policies act as the platform’s internal rules, setting the standard of behavior expected there. As platforms grew in size and more people started using them, companies and organizations had to moderate more content and behavior to ensure their users did not experience anything harmful or unwanted and could express themselves freely.
There are no universal guidelines that apply to every online platform. Each company or organization develops its own internal rules, tailored to the particular services offered, the type of interactions among users, and the type and size of the platform, among other factors. Guidelines are public and are usually shared on a platform’s Terms of Service page or its Transparency page.
Topics covered by such policies include (among others):
- Harassment and Bullying
- Sexual Content
- Hate Speech and Hateful Behavior
- Terrorism and Violent Extremism
- Child Sexual Abuse and Exploitation
- Violent and Graphic Content
- Misinformation
- Suicide and Self-Harm
- Illegal Activities and Regulated Goods
This list is non-exhaustive. The types of content and how they are named and categorized vary between platforms.
Although some categories of speech are not illegal, companies and organizations may nonetheless decide to exclude them from their platforms. These categories consist of “material that cannot be prohibited by law but that profoundly violates many people’s sense of decency, morality, or justice” (Keller, 2022).
Platforms can either rely on prior user reports to remove certain types of content or proactively enforce their community guidelines using technology and human moderation. Depending on the platform and the type of violation, content that violates a specific community guideline may trigger some form of action: the platform may remove the content, label it, hide it, or limit its reach, among other measures. In some cases (but not all), the platform notifies the individuals affected by these decisions, and users can then appeal if they believe no violation has occurred.
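To make that flow concrete, here is a minimal sketch (the policy labels, actions, and routing rules are invented for illustration and do not reflect any particular platform’s enforcement system) of how a reported or proactively detected violation might map to an action, a notification, and an appeal option:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    REMOVE = auto()       # take the content down
    LABEL = auto()        # attach a warning or context label
    HIDE = auto()         # place behind an interstitial or age gate
    LIMIT_REACH = auto()  # reduce distribution or recommendation
    NO_ACTION = auto()


@dataclass
class Decision:
    policy: str        # which community guideline was applied
    action: Action
    notify_user: bool  # some, but not all, decisions are communicated to the user
    appealable: bool   # whether the affected user can contest the decision


# Hypothetical default mapping from violation type to enforcement action.
DEFAULT_ACTIONS = {
    "child_sexual_abuse": Action.REMOVE,
    "hate_speech": Action.REMOVE,
    "graphic_violence": Action.HIDE,
    "misinformation": Action.LABEL,
    "spam": Action.LIMIT_REACH,
}


def decide(violation: str | None) -> Decision:
    """Map a reported or proactively detected violation to an enforcement decision."""
    if violation is None:
        return Decision("none", Action.NO_ACTION, notify_user=False, appealable=False)
    action = DEFAULT_ACTIONS.get(violation, Action.LIMIT_REACH)
    # In this toy model, removals are notified and appealable; softer actions are not.
    return Decision(violation, action,
                    notify_user=(action is Action.REMOVE),
                    appealable=(action is Action.REMOVE))


if __name__ == "__main__":
    print(decide("misinformation"))
    print(decide("hate_speech"))
```

Real systems are far more granular (severity tiers, repeat-offender history, regional law overlays), but the basic shape of mapping a violation to an action, a notification, and an appeal path is common across platforms.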
At the same time, as explained above, platforms must abide by specific local and regional laws and regulations. These vary across jurisdictions, and there is currently no international convention or standard that defines platforms’ obligations. Obligations depend in part on where the company is based, and their interpretation varies further across companies and their respective policy positions.