Laws in each country may establish provisions permitting local or national officials (including law enforcement or other government agency officials) to require platforms to disclose user information for the purpose of providing evidence in criminal investigations. The information requested by officials may vary depending on what is needed to carry out the investigation, and on what may be requested within the scope of the applicable law or the type of legal request submitted.
For online platforms based in the United States, the 1986 Stored Communications Act (“SCA,” 18 U.S.C. Chapter 121, §§ 2701–2712) establishes that government officials can request the disclosure of information from online providers. The SCA also details the types of records that may be requested by government officials, the circumstances that must be present to request various types of information, and the types of legal process that must accordingly be obtained, generally falling into three categories: subpoenas, search warrants, and court orders. Government officials are required to obtain a particular type of order depending on the types of records they wish to request, the basis for the request (such as its relevance to the criminal investigation), and other applicable requirements. Platforms should closely assess the appropriate legal guidelines and work with legal teams to confirm that the scope of the order is valid for its type.
Laws that provide for platforms’ disclosure of user data may detail the scope and limitations of liability, along with any applicable penalties for the platform as a business entity or for individual employees if a disclosure request is not fulfilled or other requirements are not met; penalties may include a loss of protection from liability, fines, and/or imprisonment. Some laws also specify a timeframe for responding to legal requests, or particular guidelines related to the category or severity of illegal content (such as CSAM, terrorism-related content, or content that poses a threat to life, such as threats of real-world violence). As one example, the due diligence guidelines for platforms within India’s IT Rules (2021) generally require platforms to comply with disclosure requests within 72 hours for investigations related to unlawful activities or cyber security incidents (Part II, Rule 3(1)(j)).
Provisions in laws related to legal requests frequently also account for circumstances where an online provider may proactively report crimes to law enforcement, including protection from liability in certain cases. For example, 18 U.S. Code § 2702 regarding “voluntary disclosure” allows online providers to disclose user information to law enforcement without legal process, for example where failing to act could result in an emergency involving danger of death or serious physical injury to one or more people, such as when law enforcement has not yet located a person planning to carry out a violent act. This provision has led to a precedent wherein law enforcement officials, including foreign governments, may submit “emergency disclosure requests” to platforms without a court-issued legal request, citing an emergency situation where death or injury may occur, and platforms may comply and disclose information at their discretion according to their established policies. Additionally, any legally mandated platform reporting of illegal content will generally be acknowledged in these provisions, such as the requirement for platforms to report CSAM to NCMEC established in 18 U.S. Code § 2258A.
When a provider receives a legal request from a foreign official, the provider will need to consider the applicable laws of all involved countries to determine whether it is required to comply with the request. Considerations may include, but are not limited to, the jurisdiction(s) where the platform operates as a business entity, the jurisdiction of the government entity making the request, the jurisdiction where the user resides, and how these jurisdictions’ laws interact based on statute, past legal precedent, or applicable treaties between nations.
Historically, the “Mutual Legal Assistance Treaty” (MLAT) framework was the primary method through which non-U.S. entities would request user information from U.S.-based platforms, since the Electronic Communications Privacy Act (“ECPA,” 18 U.S.C. § 2510 et seq.) generally bars providers from disclosing user communications directly to foreign governments. Within the MLAT framework, non-U.S. entities were expected to submit non-emergency requests for U.S.-based platforms through the U.S. Department of Justice’s Office of International Affairs, and U.S.-based platforms would not comply with non-emergency legal requests received directly from non-U.S. entities. Increasingly in recent years, however, many jurisdictions are passing regulations mandating that platforms located outside their jurisdiction disclose information about users within it, with Germany’s Network Enforcement Act (“NetzDG,” 2017) and India’s IT Rules (2021) being two examples.
When processing a legal request, platform teams should work closely with counsel, carefully assess all applicable laws, and weigh the risks of both action and inaction (including platform liability, legal penalties, individual employees’ liability and safety, and user safety concerns).
Factors Platforms Consider
Assuming a request meets applicable legal requirements and the platform possesses the requested data, the platform may be required to comply and disclose the information. In practice, however, how online providers process and resolve legal requests is far more complex. Platforms may find they need to strike a balance between compliance with applicable regulations and protecting the privacy rights and physical safety of involved parties, and may work closely with their counsel, as well as consult regional experts and NGOs, to help guide their actions. Factors platforms consider may include:
- Validity of requests. Platforms may challenge requests that are too broad, lack information sufficient to locate a user, or otherwise do not meet applicable guidelines for validity.
- Legal precedent. In addition to assessing the validity of a legal request and the applicable legal consequences (effectively, what the law says will happen if compliance does or does not occur), platforms may also consider what the law does not say: the historical precedent of how laws are actually enforced in a given region, and the expected real-world consequences of compliance or non-compliance.
- Expected legal penalties for noncompliance. Non-compliance may expose the business or individual employees to anticipated penalties, including liability, fines, and/or imprisonment.
- Expected other consequences for noncompliance. Governments may take certain actions against platforms as a consequence of noncompliance which are not specifically encoded in law. A relatively common consequence of non-compliance is a government blocking all access to a platform in a given jurisdiction.
- Establishing a precedent with compliance. Whether or not the platform complies with a government’s request, it will establish a precedent in that jurisdiction, which may shape the government’s expectations for how future requests will be handled.
- Legal and safety risks for individuals. Platforms may need to consider risks to platform users, to the employees responsible for processing the request, and to any employees residing within a given jurisdiction. Beyond any liability or risk of imprisonment that may fall on individuals as outlined in a given law, if a platform does not comply with a request, its employees may be subject to intimidation, imprisonment, or even physical harm by law enforcement in certain jurisdictions. Additionally, the subject of a legal request (e.g. a user of the platform whose data is being disclosed) may be at risk of false imprisonment, physical harm, or other legal or safety consequences, depending on the validity and context of the request. Where a platform has employees residing within the region, particularly where the law requires a designated agent who serves as a point of contact for law enforcement, protecting those employees’ physical safety and freedoms will be a further concern factoring into compliance and non-compliance decisions.
- Public perception and brand risk. Local and global media, the platform’s user community, and other audiences following the platform’s activities may scrutinize and comment on compliance or noncompliance decisions that become public, whether publicized by the platform itself (sometimes through media such as transparency reports or blog posts), by government entities, or by media outlets, which can affect public perception of and trust in the platform.