Proactively Investigating and Disclosing Harm
Content detection and enforcement mechanisms can surface harm that companies may proactively disclose to law enforcement, either by legal mandate or on a voluntary basis. For instance, in the United States, when a company uses machine learning (ML) classifiers and/or hash-matching technology to detect and remove possible child exploitation imagery (“CEI”) or child sexual abuse material (“CSAM”) that violates its policies, it also has a legal obligation under 18 U.S. Code § 2258A. This statute requires the company to report confirmed CSAM and any related users to the National Center for Missing and Exploited Children (“NCMEC”) via its CyberTipline; the resulting CyberTips are then shared with law enforcement for further investigation.
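The hash-matching approach mentioned above can be illustrated with a minimal sketch. This is a deliberate simplification: production systems rely on industry hash lists (shared through organizations such as NCMEC) and perceptual hashing technologies like PhotoDNA so that re-encoded or slightly altered copies still match. The `KNOWN_HASHES` set and exact-match logic below are purely illustrative.

```python
import hashlib

# Hypothetical hash list. In practice, platforms ingest vetted industry
# hash lists of known CSAM rather than maintaining their own. The value
# below is simply the SHA-256 digest of the bytes b"test", used as a
# stand-in for a known-content hash.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_content(file_bytes: bytes) -> bool:
    """Return True if the file's SHA-256 digest appears in the hash list.

    Cryptographic hashes only catch byte-identical copies; real systems
    use perceptual hashes to also catch modified versions of an image.
    """
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES
```

A match would then feed into the platform's human review and mandatory reporting workflow rather than triggering an automatic report on its own.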
In the United States, failure to proactively report online child sexual exploitation/material can subject a company to fines of up to $300,000. In addition to reporting requirements, there are a number of other legal obligations platforms must meet when they identify CEI/CSAM; for example, companies must preserve the reported information for 90 days. Having policies and procedures in place to meet these obligations is essential so that staff understand their responsibilities and the company remains legally compliant.
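The 90-day preservation window can be tracked with simple date arithmetic. The following is a minimal sketch using a `preservation_deadline` helper of our own devising; a real system would also need to support extending preservation when law enforcement requests it.

```python
from datetime import datetime, timedelta, timezone

PRESERVATION_DAYS = 90  # the 90-day preservation period described above

def preservation_deadline(reported_at: datetime) -> datetime:
    """Return the date until which reported material must be preserved.

    Hypothetical helper for illustration; production systems must also
    handle law enforcement requests to extend the preservation period.
    """
    return reported_at + timedelta(days=PRESERVATION_DAYS)

# Example: material reported on 2024-01-01 must be preserved
# at least through 2024-03-31.
deadline = preservation_deadline(datetime(2024, 1, 1, tzinfo=timezone.utc))
```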
However, for many harm types, no laws mandate law enforcement disclosures or preservation of account information. That said, companies that detect content or behavior indicating a credible and imminent threat to life and choose not to voluntarily disclose relevant information to law enforcement for the purposes of harm prevention may face legal repercussions, a loss of public confidence, and/or scrutiny from lawmakers.
Because the internet plays a significant role in our daily lives, internet companies are stewards of high volumes of information about the public. The decision to voluntarily disclose user information to prevent harm should be made by weighing the severity of the harmful content, legal obligations, and ethical considerations against privacy and civil liberty concerns.
With respect to voluntary disclosures, teams that operate at the intersection of trust and safety and law enforcement are responsible for making decisions every day about what constitutes a significant threat and which of those threats require law enforcement intervention. It is incumbent upon organizations to understand and build internal policies and measures to enforce appropriately weighted decision-making.
For example, for someone or something to be classified as a threat that law enforcement should be made aware of, it must be causing, have the potential to cause, or be about to cause significant harm. What constitutes a threat cannot be based solely on groupthink, experience, or intuition. Instead, determining what makes a threat is an objective process: a single fact, or the totality of the information, must lead a reasonable person to believe the action could cause, will cause, or is causing harm. The harm could result from either intended or unintended actions.
Vetted criteria, company policies, and training are critical to establish a threshold for reportable harm. They also guide a T&S professional’s decision making so they can make the best possible choices in often-ambiguous situations.
A threat requiring law enforcement engagement generally excludes content such as comedy, hate speech, harassing speech, satire, harmful words used in a benign situation, ambiguous speech, or solely political or religious speech. These types of content may be best handled outside of law enforcement channels. For example, even a fringe political belief that a T&S professional believes violates their company’s terms of service and finds highly objectionable is not a threat requiring law enforcement engagement. Speech about controversial topics, such as race, sexual identity, guns, or religious and political policies, could certainly be a threat if significant harm is associated with the content. For objectionable content to constitute a threat, there must be articulable and reasonable circumstances indicating that it is causing, will cause, or could cause harm.
When determining triage rating and whether to make a law enforcement referral, there are a number of aspects of the case to consider, including credibility of the threat, proximity of the threat, actionability, specificity of the threat, and potential for public attention. Organizations generally set internal policies governing at which level of each of these aspects a law enforcement referral is required.
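Organizations sometimes operationalize these triage aspects as a simple rubric. The sketch below is hypothetical and not any organization's actual policy: the 0–3 scale, the equal weighting, and the referral threshold are all invented for illustration, and real policies typically layer human judgment on top of any such score.

```python
from dataclasses import dataclass

@dataclass
class ThreatAssessment:
    """Hypothetical rubric rating each triage aspect from 0 (none) to 3 (high)."""
    credibility: int
    proximity: int
    actionability: int
    specificity: int
    public_attention: int

    def requires_le_referral(self, threshold: int = 8) -> bool:
        """Refer when the combined rating meets the (illustrative) policy
        threshold, or whenever a threat is both highly credible and
        highly specific regardless of the other aspects."""
        total = (self.credibility + self.proximity + self.actionability
                 + self.specificity + self.public_attention)
        return total >= threshold or (
            self.credibility == 3 and self.specificity == 3
        )
```

A rubric like this makes the referral decision auditable: reviewers record the rating for each aspect, and the policy, not the individual, determines the outcome.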
Despite the best policies and procedures, T&S professionals will continue to encounter gray areas and new and emerging issues. As noted, hiring experienced professionals can help achieve a balance.
In most cases, organizations are not mandated to report issues to law enforcement, but there are a number of circumstances in which organizations may choose, on an ad hoc or standing policy basis, to make a referral to law enforcement. This act is often referred to as “reporting out” or “reporting up” to law enforcement. The line between “what we must do” (what is legally or statutorily required to report) and “what we can do” (what you choose to report even though it is not required) is a space each organization will have to stake out for itself, based on its values and the needs of its users. Given the complexities involved in crafting such a policy, it is recommended that a legal review be conducted.
Some situations in which a law enforcement referral may take place include:
- Terroristic threats;
- Threats of extreme physical violence;
- Threats of suicide or self-harm;
- Reports of human exploitation;
- Cybersecurity threats.
Some organizations proactively monitor content and identify threats themselves for law enforcement referral; this can involve using monitoring software or human moderation or a combination of both. Other organizations rely only on user reports to identify threats; generally these organizations will have an email “hotline” or a “report this” form. Whatever the method of identifying reportable content, the next step upon identifying it is to triage and evaluate it.
|Remember: Platforms are not the only party that can reach out to law enforcement. Users should be encouraged to make their own reports as well, particularly in cases where they feel they or someone else is in physical danger. While one-stop law enforcement contacts are very useful for properly directing complaints, nothing can replace the ability of one’s local law enforcement agency to handle imminent threats.|
If a T&S professional evaluating a threat believes it is actionable based on their organization’s criteria, generally the next step is some level of investigation. This will typically involve using internal tools or server access to gather data about the threat and its poster, as well as checking other content uploaded by the poster in case it is also problematic. At this point, the organization may also choose to enact local actions upon the threat or its poster, including warnings, content removal, or blocks or bans. Local action is not a substitute for law enforcement (LE) referral if that is called for; it should be supplementary.
A law enforcement referral will generally contain a statement of the matter at hand, along with as much technical information as possible to allow law enforcement to locate the threat. This technical information may include IP addresses, usernames, ports, copies of the threatening content, and the upload date and time of the content in question. Some organizations also provide their contact information in the report because law enforcement may require additional information. It is common to set up standardized templates for online referrals and a basic script for phone-based referrals to ensure the information in the report is communicated quickly, clearly, and calmly in urgent situations.
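A standardized referral template like the one described can be as simple as a filled-in plain-text form. The field names below are illustrative assumptions, not a prescribed format; an actual template should be developed with legal counsel and, where possible, the receiving agency.

```python
from dataclasses import dataclass
from textwrap import dedent

@dataclass
class LEReferral:
    """Illustrative set of referral fields; real templates vary by
    organization and should be vetted by counsel."""
    summary: str
    username: str
    ip_address: str
    content_url: str
    uploaded_at_utc: str  # ISO 8601 timestamp of the content upload
    contact_email: str    # platform contact for follow-up questions

    def render(self) -> str:
        """Fill a plain-text template so urgent referrals are
        communicated consistently and completely."""
        return dedent(f"""\
            LAW ENFORCEMENT REFERRAL
            Summary: {self.summary}
            Username: {self.username}
            Source IP: {self.ip_address}
            Content location: {self.content_url}
            Uploaded (UTC): {self.uploaded_at_utc}
            Platform contact: {self.contact_email}""")
```

Keeping the fields in a fixed order also doubles as a checklist, so a reviewer working an urgent case is less likely to omit a detail law enforcement will need.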
Where to direct a law enforcement referral will vary by organization and jurisdiction. Some organizations, usually those for whom LE referrals are rare, may locate the exact law enforcement body needed and contact it directly, while many organizations where referral is common have designated law enforcement points of contact they can reach out to, who can then direct the report where it is needed. In either case, an organization may choose to maintain a listing of appropriate LE points of contact for its teams to use in case of referrals.
Some law enforcement and emergency services are decentralized; for example, 911 in the US is managed by thousands of local call centers. The full list of contact details for these centers is restricted for security reasons, so it will likely be available only to organizations reporting a significant number of emergencies. It is often difficult (or nearly impossible) to route information between decentralized services, and this can cause significant delays in both dispatch time and overall response time. As a result, one effective strategy most platforms adopt is to strongly encourage users who are experiencing or reporting an emergency to contact law enforcement and emergency services directly.
In some areas of the world, there may be no functional law enforcement to refer a matter to, or making a law enforcement referral may put someone at greater risk: for instance, where local authorities have been known to retaliate against those who contribute certain types of content, or where suicide and self-harm are criminalized. In these situations, the best option is generally to contact one of the major law enforcement agencies (for instance, the FBI in the United States) and allow that agency to triage and share information with the necessary parties. These agencies often have attachés in areas where there may not be responsive local law enforcement.
Laws and policies governing information release can affect an organization’s policies and decisions about when and how to make law enforcement referrals. Law enforcement disclosures should be made in accordance with local laws or other laws that apply to an organization. It may be necessary to seek approval, either on an individual case basis or on a standing approval basis, from an organization’s legal counsel for non-mandatory law enforcement referral-related disclosures.
It is important to create and uphold internal values and policies regarding what information teams choose to disclose to law enforcement without being compelled; remember that users trust platforms to protect their data to the greatest extent possible. If the policy is to inform law enforcement, establishing a plan covering whether and under what circumstances a platform would notify a user that their data was being or had been released, as well as whether and under what circumstances such reports are publicized in any transparency reporting, is a must.