Technical Safety Analyst

  • Individual Contributor
  • TSPA Members
  • San Francisco, CA, US


This content was reproduced from the employer’s website on February 22, 2023. Please visit their website below for the most up-to-date information about this position.

About the Role

The Platform Abuse team protects OpenAI’s products from abuse. As a technical safety analyst, you will be responsible for discovering and mitigating new types of misuse, and scaling our detection techniques and processes. Platform Abuse is an especially exciting area since we believe most of the ways our technologies will be abused haven’t even been invented yet.

Please note that this role involves working with sensitive content, including sexual, violent, or otherwise-disturbing material. This is an operations role that requires participation in an on-call rotation that resolves urgent incidents, sometimes outside of normal work hours.

This role is based in our San Francisco HQ. We offer relocation assistance to new employees.

In this role, you will:

  • Discover, triage, investigate, and report on abusive behaviors on our platform
  • Respond to real-time safety incidents by stabilizing the problem and rolling out mitigations
  • Develop new ways to scale and automate our detection coverage
  • Collaborate with engineering, policy, and research teams to enhance our tooling and understanding of abusive content

You might thrive in this role if you:

  • Have a pragmatic approach to being on an operations and incident response team and can get in the weeds to get stuff done
  • Have experience on a highly technical trust and safety team and/or have worked closely with policy, content moderation, or security teams
  • Have experience in a technical detection or analysis role, or with log analysis tools like Splunk or Humio
  • Can use scripting languages (Python preferred) to programmatically explore large datasets and generate actionable insights to solve problems
  • Bonus if you have experience with fraud, anti-automation, or API abuse
  • Bonus if you have experience deploying scaled detection solutions using large language models, embeddings, or fine-tuning
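To give a feel for the kind of programmatic data exploration the role describes, here is a minimal sketch of a volume-based abuse heuristic in Python. The log schema (dicts with a "user" and "endpoint" key), the function name, and the threshold are all hypothetical illustrations, not details from the posting.

```python
from collections import Counter

def flag_high_volume_users(records, threshold):
    """Return users whose request count exceeds `threshold` --
    a crude stand-in for a real abuse-detection heuristic."""
    counts = Counter(r["user"] for r in records)
    return {user for user, n in counts.items() if n > threshold}

# Toy log sample: one user hammers a single endpoint.
logs = (
    [{"user": "u1", "endpoint": "/v1/completions"}] * 50
    + [{"user": "u2", "endpoint": "/v1/completions"}] * 3
)
print(flag_high_volume_users(logs, threshold=10))  # flags "u1" only
```

Real detection work would layer many such signals and feed them into tooling shared with the engineering and policy teams, but the core loop — slice the logs, count, flag outliers — looks much like this.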

To apply for this job, please visit the employer's website.