Trust & Safety Operations Analyst, Platform Abuse

  • Individual Contributor
  • San Francisco, CA, USA

Website: OpenAI

This content was reproduced from the employer’s website on June 13, 2022. Please visit their website below for the most up-to-date information about this position.

About the Team

Trust and Safety is at the foundation of OpenAI’s mission. The goal of the Applied AI team is to turn OpenAI’s technology into useful products. We see this as a path towards safe and broadly beneficial AGI, by gaining practical experience in deploying these technologies safely and making them easy to build with.

Within the Applied AI team, the Trust and Safety team protects OpenAI’s technologies from abuse. We build tools and processes to detect, understand, and mitigate misuse at scale. We’re a small, focused team that cares deeply about safely enabling users to build useful things with our products.

In summer 2020, we introduced GPT-3 as the first product on the OpenAI API, allowing developers to integrate its ability to understand and generate natural language into their products. MIT Technology Review listed GPT-3 as one of its 10 Breakthrough Technologies of the past year (alongside mRNA vaccines!). In summer 2021, we launched Copilot, powered by our Codex models, in partnership with GitHub.

About the Role

As an analyst on the Trust and Safety team, you will be responsible for discovering and mitigating abuse of OpenAI’s technologies. The Platform Abuse subteam specializes in detecting new threat vectors, including new categories of harmful use cases and scaled abuse. The position is split evenly between analysis work and improving our detection and response capabilities. This is an operations role based in our San Francisco office; it requires participating in an on-call rotation and resolving urgent incidents outside of normal work hours.

In this role, you will:

  • Detect, respond to, and escalate platform abuse incidents
  • Improve our detection and response processes
  • Collaborate with engineering, policy, and research teams to improve our tooling and understanding of abusive content

You might thrive in this role if you:

  • Take a pragmatic approach to operations and incident response work, and are willing to get into the weeds to get things done
  • Have experience on a trust and safety team and/or have worked closely with policy, content moderation, or security teams
  • Have experience in a technical analysis role or with log analysis tools such as Splunk or Humio
  • Bonus if you have experience with large language models and/or can use a scripting language (Python preferred) to write programs that solve problems

Note that this role involves grappling with sensitive uses of OpenAI’s technology, and will at times require engaging with sexual, violent, or otherwise disturbing material.

To apply for this job, please visit boards.greenhouse.io.