Trust & Safety Operations Manager

  • People Management
  • San Francisco, CA, USA

Website OpenAI

This content was reproduced from the employer’s website on May 20, 2022. Please visit their website below for the most up-to-date information about this position.

About the Team

Trust and Safety is at the foundation of OpenAI’s mission. The goal of the Applied AI team is to turn OpenAI’s technology into useful products. We see this as a path towards safe and broadly beneficial AGI, by gaining practical experience in deploying these technologies safely and making them easy to build with. The Trust and Safety Operations team builds the processes and capabilities to prevent misuse and abuse of OpenAI’s technologies.

In summer 2020, we introduced GPT-3 as the first product on the OpenAI API, allowing developers to integrate its ability to understand and generate natural language into their products. The MIT Technology Review listed GPT-3 as one of its 10 Breakthrough Technologies of the past year (alongside mRNA vaccines!). In the summer of 2021, we launched Copilot, powered by our Codex model, in partnership with GitHub.

About the Role

As a Trust and Safety Operations Manager, you will be responsible for scaling enforcement of OpenAI’s product policies across its products. This position consists of equal parts analysis work and capability improvement.

This is a great opportunity for candidates seeking to support the development of cutting-edge artificial intelligence.

In this role, you will:

  • Build and manage a team to scale content moderation operations and quality assurance across all products, notably OpenAI Labs and the OpenAI API
  • Build and manage a team to run production application reviews and periodic audits of developers on the OpenAI API
  • Manage ongoing relationships with external Trust & Safety Labeling vendors
  • Collaborate with product engineering, security, legal, and policy to keep our platform and the broader information ecosystem safe
  • Work with the proactive Platform Abuse team to build routine processes for handling newly emerging areas of misuse
  • Work with researchers and policy makers to improve the alignment of OpenAI models through structured data about policy violations
  • Develop and adopt operational metrics to track the organization’s performance against its objectives

You might thrive in this role if you:

  • Are excited about building something from the ground up in an emerging field
  • Are pragmatic in your approach to policy and operations work, and can balance both long-term thinking and in-the-weeds execution to achieve program objectives
  • Are comfortable communicating complex concepts to leadership and other stakeholders
  • Thrive under pressure, and appreciate the ambiguity that comes with incident response
  • Have a proven track record of shipping products or product policy and delivering results
  • Have strong user empathy, and a passion for finding the right balance between enabling developers to experiment and preventing misuse
  • Have a background in computer science, data analysis, statistics, math, or a related field
  • Bonus if you have experience with GPT-3 and/or incident response

Note that this role involves grappling with questions around sensitive uses of OpenAI’s technology, and will at times require engaging with erotic, violent, or otherwise disturbing material. This role may also require participation in an on-call rotation or resolving urgent incidents outside of normal work hours.

To apply for this job please visit boards.greenhouse.io.