About the Team
Trust and Safety is at the foundation of OpenAI’s mission. The team is part of OpenAI’s broader Applied AI group, which is charged with turning OpenAI’s advanced AI models into useful products. We see this as a path toward safe and broadly beneficial AGI: by deploying these technologies, we gain practical experience in making them safe and easy for developers and customers to use.
Within the Applied AI group, the Trust and Safety team protects OpenAI’s technologies from abuse. We develop tools and processes to detect, understand, and mitigate large-scale misuse. We’re a small, focused team that cares deeply about safely enabling users to build useful things with our products.
In the summer of 2020, we introduced GPT-3 as the first product on the OpenAI API, allowing developers to integrate its ability to understand and generate natural language into their products. MIT Technology Review listed GPT-3 as one of its 10 Breakthrough Technologies of the past year (alongside mRNA vaccines!). In the summer of 2021, we launched Copilot, powered by our Codex models, in partnership with GitHub.
About the Role
As a technical analyst on the Trust and Safety team, you will be responsible for developing novel detection techniques to discover and mitigate abuse of OpenAI’s technologies. The Platform Abuse subteam specializes in detecting new threat vectors and scaling our coverage using state-of-the-art techniques. This is an operations role based in our San Francisco office; it requires participating in an on-call rotation and resolving urgent incidents outside of normal work hours.
In the interest of transparency and trust, we’d like to note that this role involves grappling with sensitive uses of OpenAI’s technology, including, at times, sexual, violent, or otherwise disturbing material.
In this role, you will:
- Detect, respond to, and escalate platform abuse incidents
- Develop new ways to scale our detection coverage, especially by using state-of-the-art large language models and embeddings to improve and automate detection (see the sketch after this list)
- Improve our detection and response processes
- Collaborate with engineering, policy, and research teams to improve our tooling and understanding of abusive content
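One responsibility above mentions using embeddings to scale detection coverage. As a minimal, purely illustrative sketch of that general idea (not a description of OpenAI’s actual systems), the snippet below flags new content by cosine similarity to embeddings of known-abusive examples; the `embed` function, the example inputs, and the threshold are all hypothetical placeholders.

```python
# Illustrative sketch only: flag content whose embedding is close to
# known-abusive examples. `embed` is a placeholder for an embedding model.
import numpy as np


def embed(texts: list[str]) -> np.ndarray:
    """Placeholder: return one embedding vector per input text."""
    raise NotImplementedError("plug in an embedding model here")


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Normalize each row, then a single matrix product yields all pairwise similarities.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T


def flag_for_review(new_texts: list[str], known_abusive: list[str], threshold: float = 0.85) -> list[int]:
    """Return indices of new_texts that are similar to any known-abusive example."""
    sims = cosine_similarity(embed(new_texts), embed(known_abusive))
    return [i for i, row in enumerate(sims) if row.max() >= threshold]
```

In practice, flagged items would feed into the detection and response processes described above rather than being actioned automatically; the similarity threshold is a tunable assumption, not a recommended value.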
You might thrive in this role if you:
- Have experience developing innovative detection solutions and conducting open-ended research to solve real-world problems
- Have experience with large language models and with deploying detection solutions at scale
- Have experience in a technical analysis role or with log analysis tools like Splunk or Humio
- Have experience on a trust and safety team and/or have worked closely with policy, content moderation, or security teams