This content was reproduced from the employer’s website on January 24, 2022. Please visit their website below for the most up-to-date information about this position.
About the Team
Trust and Safety is at the foundation of OpenAI’s mission. OpenAI is pushing artificial intelligence to unprecedented scale. The Trust and Safety Operations team builds the tools and processes to protect the information ecosystem from the harms that arise from abuse of OpenAI’s technologies.
As part of the Security organization, the Trust and Safety team works to protect OpenAI’s technology, people, products, and customers.
About the Role
As an analyst on the Trust and Safety Operations team, you will be responsible for discovering and mitigating abuse of OpenAI’s products. This position consists of equal parts analysis work and capability improvement. This is an operations role and will require participation in an on-call rotation.
This is a great opportunity for candidates seeking to support the development of cutting-edge artificial intelligence.
In this role, you will:
- Detect and respond to violations of product policies
- Produce threat intelligence about product fraud and abuse vectors
- Work with researchers and policy makers to improve the alignment of OpenAI models
- Prototype and build tools to improve analyst workflows
You might thrive in this role if you:
- Can use scripting languages (Python preferred) to write programs to solve problems.
- Have experience analyzing logs with tools such as Splunk, Humio, Datadog, or Jupyter notebooks
- Have a background in computer science, data analysis, statistics, math, or a related field
- Have a pragmatic approach to being on an operations team and can get in the weeds to get stuff done
- Are comfortable communicating complex technical concepts to leadership and other stakeholders
- Bonus if you have experience with GPT-3, cloud infrastructure (Terraform, Kubernetes), and/or incident response
To apply for this job, please visit boards.greenhouse.io.