
Anthropic
This content was reproduced from the employer’s website on April 10, 2023. Please visit their website below for the most up-to-date information about this position.
As a Trust and Safety Risk Analyst on the Trust and Safety team, you will monitor our dashboards, review potentially harmful content, and enforce our AI policy standards. This role sits on the Enforcement and Detection sub-team and works closely with the Product Policy team to develop deep expertise in our AI policy standards so that they can be consistently enforced across our suite of products, customers, and users.
IMPORTANT CONTEXT ON THIS ROLE: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.
Your Responsibilities Will Include:
- Monitor user-reported and model-detected issues and conduct investigations to determine appropriate actions
- Work with our customers to help develop their enforcement strategies and processes
- Analyze harmful behavior and identify trends to better understand the threat landscape and help improve our detection methods
- Identify bugs, process limitations, and tooling requirements and work with our PMs to implement solutions
You may be a good fit if you:
- Have 2+ years of experience working in an analytics-heavy or content moderation role
- Understand the challenges and opportunities of operationalizing product policies, including in the content moderation space, and can incorporate this into our enforcement strategy
- Enjoy gathering feedback, analyzing data and creating processes that scale
- Love to think creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems
- Can analyze the benefits and risks of open-ended problem spaces, working both from first-principles and from industry best practices
To apply for this job please visit jobs.lever.co.