Trust & Safety Coffee Chats are one-on-one conversations designed to provide mentorship to current and aspiring trust and safety professionals. Hosts are volunteering their time and expertise to offer advice to job seekers and T&S professionals.
How It Works:
Search for a host based on what you want to talk about.
Sign up using the host’s “Schedule a chat” button.
Come prepared with questions and have a specific goal in mind.
Friendly Reminders:
Be respectful of our hosts and their time. If you book time with a host, please show up or cancel the booking in advance if the time no longer works for you.
The TSPA Code of Conduct applies to all resources, spaces, and programming, even if you’re not a TSPA member.
If you’re just getting started in T&S and want to learn more, we recommend reviewing Careers in T&S and the T&S Curriculum before scheduling.
Hosts are not conducting formal interviews, offering referrals, providing product feedback, participating in user research, or acting on behalf of their employers or TSPA. If you schedule time and you’re not a current or aspiring T&S professional, the host reserves the right to cancel the appointment without notice.
Double check the time zone when booking. Our hosts are located all over the globe.
Michael Swenson most recently led the Policy Programs team at Discord, where they oversaw a variety of programs that facilitate direct connections between Discord’s many safety- and policy-focused teams and users, moderators, academics, and industry practitioners. Their team engaged in dialogue, research, and education surrounding platform governance, policy acumen and praxis, and co-creative approaches to facing the platform’s toughest safety challenges. Prior to that, Michael worked in digital marketing and in academia, and has spent many years in online community spaces, helping lead dozens of communities, most recently on Reddit. They studied religion and ethics in modern society at Duke University, where they researched religious and political extremism through the lens of post-colonial-era critical social theory and anthropological approaches to communities of practice. They also spent several years doing computer science work at the Georgia Institute of Technology, where they focused on human-centered design, ethics in AI and ML, and educational technology at scale.
I'm a tech and privacy lawyer with 10+ years of
experience advising on trust and safety (primarily online terms, content, and T&S policies), emerging tech, and data privacy. My work involves providing strategic counsel on legal, policy, risk management, regulatory compliance, and governance issues relating to tech products and services.
I have advised hundreds of tech startups and thousands of founders, worked closely with senior management, led cross-functional teams for clients, and provided transactional advisory for large multi-country technology contracts. At present I’m the founding partner at Saachi Legal, a tech legal firm based in Bengaluru, India.
With specific reference to T&S roles and activities, several of my clients have been tech startups / platforms with varying user base in domains including social media, fintech, edtech, and health tech. I have worked with, advised, and trained internal teams at these organisations drafting and reviewing policies on content, acceptable user behaviour, data, privacy, etc.
In addition, I have a deep interest in emerging tech, data, AI, and cybersecurity, and a keen desire to understand the matters I advise on. Consequently, I hold a variety of technical and other certifications, including in AI, AI governance, and privacy.
I’m also presently a mentor with All Tech is Human, for data privacy and tech policy, and would love to mentor TSPA members as best I can.
Nitesh has been in the Trust & Safety space for
over 10 years, all at Google. He has had the opportunity to work as an IC and as a manager, and has built teams from the ground up. Nitesh has also developed and executed anti-abuse strategies for various products, including Search, G+, Google Drive, and Calendar. His team also works on identifying abuse ranging from Chia mining on Cloud and scaled abuse like spammy notifications and emails to sensitive and egregious content such as terrorist content, sexually explicit content, and hate speech.
Ratnakar Pawar is a Staff Machine Learning Engineer at PlayStation,
where he leads Trust and Safety AI initiatives, building scalable, multilingual AI solutions to detect complex harms such as hate speech, bullying, and child grooming across text, image, and voice communications.
With a decade of experience in applied machine learning, his work also focuses on architecting novel generative AI systems and robust fairness evaluation pipelines to enhance platform integrity and support human moderators. His contributions to the field include patented threat intelligence technologies from his time at IBM and a commitment to advancing industry practices through open-source initiatives.
As a member of TSPA and a Coffee Chats host, Ratnakar is looking forward to discussing career paths in ML safety, the challenges of scaling multimodal abuse detection, implementing responsible AI, and leveraging new technologies to build a safer internet for everyone.
Rolando currently leads the Legal Response Team at Discord. He
has eight-plus years of experience working in Trust & Safety, and his career has focused on legal compliance in relation to law enforcement, child safety, intellectual property, and privacy operations. He has developed processes and teams at companies like Pinterest, TikTok, Snap, and Twitter. His previous background in legal support aided him in building a career in Trust & Safety legal operations, and his passion for preventing real-world harm has allowed him to continue to grow as a leader in this space.
Rosanna has been at YouTube for over two years, and
is a Program Manager for YouTube Trust and Safety, having formerly been a Policy Enforcement Manager tackling hate speech for YouTube. Prior to joining YouTube, she held a series of roles in the third sector, working on antisemitism and racism, particularly online, for almost a decade. She currently works cross-policy with external organizations, managing the Trusted Flagger program for YouTube. She has a Master’s in Criminology, focused on hate speech, from Birkbeck College, University of London, and a Master’s in Near and Middle Eastern Studies from the School of Oriental and African Studies. She is based in London.
Ruby Yuen is a seasoned Senior Product Manager with over
15 years of experience in Trust and Safety, fraud prevention, and Anti-Money Laundering (AML). Her career began in Hong Kong at Goldman Sachs and UBS, where she started as an IT analyst and pivoted to Business Analyst. At HSBC, Ruby rose to Senior Project Manager in Global Banking and Markets, launching a client onboarding department for AML due diligence in Southeast Asia. She then joined Morgan Stanley, managing AML compliance projects that improved regulatory adherence and operational efficiency.
Ruby transitioned to the tech and digital sectors at Starbucks as the Digital Fraud Prevention Senior Program Manager. She led the development and global rollout of anti-fraud solutions, boosting digital sales and customer satisfaction in 15+ countries.
At Mercari US, Ruby focused on Trust and Safety and Growth as a Product Manager. She launched features that increased gross merchandise value and implemented effective payment fraud prevention strategies. Most recently, at Wrapbook, Ruby developed a comprehensive fraud prevention strategy, resulting in high customer satisfaction and minimal fraud losses.
Now, as a career coach, Ruby leverages her extensive experience to help individuals pursue careers in Trust and Safety.
Content Policy Manager, Policy Development & Enforcement, Meta
Rumana is currently a Content Policy Associate Manager at Meta, based
in the Bay Area, California. She has 7+ years of experience in Trust & Safety across Operations, Abuse Specialization, and Policy Development & Enforcement. She has had the opportunity to be both an IC and a manager, working on building teams from the ground up. Having worked in APAC, EMEA, and now the NA region, she has developed a deep understanding of both the global landscape and the region-specific nuances of T&S issues. In her current role, she focuses on building and managing policy development and launch processes, with the goal of continually improving efficiency, legitimacy, and transparency.
Residing in Austin, Texas, Ryan has worked (only ever fully
remote) with a variety of notable tech companies for the past decade. He started his T&S career at Indeed.com as a Search Quality Moderator. He then moved on to Sucuri (eventually acquired by GoDaddy), where he spent five years in the GoDaddy Product Security Group. Afterwards, Ryan joined Open Technologies, an anonymous messaging mobile app, as their Community & Moderation Coordinator. It was at Open that he gained experience not only as a content moderation manager, but also as an interim product manager for Open’s internal moderation tools. Most recently, Ryan has been a wearer of multiple hats at Yik Yak, an anonymous messaging mobile app mainly used by college students. He’s the acting Content Moderation Manager, but the small, lean startup has him doing much more. From internal product development to policy making and everything in between, Ryan is focused on bringing Yik Yak’s vision and mission to the masses while ensuring their users can express themselves within a healthy online community. If Ryan isn’t in Austin, Texas (CST), he can usually be found on the East Coast (EST). He’s always happy to connect with anyone interested in talking trust & safety!
Sanjana is a Senior Policy Specialist on Spotify’s Platform Integrity
team, based in Singapore. She focuses on tackling account and behavioral abuse by investigating bad actor behavior, identifying emerging abuse patterns, and developing policies to address these challenges.
In her 4+ years at Spotify, Sanjana has developed expertise across data science and key Trust & Safety functions, including policy development and enforcement operations. She has worked on a broad range of issues, including child safety, self-harm, elections integrity, and ban evasion. Prior to Spotify, she earned a master’s degree from the Harvard Kennedy School of Government, where she published research on hate speech and harassment in the Asia Pacific region.
Sanjana is passionate about creating safe and positive online experiences for young people, and is excited to support others as they explore this field.
Sarah Nasr is a Trust and Safety Hate Speech Project
Manager working at Meta (Facebook), based in Dublin, Ireland. She’s been at Meta for more than three years now. Sarah initially joined as a Market Specialist for the MENA market, where she focused on risk mitigation, and later progressed to the Project Manager role. Prior to Meta, Sarah worked in the humanitarian sector with NGOs focused on health, development, and education. As a Project Coordinator, she coordinated various programs on access to education for refugees in Lebanon. Her background is in international relations; she has a Master’s degree in International Studies and Diplomacy from the School of Oriental and African Studies. Sarah is fluent in English, French, and Arabic. While some might perceive the move from humanitarian work to tech as a big change, Sarah thinks the two sectors are very similar at their core when it comes to T&S. She notes, “Our mission is to protect users and enable expression, while protecting human rights. We draft policies, ensure operational feasibility, design implementation processes, and manage large projects with a large network of cross-functional stakeholders such as Policy, engineers, and product.” Sarah’s areas of expertise are policy development and implementation, communication, and project management.
Scott is currently a manager on the Risk Intelligence team
at Meta, which he joined in January 2020. Risk Intelligence has a wide remit in the T&S space, ranging from understanding the overall risk landscape facing Meta’s users on their platforms to understanding the motivations and root causes of bad actors and behaviors. His team has experience with virtually every abuse type, including child safety, terrorism, misinformation, election delegitimization, fraud, and harassment. Prior to Meta, Scott worked at the American Marketing Association leading innovation strategy, product management, research, and analytics. Before that, he spent a number of years at the Central Intelligence Agency working on counterterrorism. Scott has worked with many former government employees transitioning to the private sector, so he’s happy to talk to anyone exploring T&S as a career, navigating career changes, managing people, or anything else on your mind. He’s based in the Bay Area.