Child Safety Analyst, Responsible AI Testing
Minimum qualifications:
- Bachelor's degree in Political Science, Communications, Computer Science, Data Science, History, International Affairs, Social Work, Child Development, or a related discipline, or equivalent professional experience.
- 4 years of experience in Trust and Safety Operations, data analytics, policy, cybersecurity, product policy, privacy and security, legal, compliance, risk management, intelligence, content moderation, AI testing, or another relevant environment.
Preferred qualifications:
- Master's degree.
- Experience with machine learning.
- Experience with SQL, data collection/transformation, building dashboards and visualizations, or a scripting/programming language (e.g., Python).
- Strong understanding of AI systems, machine learning, and their potential risks.
- Excellent communication and presentation skills (written and verbal) and the ability to influence cross-functionally at various levels.
About the job
Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.
As an Analyst on the Trust and Safety Responsible AI Child Safety Testing Team, you will be an expert in structured and unstructured safety pre-launch testing for Google's GenAI models and products, focused particularly on the broad spectrum of online child abuse and exploitation risks. You will partner closely with the technical abuse-fighting experts in Trust and Safety to understand launch requirements and to develop and implement testing protocols. You will leverage robust data analysis to provide quantitative and qualitative actionable insights on potential risks for mitigation by Trust and Safety and product teams. You will manage many stakeholders through effective relationship building, bringing an ordered, streamlined, and structured approach. You will demonstrate analytical thinking through data-driven decision making and technical know-how, and execute quickly, ensuring that Google's AI products do not generate unsafe content involving children.
The US base salary range for this full-time position is $110,000-$157,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Responsibilities
- Own and lead structured pre-launch child safety testing, end to end, for Google’s most prominent GenAI products.
- Define and execute prompt generation strategies to develop a set of prompts that will sufficiently test product compliance with standards, working with RAI Testing Sustainability and Data Science to leverage and evolve best practices. This may entail leveraging LLM-based prompt generation tools and/or defining and providing clear instructions to vendor teams.
- Collaborate with product teams to scrape responses. This may entail providing consultation for how to develop a scaled scraping solution, getting access to the model/UI and performing scrapes, and/or defining and providing clear instructions to vendor teams.
- Execute prompt/response rating against defined standards. This may entail providing clear instructions to vendor teams, clarifying gray area cases, and/or providing quality calibrations; see the illustrative sketch after this list.
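For illustration only, here is a minimal Python sketch of what a structured prompt-generation, response-collection, and rating loop like the one described above might look like. Every name in it (generate_prompts, query_model, rate_response, RatedResponse) is a hypothetical placeholder, not actual Google tooling or part of the role.

```python
# Minimal illustrative sketch of a structured pre-launch testing loop.
# All functions here are hypothetical placeholders for the prompt-generation,
# scraping, and rating steps described in the responsibilities above.
from dataclasses import dataclass


@dataclass
class RatedResponse:
    prompt: str
    response: str
    violates_standard: bool


def generate_prompts(templates: list[str], variations: list[str]) -> list[str]:
    """Expand a small set of templates into a structured prompt set."""
    return [t.format(v) for t in templates for v in variations]


def query_model(prompt: str) -> str:
    """Placeholder for scraping a response from the model under test."""
    return f"<model response to: {prompt}>"


def rate_response(prompt: str, response: str) -> RatedResponse:
    """Placeholder for rating a prompt/response pair against defined standards."""
    return RatedResponse(prompt, response, violates_standard=False)


def run_structured_test(templates: list[str], variations: list[str]):
    """Generate prompts, collect responses, rate them, and summarize results."""
    prompts = generate_prompts(templates, variations)
    ratings = [rate_response(p, query_model(p)) for p in prompts]
    violation_rate = sum(r.violates_standard for r in ratings) / len(ratings)
    return ratings, violation_rate
```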
Information collected and processed as part of your Google Careers profile, and any job applications you choose to submit, is subject to Google's Applicant and Candidate Privacy Policy.
Google is proud to be an equal opportunity and affirmative action employer. We are committed to building a workforce that is representative of the users we serve, creating a culture of belonging, and providing an equal employment opportunity regardless of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), expecting or parents-to-be, criminal histories consistent with legal requirements, or any other basis protected by law. See also Google's EEO Policy, Know your rights: workplace discrimination is illegal, Belonging at Google, and How we hire.
If you have a need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Google is a global company and, in order to facilitate efficient collaboration and communication globally, English proficiency is a requirement for all roles unless stated otherwise in the job posting.
To all recruitment agencies: Google does not accept agency resumes. Please do not forward resumes to our jobs alias, Google employees, or any other organization location. Google is not responsible for any fees related to unsolicited resumes.