The Content Safety Team, which is responsible for ensuring a safe environment for our customers, is seeking a Senior AI Scientist. The team develops content moderation solutions for user-generated and AI-generated content across all our products and utilizes advanced GenAI tools to build capabilities such as automatic labeling.
Responsibilities:
Develop and refine content moderation models to detect and filter abusive content across various products, content types, and modalities.
Work extensively with NLP algorithms, focusing on fine-tuning transformers and large language models (LLMs) to enhance accuracy and efficiency.
Explore and integrate advanced technologies such as Retrieval-Augmented Generation (RAG) and agentic AI to assist in the automatic labeling and classification of data.
Collaborate with a multi-disciplinary team including machine learning engineers, AI scientists, and policy experts to ensure models align with ethical standards and company policies.
Stay current with the latest research in AI and machine learning, and incorporate cutting-edge techniques into content moderation strategies.
Prepare and maintain documentation on model development, configurations, and performance metrics for both technical and non-technical audiences.
Mentor and guide junior AI scientists, fostering a collaborative and productive team environment.
Requirements: MSc or PhD in a computational or statistical field (Computer Science, Statistics, Applied Math, Econometrics, Operations Research), or equivalent experience.
3+ years of data science experience.
Experience working with at least one major cloud platform: AWS, GCP, or Azure.
Well versed in data science languages, tools, and frameworks, including data processing platforms and distributed computing systems (e.g., Python, R, SQL, scikit-learn, NumPy, Pandas, TensorFlow, Keras).
Familiarity with LLM-based applications - an advantage.
Familiarity with LLM security and safety aspects - an advantage.
Resourceful, motivated, and organized.
This position is open to all candidates.