Would you like to be part of a well-established startup in the Bay Area, CA, that is looking to hire an AI Trust & Safety Engineer/Scientist to spearhead its efforts in developing safe, ethical, and reliable AI systems? In this role, you will work closely with the product and research teams to ensure that the company's foundation models, including large language models and multimodal systems, adhere to the highest standards of safety, honesty, and helpfulness. Your work will be crucial in shaping the responsible development and deployment of AI technologies that can positively impact millions of lives across India and beyond.
Qualifications:
- M.S. or Ph.D. in Computer Science, Machine Learning, or a related field with a focus on AI safety and ethics.
- 4+ years of experience in AI research, with a strong emphasis on trust and safety aspects of large language models and multimodal systems.
- Demonstrated expertise in implementing alignment and safety techniques for foundation models, such as RLHF (e.g., PPO-based) and DPO.
- Proven track record of building or fine-tuning models with tens to hundreds of billions of parameters.
- Strong background in red teaming methodologies and adversarial testing for AI systems.
- Experience in generating and working with high-quality synthetic data for model fine-tuning.
- In-depth understanding of ethical considerations in AI development and deployment.
- Excellent programming skills, particularly in Python and deep learning frameworks such as PyTorch and TensorFlow.
- Strong publication record in top-tier AI conferences and journals, particularly in areas related to AI safety and ethics.
- Exceptional communication skills with the ability to explain complex technical concepts to both technical and non-technical audiences.
- Experience leading research teams and mentoring junior researchers.
- Familiarity with relevant regulatory frameworks and industry standards for AI ethics and safety.
Please coordinate with Jia for more information on the role and the client.