Research Scientist - AI Safety - LLMs
We are currently partnered with a hugely exciting AI Safety startup looking to expand its Research Science team. The business, led by ex-Harvard/Stanford/Berkeley researchers, is building advanced safety tooling for large AI models, specifically Large Language Models (LLMs).
As a Research Scientist, you will apply the latest research in AI Safety to Large Language Models, with the aim of developing a suite of cutting-edge tools for Foundation Model providers (the product is already used by OpenAI, Anthropic, AI21 and more) as well as end users. The position involves a mix of theoretical and applied research, with a particular focus on Synthetic Data Generation, Active Learning and Adversarial Attacks.
Key skills required:
-> PhD or MS in a relevant field (e.g. ML, CS, Applied Mathematics)
-> AI Safety Research experience, ideally in a practical/commercial setting
-> Experience in at least one of: Synthetic Data Generation, Active Learning or Adversarial Attacks
-> Strong Python and general coding skills
Please apply ASAP for more info!