DiscoverCustomersPartners
Databricks PlatformIntegrations and DataPricingOpen Source
Databricks for IndustriesCross Industry SolutionsMigration & DeploymentSolution Accelerators
Training and CertificationEventsBlog and PodcastsGet HelpDive Deep
CompanyCareersPressSecurity and Trust
Ready to get started?
AI Security Engineer
Paris, France
RDQ224R257
While candidates in the listed location(s) are encouraged for this role, candidates in other locations will be considered.
While candidates in the listed location(s) are encouraged for this role, candidates in other locations will be considered.
The Responsible AI Team is committed to the development and implementation of AI systems that prioritize fairness, transparency, and accountability. Through rigorous testing, analysis, and research, we aim to identify potential vulnerabilities, biases, and risks associated with AI models, thereby fostering the creation of trustworthy and robust AI solutions. By advocating for responsible AI practices, we strive to contribute to a future where AI technologies benefit society while upholding the highest standards of integrity, privacy, and social responsibility.
The impact you will have:
You will be an important member of the Responsible AI Team at Databricks. The duties of the position involve performing security design reviews and red team engagements on new and existing models and AI systems, as well as performing novel research in the AI security field.Â
Candidates will be actively following newly published research in the field and have a strong interest in the future of AI security.
Conduct Red Team operations on live AI systems in development and production environments, employing adversarial strategies and methods to discover vulnerabilities.
Investigate new and emerging threats to ML systems and address them both internally and externally.
Create and refine a set of tools, techniques, dashboards and automated processes that can be used to effectively discover and report vulnerabilities in AI systems.
Guide model and system development securely through the SDLC.
Pioneer best practice and guidance for various facets of ML technology.
Collaborate with internal teams to help facilitate advances in our operational security and monitoring procedures.
What we look for:
The ideal candidate will have a strong background in the following areas:
Machine Learning and Deep Learning concepts with coding experience using libraries like TensorFlow, PyTorch, or SparkNLP.
Expertise on programming languages such as Python or C++ for coding and secure code reviews.
Expertise with adversarial machine learning techniques.
Knowledge of cybersecurity principles and tools for vulnerability discovery and exploitation
Strong problem-solving skills and genuine curiosity to develop novel attack methods against AI systems
Excellent verbal and written communication skills,Â
Strong team player, as the role involves working closely with other security experts and AI researchers
Typically 4+ years of experience or advanced degree (MS/PhD) with 3+ years of experience in the ML domain.
BS or higher in Computer Science, or a related field
Benefits
Private medical insurance
Private life and disability coverage
Pension Plan
Equity awards
Enhanced Parental Leaves
Fitness reimbursement
Annual career development fund
Home office & work headphones reimbursement
Business travel accident insurance
Mental wellness resources
Employee referral bonus