Postdoc in Explainable AI with applications to medical data
In this project, the postdoctoral researcher will spend around 70% of their time on an industry-academia project entitled "Safe & Explainable Clinical AI for Orthopaedic Surgical Assessment (SECure)", and around 30% on the explainability of Tuberculosis detection from chest X-ray images.
The work for the first project consists of designing a safe and explainable ML-based system that supports a qualified physician in deciding whether or not surgery is indicated for a given patient. The postdoc will be involved in the write-up of an extension project to be submitted, which, if accepted, would allow them to extend their engagement with Idiap for another year or two. The work for the second project consists of analysing imaging data in search of explainable factors linking Tuberculosis diagnosis with radiological findings. The postdoc is also expected to contribute to follow-up academic projects in this context.
Ideal candidates should:
- hold a Ph.D. in computer science or a related field
- have a background in machine learning, statistics, or applied mathematics, as well as in optimisation, linear algebra, and signal processing
- have strong English writing skills
- have strong programming skills and be familiar with Python (PyTorch-specific knowledge is a plus), shell scripting, and the Linux operating system
- a background in explainability applied to machine learning will be considered a plus
- experience with reproducibility will be considered a plus
- experience with medical applications will be considered a plus
Shortlisted candidates may undergo a series of tests, including technical reading and writing in English and programming (in Python). The appointment is for 1 year (renewable under the conditions described above). The starting date is as soon as possible.
How to apply: Interested candidates are invited to submit a cover letter, a detailed CV, and the names of three references through the Idiap online recruitment system: