FAIVOR

FAIR AI validation and quality control

AI models are often described only in human-readable text, which makes it difficult for hospitals to adopt them. In FAIVOR, we propose a software platform to transparently validate AI models both before and during their use in clinical practice. This is necessary because differences may exist between the original training population and the patient population of the hospital that wants to implement an AI model. Furthermore, hospitals need to perform validations regularly to test whether AI model performance remains consistent, as changes within the hospital (clinical workflow, devices used) can influence it. These validation results give insight into the trustworthiness and robustness of AI models, and can help researchers learn from previous failures and successes when developing new AI models.
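To make the idea concrete, here is a minimal sketch in Python of such a local validation check: score an already-trained model on the hospital's own patient cohort and compare the result with the performance reported for the original training population. This is an illustration only, not FAIVOR's actual interface; the function name, the choice of AUC as the metric, and the reported_auc and tolerance parameters are all assumptions.

    # Hypothetical sketch of a local validation check (not FAIVOR's API).
    from sklearn.metrics import roc_auc_score

    def validate_locally(model, X_local, y_local, reported_auc, tolerance=0.05):
        """Compare the model's AUC on the local hospital cohort with the
        AUC reported for the original training population."""
        # Probability of the positive class for each local patient.
        local_scores = model.predict_proba(X_local)[:, 1]
        local_auc = roc_auc_score(y_local, local_scores)
        # Flag a drop in performance larger than the allowed tolerance.
        return {
            "local_auc": local_auc,
            "reported_auc": reported_auc,
            "acceptable": (reported_auc - local_auc) <= tolerance,
        }

Repeating a check like this periodically, for example after a change in clinical workflow or in the devices used, is what the regular validations described above amount to in practice.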

Participating organisations

Maastricht University
Netherlands eScience Center
Life Sciences
Social Sciences & Humanities

Team

Johan van Soest
Lead Applicant
Maastricht University
Sonja Georgievska
Lead RSE
Netherlands eScience Center

Related projects

FAIR is as FAIR does

Integrating data publishing principles in scientific workflows
