AI models are often described only in human-readable text, which complicates their adoption when hospitals want to use them. In FAIVOR, we propose a software platform to transparently validate AI models both before and during their use in clinical practice. This is necessary because differences may exist between the original training population and the patient population of the hospital that wants to implement an AI model. Furthermore, hospitals need to perform validations regularly to test whether AI model performance remains stable, as changes within the hospital (e.g., in clinical workflow or the devices used) can influence AI model performance. These validation results give insight into the trustworthiness and robustness of AI models, and can help researchers learn from previous failures and successes when developing new AI models.
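A minimal sketch of the kind of periodic validation check described above: compare a model's discrimination (AUC) on the local hospital cohort against the performance reported for the original training population, and flag when it drifts beyond a tolerance. All function names, thresholds, and data below are illustrative assumptions, not part of the FAIVOR platform itself.

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) statistic."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count how often a positive case is scored above a negative one
    # (ties count as half a win).
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def validate(labels, scores, reported_auc, tolerance=0.05):
    """Return the local AUC and whether it stays within `tolerance`
    of the performance reported for the training population."""
    local = auc(labels, scores)
    return local, (reported_auc - local) <= tolerance

# Illustrative local validation cohort: outcomes and model risk scores.
labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.2, 0.7, 0.6, 0.4, 0.3, 0.8, 0.5]
local_auc, ok = validate(labels, scores, reported_auc=0.80)
print(f"local AUC = {local_auc:.2f}, within tolerance: {ok}")
```

Running such a check on a schedule, and again after workflow or device changes, is one way a hospital could monitor whether reported performance still holds for its own population.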