Explainable Embeddings

What is happening in your machine-learned embedded spaces?

Figure: What elements of the bee make it closer (blue) to the image of the fly?

Explainable AI (XAI) has been a hot topic for several years, and machine learning with embedded spaces is just as popular. So where are the methods for XAI in embedded spaces? They do not seem to exist. We have created exactly such a method.

What are embedded spaces anyway? Many machine learning methods encode their input, whether text, images, video, time series or tabular data, into a numerical vector space. This encoding can be part of the machine learning method itself, a preprocessing step, or sometimes even the end result. To fully understand such methods, we need to know how these embedded spaces are structured.
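To make this concrete, here is a minimal sketch in Python. The `embed` function is a stand-in for any trained encoder (a toy linear projection here, not part of any specific library); it maps an input to a point in the embedded space, where a distance such as cosine distance expresses similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((32 * 32 * 3, 128))  # toy "trained" encoder weights

def embed(x: np.ndarray) -> np.ndarray:
    """Placeholder encoder: maps a 32x32 RGB input to a point in a
    128-dimensional embedded space (stands in for any trained model)."""
    return x.ravel() @ W

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Distance between two points in the embedded space:
    0 means identical direction, larger means less similar."""
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

bee = rng.random((32, 32, 3))  # stand-ins for the bee and fly images
fly = rng.random((32, 32, 3))
print(cosine_distance(embed(bee), embed(fly)))
```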

We have developed a method, we believe the first, that explains the distance between data points in any embedded space. The method works, but it has some knobs to turn that still need investigating. In this project we experiment with our newly developed algorithm, implement it, make it available to the community, and of course share what we learn.
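To illustrate the idea, here is a hedged sketch of one plausible approach: RISE-style random occlusion. This is an assumption about the general technique, not the project's exact algorithm (see the Distance explainer software below). It reuses the hypothetical `embed` and `cosine_distance` helpers from the sketch above: occlude random pixels of one input, re-embed it, and credit the occluded pixels with the resulting shift in distance to the other input's embedding.

```python
def explain_distance(x, reference, n_masks=500, keep_prob=0.5, seed=0):
    """Attribute the distance between embed(x) and embed(reference) to
    regions of x by random occlusion (a RISE-style sketch)."""
    rng = np.random.default_rng(seed)
    ref_emb = embed(reference)
    base = cosine_distance(embed(x), ref_emb)
    saliency = np.zeros(x.shape[:2])
    counts = np.zeros(x.shape[:2])
    for _ in range(n_masks):
        keep = rng.random(x.shape[:2]) < keep_prob   # True = pixel kept
        masked = x * keep[..., None]                 # occluded copy of x
        shift = cosine_distance(embed(masked), ref_emb) - base
        saliency += shift * ~keep  # credit occluded pixels for the shift
        counts += ~keep
    return saliency / np.maximum(counts, 1)  # mean distance shift per pixel

# High values mark pixels whose removal pushes the embeddings apart,
# i.e. the elements of the bee that make it closer to the fly.
heatmap = explain_distance(bee, fly)
```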

Participating organisations

Netherlands eScience Center

Team

Related projects

DIANNA - Deep Insight and Neural Networks Analysis

Explainable AI tool for scientists

Related software

DIANNA

Deep Insight And Neural Network Analysis (DIANNA) is the only Explainable AI (XAI) library for scientists supporting the Open Neural Network Exchange (ONNX), the de facto standard model format.

Distance explainer

Explainable AI tool for explaining models that create embeddings.

Explainable embeddings

Experiments on explaining embedded spaces and multimodal models.
