Explainable Embeddings

What is happening in your machine-learned embedded spaces?

What elements of the bee make it closer (blue) to the image of the fly?

Explainable AI (XAI) has been a hot topic for a couple of years now, and machine learning with embedded spaces is just as hot. So where are the methods for XAI in embedded spaces? They do not seem to exist. We have created exactly such a method.

What are embedded spaces anyway? Many machine learning methods encode their input, whether text, images, video, time series or tabular data, into a numerical vector space. This encoding can be part of the machine learning method itself, a preprocessing step, or sometimes even the end result. To fully understand our machine learning methods, we need to know how these embedded spaces are structured.
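As a minimal sketch of what this means in practice (assuming the sentence-transformers package and a public MiniLM model, neither of which this project prescribes), an encoder maps raw inputs to vectors, and distances between those vectors can then be measured:

```python
# Minimal sketch: a pretrained encoder maps raw inputs to points in a
# numerical vector space, where distances between inputs can be measured.
# The model name is just a common public example, not the project's choice.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any encoder would do

sentences = ["a bee on a flower", "a fly on a window", "a quarterly report"]
vectors = encoder.encode(sentences)  # shape: (3, embedding_dim)

# Euclidean distances between the embedded points
for i in range(len(sentences)):
    for j in range(i + 1, len(sentences)):
        d = np.linalg.norm(vectors[i] - vectors[j])
        print(f"d({sentences[i]!r}, {sentences[j]!r}) = {d:.3f}")
```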

We have developed a method, we believe the first, that explains distances between data points in any embedded space. The method works, but it has some knobs to turn that still need investigating. In this project we experiment with our newly developed algorithm, implement it, make it available to the community, and of course share what we learn.
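This page does not spell out the algorithm itself. Purely as an illustration of the kind of question it answers, here is a naive occlusion-based toy (entirely hypothetical, not the project's method): mask one feature of an input at a time, re-embed it, and see how the distance to a reference point changes.

```python
# Hypothetical toy, NOT the project's algorithm: score each feature of x by
# how much masking it changes the embedded distance to a reference point y.
import numpy as np

def explain_distance(x, y, embed, baseline=0.0):
    """Per-feature contribution to dist(embed(x), embed(y)) via occlusion."""
    base = np.linalg.norm(embed(x) - embed(y))
    scores = np.empty(len(x))
    for i in range(len(x)):
        x_masked = x.copy()
        x_masked[i] = baseline          # occlude one feature
        d = np.linalg.norm(embed(x_masked) - embed(y))
        scores[i] = d - base            # > 0: feature pulled x closer to y
    return scores

# Toy embedding: a fixed random linear map stands in for a trained encoder.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 8))
embed = lambda v: v @ W

x, y = rng.normal(size=5), rng.normal(size=5)
print(explain_distance(x, y, embed))
```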

Team

Related projects

DIANNA - Deep Insight And Neural Network Analysis

Explainable AI tool for scientists

Related software

DIANNA

DIANNA (Deep Insight And Neural Network Analysis) is the only Explainable AI (XAI) library for scientists supporting the Open Neural Network Exchange (ONNX), the de facto standard model format.
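ONNX models can be run independently of the framework that trained them. A minimal sketch using the onnxruntime package (the file name and input shape are placeholders, not part of DIANNA's API):

```python
# Minimal sketch of running an ONNX model with onnxruntime.
# "model.onnx" and the input shape are placeholders for your own model.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name

x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example input
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```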
