DIANNA
Deep Insight And Neural Network Analysis (DIANNA) is the only Explainable AI (XAI) library for scientists that supports the Open Neural Network Exchange (ONNX), the de facto standard model format.
What is happening in your machine-learned embedded spaces?
Explainable AI (XAI) has been a hot topic for several years, and machine learning with embedded spaces is equally popular. So where are the methods for XAI in embedded spaces? They do not seem to exist. We have created a method to fill exactly this gap.
What are embedded spaces anyway? Many machine learning methods encode their input, whether text, images, video, time series or tabular data, into a numerical vector space. This encoding can be part of the machine learning method itself, a preprocessing step, or sometimes even the end result. To fully understand our machine learning methods, we need to know how these embedded spaces are structured.
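To make the idea concrete, here is a minimal sketch of the interface every embedding shares: input in, numerical vector out, with distances between vectors carrying meaning. The `toy_embed` function below is a deliberately crude stand-in (assumed for illustration only); real models such as word2vec, CNNs or transformers learn far richer encodings, but they expose the same shape of interface.

```python
import numpy as np

def toy_embed(text):
    """Hypothetical toy embedding: map a string to a fixed-length vector
    by accumulating character codes into 8 bins, then L2-normalize.
    Real learned embeddings are far richer, but the interface is the
    same: input in, numerical vector out."""
    vec = np.zeros(8)
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch)
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def cosine_distance(a, b):
    # Vectors are unit-normalized, so cosine distance is 1 - dot product.
    return 1.0 - float(np.dot(a, b))

a = toy_embed("neural network")
b = toy_embed("neural networks")
print(cosine_distance(a, b))  # a small number: similar inputs land nearby
```

The point of the sketch is only the structure: once any input is a vector, "how far apart are these two data points, and why?" becomes a well-posed question, which is what XAI for embedded spaces must answer.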
We have developed a method, we believe the first, that explains the distance between data points in any embedded space. The method works, but it has some knobs to turn that still need investigating. In this project we experiment with our newly developed algorithm, implement it, make it available to the community, and of course share what we have learned.
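The text does not specify the algorithm itself, so as a rough illustration of the general idea only, here is a hypothetical perturbation-style sketch (explicitly not DIANNA's published method): mask each input feature in turn and measure how much the distance between two embedded points changes. Features whose removal shifts the distance most are scored as most relevant to that distance. The linear `embed` function and all names below are assumptions for the sketch; in practice an ONNX model would play the embedding role.

```python
import numpy as np

def embed(x, W):
    """Stand-in linear embedding: project the input with weight matrix W
    and L2-normalize. Any trained ONNX model could play this role."""
    v = W @ x
    return v / np.linalg.norm(v)

def distance_attribution(x1, x2, W):
    """Hypothetical perturbation sketch (NOT DIANNA's actual algorithm):
    zero out each feature of x1 in turn and record how much the embedded
    distance to x2 changes. Larger score = feature matters more for the
    distance between the two points."""
    base = np.linalg.norm(embed(x1, W) - embed(x2, W))
    scores = np.zeros(len(x1))
    for i in range(len(x1)):
        xp = x1.copy()
        xp[i] = 0.0  # mask one feature
        scores[i] = abs(np.linalg.norm(embed(xp, W) - embed(x2, W)) - base)
    return scores

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))          # toy 6-feature -> 4-dim embedding
x1, x2 = rng.normal(size=6), rng.normal(size=6)
print(distance_attribution(x1, x2, W))  # one relevance score per feature
```

The "knobs" mentioned above would appear in a real version of such a scheme too, e.g. the choice of masking value, perturbation pattern, and distance metric.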
Explainable AI tool for scientists
Explainable AI tool for explaining models that create embeddings.
Experiments on explaining embedded spaces and multimodal models.