ISOVIS
Information and Software Visualization

Explainable AI/ML

Research in Machine Learning (ML) and Artificial Intelligence (AI) has become very popular in recent years, with many types of models proposed to comprehend and predict patterns and trends in data originating from different domains. As these models become increasingly complex, it also becomes harder for users to assess and trust their results, since their internal operations are largely hidden inside black boxes. The explanation of ML/AI models is currently a hot topic in the Information Visualization (InfoVis) community, with results showing that providing insights into ML models can lead to better predictions and improve the trustworthiness of the results.

Our current focus in this research area is (1) methodological, providing surveys and guidance through qualitative and quantitative analyses of the field's literature and research community, and (2) technical, developing Visual Analytics (VA) methods to open the black boxes of various ML/AI models. In the latter case, our research encompasses both unsupervised dimensionality reduction (DR) models and supervised learning models, such as single classifiers or multiple classifier systems.
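As a rough illustration of the kind of pipeline such VA methods sit on top of, the sketch below combines a model-agnostic explanation of a supervised classifier (permutation feature importance for a random forest) with an unsupervised DR projection (t-SNE) whose 2D output could feed a visual front end. The dataset, model, parameters, and library (scikit-learn) are assumptions chosen for demonstration only; this is not one of the group's tools.

    # Minimal, illustrative sketch (assumed setup, not ISOVIS tooling):
    # explain a black-box classifier and project the data for visual inspection.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.manifold import TSNE
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Supervised model whose internals are not directly interpretable.
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    # Model-agnostic explanation: how much does shuffling each feature hurt accuracy?
    imp = permutation_importance(clf, X_test, y_test, n_repeats=20, random_state=0)
    for name, mean in zip(load_iris().feature_names, imp.importances_mean):
        print(f"{name}: {mean:.3f}")

    # Unsupervised DR: embed the data in 2D so predictions can be inspected visually,
    # e.g., as a scatter plot colored by predicted class in a VA front end.
    embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
    print(embedding.shape)  # (150, 2)

In a VA setting, the importance scores and the 2D embedding would typically be linked in interactive views rather than printed, so that users can relate individual predictions to the features driving them.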

Contact Persons:

  • Prof. Dr. Andreas Kerren
  • Dr. Rafael M. Martins

Relevant Publications:

  • Publications in DiVA

Relevant Projects:

  • VAESS

Relevant Tools:

  • TrustMLVis Browser
  • FeatureEnVi
  • VisEvol
  • StackGenVis
  • t-viSNE

Interesting URLs:

  • Doctoral project: Visual analytics for explainable and trustworthy machine learning
  • Workshop on Visualization for AI Explainability
  • Workshop on TRust and EXpertise in Visual Analytics

All content copyright © 2007–2025 ISOVIS Group, all rights reserved.