Research Pillars

Human-Centric AI

The Human-Centric AI pillar focuses on developing and evaluating methods and algorithms that improve human-AI interaction in decision support, user experience, explainability, and interpretability, while also addressing societal and ethical impacts. It aims to enhance transparency and human oversight in AI-based decision-making.

Projects from Open Calls
Project: Constraints-Abiding Explainable Reinforcement Learning

Third parties involved: George Vouros, George Papadopoulos, Piyabhum Chaysri – University of Piraeus

This research addresses the limited attention given to reinforcement learning (RL) methods that provide transparency regarding operational constraints, that is, domain-specific requirements that must be maintained during operation. Transparency about these constraints is essential: automation must preserve humans’ awareness of them and their ability to inspect whether they are upheld, thereby assuring operational safety. The project aims to devise an inherently interpretable safe RL method that offers clear visibility into operational constraints. To that end, it studies symbolic representations for constrained RL policy models and designs, implements, and validates an interpretable safe RL-based solution in constrained settings. The project delivers symbolic models that enable an inherently interpretable safe RL method under operational constraints.
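
To make the idea of constraint transparency concrete, the sketch below is a hypothetical illustration, not the project's actual method: the `Constraint` and `ShieldedPolicy` names and the altitude/battery predicates are assumptions. It shows one simple way symbolic, human-readable constraints can wrap an RL policy, with each constraint an inspectable predicate and every blocked action logged so a human can verify the constraints are upheld.

```python
import random
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Constraint:
    name: str                              # human-readable symbolic label
    holds: Callable[[Dict, int], bool]     # predicate over (state, action)

class ShieldedPolicy:
    """Wraps a base policy; blocks actions that violate declared constraints."""
    def __init__(self, base_policy, constraints: List[Constraint], fallback: int):
        self.base_policy = base_policy
        self.constraints = constraints
        self.fallback = fallback           # assumed to always satisfy constraints
        self.log: List[str] = []           # violation log for human inspection

    def act(self, state: Dict) -> int:
        action = self.base_policy(state)
        for c in self.constraints:
            if not c.holds(state, action):
                self.log.append(f"blocked action {action}: violates '{c.name}'")
                return self.fallback       # substitute the safe default action
        return action

# Toy example: a random policy over actions {0: hold, 1: climb, 2: descend}.
constraints = [
    Constraint("min altitude >= 100m",
               lambda s, a: not (a == 2 and s["altitude"] <= 100)),
    Constraint("no climb on low battery",
               lambda s, a: not (a == 1 and s["battery"] < 0.2)),
]
policy = ShieldedPolicy(lambda s: random.choice([0, 1, 2]), constraints, fallback=0)

state = {"altitude": 100, "battery": 0.1}
for _ in range(5):
    policy.act(state)
print("\n".join(policy.log) or "no violations")
```

Because each constraint is a named predicate rather than a learned quantity, the shield's decisions remain auditable: the log states exactly which symbolic requirement an action would have violated.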



Project: Explainability-driven decision support systems for small clinical datasets

Third parties involved: Prof. Dr. Darian M. Onchiș, Dr. Codruța Istin – West University of Timisoara

The project targets two ENFIELD challenges: human-oriented explanations for AI-assisted medical diagnostics and metrics for evaluating explainability and interpretability. It focuses on post-hoc, model-agnostic surrogate methods to make deep learning “black boxes” clinically transparent. Aim 1 is to use explanation insights as feedback to improve models, with emphasis on small medical datasets. Aim 2 is to address instability in techniques like LIME by proposing complementary metrics for stability, clinical usefulness, and interpretability. The expected outcome is actionable, human-centered explanations underpinned by rigorous evaluation.
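
As a rough illustration of the kind of stability metric Aim 2 calls for, the minimal sketch below uses assumed names: `local_surrogate` is a simplified LIME-style explainer written from scratch, not the project's code or the LIME library itself. It reruns the local surrogate under different random seeds and measures how consistently the same top-k features are selected, reporting the mean pairwise Jaccard overlap.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(black_box, x, n_samples=500, sigma=0.5, rng=None):
    """Fit a weighted linear model to the black box in a neighborhood of x."""
    rng = rng or np.random.default_rng()
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.size))   # local perturbations
    y = black_box(Z)                                            # query the black box
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma**2))  # proximity weights
    model = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
    return model.coef_                                          # feature attributions

def stability(black_box, x, k=3, runs=10):
    """Mean pairwise Jaccard overlap of top-k features across repeated runs."""
    tops = []
    for seed in range(runs):
        coefs = local_surrogate(black_box, x, rng=np.random.default_rng(seed))
        tops.append(set(np.argsort(-np.abs(coefs))[:k]))
    pairs = [(a, b) for i, a in enumerate(tops) for b in tops[i + 1:]]
    return np.mean([len(a & b) / len(a | b) for a, b in pairs])

# Toy black box: a logistic score dominated by features 0 and 1.
black_box = lambda Z: 1 / (1 + np.exp(-(3 * Z[:, 0] - 2 * Z[:, 1])))
x = np.zeros(5)
print(f"top-3 stability: {stability(black_box, x):.2f}")   # 1.0 = fully stable
```

A score near 1.0 indicates the explainer repeatedly picks the same features; on small clinical datasets, low scores would flag explanations too unstable to be clinically trusted.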



Project: Multimodal Analysis of Sleep Dynamics with Explainable Transformers

Third parties involved:

This project leverages transformer-based models combined with explainable AI to analyze sleep dynamics in physiological signals collected from wearable devices, in the context of schizophrenia relapse. Expected outcomes include pre-trained models for sleep analysis, new explainability techniques, enhanced libraries for deep learning-based signal analysis, and improved relapse prediction, advancing AI-driven healthcare in Europe.
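
For a sense of how attention can serve as a first, coarse explanation over wearable signals, here is a minimal PyTorch sketch; the window sizes, layer dimensions, and saliency readout are assumptions for illustration, not the project's models or explainability techniques. Windows of a signal are embedded, passed through self-attention, and the attention map is summarized into a per-window saliency score.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

WINDOW, N_WINDOWS, EMBED = 32, 20, 64   # assumed windowing / model size

# Fake one-channel physiological signal, split into N_WINDOWS windows.
signal = torch.randn(1, N_WINDOWS, WINDOW)

embed = nn.Linear(WINDOW, EMBED)                         # per-window embedding
attn = nn.MultiheadAttention(EMBED, num_heads=4, batch_first=True)

tokens = embed(signal)                                   # (1, N_WINDOWS, EMBED)
out, weights = attn(tokens, tokens, tokens,
                    need_weights=True,
                    average_attn_weights=True)           # weights: (1, L, L)

# Coarse attribution per window: total attention each window receives
# across all query positions, averaged over heads.
saliency = weights[0].sum(dim=0)                         # (N_WINDOWS,)
top = torch.topk(saliency, k=3).indices.tolist()
print("most-attended windows:", top)
```

Attention maps alone are a debated explanation signal, so a sketch like this would typically be a starting point to compare against the dedicated explainability techniques the project aims to develop.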