Research Pillars

Adaptive AI

The Adaptive AI pillar focuses on enhancing the adaptability, efficiency, and reliability of AI systems in dynamic real-world environments. It develops and assesses methods and algorithms for adaptive AI at the edge, as well as for robustness and trustworthiness in such environments, drawing inspiration from the brain to study how AI systems adapt.

Projects from Open Calls




Project: Improving Edge AI Performance by Federation and Adaptive Model Selection

Third parties involved: Ismail Ari, Habtamu Abie, Sandeep Pirbhulal – Ozyegin University

Running DNN models at the edge enables critical field applications, including object classification, sensing, and control. However, embedded and mobile devices are constrained in computation, energy, and communication. To improve operational efficiency, the project proposes to demonstrate a federated learning (FL) scenario with adaptive, online model selection: starting from a set of pretrained DNN models in a heterogeneous setting (Raspberry Pi 4/5s, Jetsons, Arduinos with different sensors or motors), monitoring model and device performance and costs in terms of accuracy, training/inference time, and energy during FL, and switching models online among devices. We expect to reach optimal HW/SW configurations.
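The adaptive model-selection idea can be sketched in a few lines: each device tracks per-model metrics during federated rounds and switches to whichever model scores best under its resource constraints. The class names, metrics, and weights below are illustrative assumptions, not the project's actual code.

```python
# Hypothetical sketch: score pretrained models per device by trading off
# accuracy against latency and energy, then pick the best for the next round.
from dataclasses import dataclass

@dataclass
class ModelStats:
    name: str
    accuracy: float        # fraction correct on local validation data
    inference_ms: float    # average inference latency in milliseconds
    energy_mj: float       # average energy per inference in millijoules

def score(m: ModelStats, w_acc=1.0, w_lat=0.002, w_eng=0.001) -> float:
    """Higher is better: reward accuracy, penalise latency and energy."""
    return w_acc * m.accuracy - w_lat * m.inference_ms - w_eng * m.energy_mj

def select_model(stats: list[ModelStats]) -> str:
    """Pick the model a device should run in the next federated round."""
    return max(stats, key=score).name

# Example: a small model wins on a constrained device despite lower accuracy.
candidates = [
    ModelStats("mobilenet_v3", accuracy=0.88, inference_ms=40.0, energy_mj=120.0),
    ModelStats("resnet50",     accuracy=0.92, inference_ms=300.0, energy_mj=900.0),
]
print(select_model(candidates))  # → mobilenet_v3
```

In a heterogeneous fleet, the weights would differ per device class (e.g., a battery-powered Arduino weighs energy far more heavily than a mains-powered Jetson), which is what makes the selection adaptive.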



Project: SHACKLE: SHape-based pAtterns for Constraining KnowLedge graph Embeddings

Third parties involved: Pierre Monnin – Université Côte d’Azur

The project SHACKLE advances trustworthy AI by combining sub-symbolic methods with the validation schemata of knowledge graphs to reduce errors, increase explainability, and enforce trust. Building on ontologies and shape constraints for soundness and completeness, it explores a currently under-studied path toward neurosymbolic integration.   
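To illustrate the general shape-constraint idea (our own toy example, not SHACKLE's implementation): a shape declares which properties a node of a given class must have, so predicted links that violate a shape can be flagged before entering the knowledge graph.

```python
# Toy knowledge graph as a set of (subject, predicate, object) triples.
triples = {
    ("alice", "type", "Person"),
    ("alice", "birthDate", "1990-01-01"),
    ("bob", "type", "Person"),            # bob is missing a birthDate
}

# Minimal SHACL-like shape: every Person must have a birthDate.
shapes = {"Person": {"birthDate"}}

def violations(triples, shapes):
    """Return (node, missing_property) pairs that break a shape."""
    out = []
    for s, p, o in triples:
        if p == "type" and o in shapes:
            props = {pp for ss, pp, oo in triples if ss == s}
            out.extend((s, req) for req in shapes[o] if req not in props)
    return out

print(violations(triples, shapes))  # → [('bob', 'birthDate')]
```

Coupling such validation reports with sub-symbolic link prediction is one way to constrain embeddings toward sound, complete graphs.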


Project: Adaptive Intelligence in Multi-Agent Systems: When Collective meets DRL

Third parties involved: Chuhao Qin – University of Leeds

This project aims to advance AI by enhancing the robustness and adaptability of multi-agent systems. By integrating collective learning and multi-agent deep reinforcement learning (MADRL), we address challenges such as biased information propagation and lack of flexibility. Our approach involves developing an adaptive model for real-time learning, with activities spanning data collection, algorithm development, and evaluation across domains like voice conversation. Expected outcomes include novel AI approaches, open-source contributions, publications, and collaborative partnerships.


Project: CXAI: Cautious explainable artificial intelligence

Third parties involved:

This project aims to ensure robustness and trustworthiness by developing classifiers that return set-valued predictions. It addresses two challenges: (1) how to evaluate set-valued predictions and calibrate them to a user’s attitude toward imprecision, in order to build a calibrated, optimally robust imprecise classifier; and (2) how to explain set-valued predictions, covering both the need for robustness and how robust set-valued models can help test the robustness of classical XAI techniques.
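A minimal sketch of what a set-valued prediction looks like (an illustrative scheme of our own, not CXAI's method): include every class whose predicted probability is within a margin of the top class, so the prediction set grows under uncertainty instead of forcing a single guess.

```python
def set_valued_prediction(probs: dict[str, float], margin: float = 0.2) -> set[str]:
    """Return all labels whose probability is within `margin` of the best one."""
    best = max(probs.values())
    return {label for label, p in probs.items() if p >= best - margin}

# Confident case: a singleton set behaves like an ordinary classifier.
print(set_valued_prediction({"cat": 0.9, "dog": 0.07, "fox": 0.03}))  # → {'cat'}

# Ambiguous case: the classifier abstains between two plausible labels
# rather than committing to one (returns {'cat', 'dog'}).
print(set_valued_prediction({"cat": 0.45, "dog": 0.4, "fox": 0.15}))
```

The `margin` parameter plays the role of the user's attitude toward imprecision: a larger margin yields more cautious (larger) prediction sets, and calibrating it against user preferences is exactly the kind of question the project studies.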



Project: Robust Multimodal Continual Learning for Robotics

Third parties involved: Nicolas Kuske – Artificial and Natural Intelligence Toulouse Institute

This project advances multimodal continual learning (MMCL) by integrating audio-visual cues into reinforcement learning (RL) for robotic manipulation. Objectives include (1) developing a VR-based RL environment for testing and (2) optimizing Global Latent Workspace (GLW) and Semantic-Aware Multimodal (SAMM) models with attentional mechanisms. Activities involve VR setup, model evaluation, and hybrid model fusion. Expected outcomes are a robust MMCL framework handling noisy sensory inputs, adaptable to tasks like robotic pick-and-place. Added European value arises from TU/e (Netherlands) and ANITI (France) collaboration, combining expertise in continual and multimodal learning to advance AI innovation and support adaptive robotics.