Associate Professor · Sapienza University of Rome
I lead the HERCOLE Lab (Human-Explainable, Robust, and COllaborative LEarning) at the Department of Computer Science, Sapienza University of Rome, and serve as co-founder & CSO at tellmewAI. My research is centred on three core pillars of modern AI: explainability, security, and scalability.
Gabriele Tolomei is an Associate Professor of Computer Science at Sapienza University of Rome, where he leads the HERCOLE Lab (Human-Explainable, Robust, and COllaborative LEarning). He is also co-founder and Chief Scientific Officer at tellmewAI, an innovative start-up developing tools for the automatic compliance verification of AI systems against regulatory frameworks, such as the European AI Act.
His research is centred on three core pillars of modern AI: explainability, security, and scalability.
In explainability, his work focuses on post-hoc, model-agnostic methods—particularly counterfactual explanations—for complex, "black-box" machine learning systems. The goal is to make AI decisions transparent, actionable, and human-understandable.
In security, he studies adversarial machine learning, analysing vulnerabilities of models under evasion and poisoning attacks, and designing methods to improve the robustness and reliability of both centralized and federated learning systems.
In scalability, his research addresses large-scale and distributed learning, including federated learning, with a focus on efficiency, privacy, and deployment in real-world settings.
Before joining Sapienza, he held research positions at Yahoo Labs (London, UK) and ISTI-CNR (Pisa, Italy), working on advertising quality, query understanding, and large-scale data mining.
Ph.D. in Computer Science — Ca' Foscari University of Venice
M.Sc. in Computer Science (summa cum laude) — University of Pisa
B.Sc. in Computer Science — University of Pisa
Head of the HERCOLE Lab — Sapienza, Department of Computer Science
Co-founder & CSO at tellmewAI
Member of the European Laboratory for Learning and Intelligent Systems (ELLIS)
Model-agnostic, post-hoc explainability methods for black-box systems, with a focus on counterfactual explanations. Emphasis on transparent, actionable, and human-understandable insights, delivered through LLM-generated explanation narratives.
Adversarial machine learning and robustness techniques for protecting AI systems against malicious attacks such as evasion and model poisoning, with the goal of improving security and reliability.
Large-scale machine learning systems, including federated learning and distributed optimization methods. Focus on privacy-preserving training, efficiency, and robustness in decentralized environments.
I currently teach the following courses at Sapienza University of Rome. I advise six Ph.D. candidates and also supervise B.Sc. and M.Sc. theses on topics including explainable AI, adversarial robustness, and federated learning. Prospective students are welcome to reach out!
Bachelor's Degree in Computer Science (Laurea Triennale in Informatica). Covers process management, memory, file systems, and concurrency.
Course material on GitHub
Master's Degree in Computer Science (Laurea Magistrale in Informatica). Large-scale data processing, distributed computing frameworks, and scalable ML pipelines.
Course material on GitHub