Associate Professor · Sapienza University of Rome

Gabriele Tolomei

I lead the HERCOLE Lab (Human-Explainable, Robust, and COllaborative LEarning) at the Department of Computer Science, Sapienza University of Rome, and serve as co-founder & CSO at tellmewAI. My research is centred on three core pillars of modern AI: explainability, security, and scalability.


Gabriele Tolomei is an Associate Professor of Computer Science at Sapienza University of Rome, where he leads the HERCOLE Lab (Human-Explainable, Robust, and COllaborative LEarning). He is also co-founder and Chief Scientific Officer at tellmewAI, an innovative start-up developing tools for the automatic compliance verification of AI systems against regulatory frameworks, such as the European AI Act.


His research is centred on three core pillars of modern AI: explainability, security, and scalability.

In explainability, his work focuses on post-hoc, model-agnostic methods—particularly counterfactual explanations—for complex, "black-box" machine learning systems. The goal is to make AI decisions transparent, actionable, and human-understandable.

In security, he studies adversarial machine learning, analysing vulnerabilities of models under evasion and poisoning attacks, and designing methods to improve the robustness and reliability of both centralised and federated learning systems.

In scalability, his research addresses large-scale and distributed learning, including federated learning, with a focus on efficiency, privacy, and deployment in real-world settings.


Before joining Sapienza, he held research positions at Yahoo Labs (London, UK) and ISTI-CNR (Pisa, Italy), working on advertising quality, query understanding, and large-scale data mining.

🎓 Education

Ph.D. in Computer Science — Ca' Foscari University of Venice
M.Sc. in Computer Science (summa cum laude) — University of Pisa
B.Sc. in Computer Science — University of Pisa

🏛️ Affiliations

Head of the HERCOLE Lab — Sapienza, Department of Computer Science
Co-founder & CSO at tellmewAI
Member of the European Laboratory for Learning and Intelligent Systems (ELLIS)

💡 Explainable AI (XAI)

Model-agnostic, post-hoc explainability methods for black-box systems, with a focus on counterfactual explanations. Emphasis on transparent, actionable, and human-understandable insights, delivered through LLM-generated explanation narratives.

🛡️ Robust & Secure Machine Learning

Adversarial machine learning and robustness techniques for protecting AI systems against malicious attacks such as evasion and model poisoning, with the goal of improving security and reliability.

🌐 Scalable & Federated Learning

Large-scale machine learning systems, including federated learning and distributed optimization methods. Focus on privacy-preserving training, efficiency, and robustness in decentralized environments.

Auto-synced nightly from DBLP.


I currently teach the following courses at Sapienza University of Rome. I advise six Ph.D. candidates and also supervise B.Sc. and M.Sc. theses on topics including explainable AI, adversarial robustness, and federated learning. Prospective students are welcome to reach out!

IT

Sistemi Operativi (Operating Systems)

Bachelor's Degree in Computer Science (Laurea Triennale in Informatica). Covers process management, memory, file systems, and concurrency.

Course material on GitHub
EN

Big Data Computing

Master's Degree in Computer Science (Laurea Magistrale in Informatica). Large-scale data processing, distributed computing frameworks, and scalable ML pipelines.

Course material on GitHub

Office

Department of Computer Science
Sapienza University of Rome
Viale Regina Elena, 295
Building E — 1st Floor — Room 106
00161 Rome, Italy