Hi, I’m Federico Belotti

I build trustworthy AI systems around Large Language Models and structured data. I’m currently a Research Fellow at the University of Milano-Bicocca and will start a PhD at the University of Bergamo in November 2025.

My recent work spans training and benchmarking LLMs for tabular tasks, improving the training of Sparse Autoencoders (SAEs), and developing uncertainty-aware LLMs.

Previously, I was an AI Engineer at Orobix, where I worked mainly on Computer Vision tasks (classification, segmentation, and object detection) and Reinforcement Learning. There, I was also a co-inventor of SheepRL, a distributed RL framework.

News

Apr 2026: Our new paper “Artificial Effort” is out as a preprint. We benchmark 23 LLMs on 8 canonical real-effort tasks from experimental economics and find that most tasks can now be solved accurately and at negligible cost, with monetary incentives having no effect on LLM performance. Project page.
Mar 2026: Our paper “How Good Are LLMs in Disambiguating Entities in Tabular Data? A Comprehensive Study” has been accepted for publication in Data & Knowledge Engineering (Elsevier). The work provides a large-scale evaluation of LLMs for entity disambiguation in tabular data, highlighting strengths, limitations, and future research directions.
Nov 2025: Starting my PhD at the University of Bergamo, focusing on uncertainty-aware LLMs.
Sep 2025: Our paper “Efficient Uncertainty Estimation for LLM-based Entity Linking in Tabular Data” was accepted at Ontology Matching (OM) 2025, co-located with ISWC 2025.
Aug 2025: Our paper “Group-SAE: Efficient Training of Sparse Autoencoders for Large Language Models via Layer Groups” was accepted at EMNLP 2025.
Jun 2025: Our paper “MammoTab 25: A Large-Scale Dataset for Semantic Table Interpretation - Training, Testing, and Detecting Weaknesses” was accepted at ISWC 2025.
Aug 2024: Our paper “Accelerating Sparse Autoencoder Training via Layer-Wise Transfer Learning in Large Language Models” was accepted at BlackBoxNLP 2024.