Publications
Relevant works
2025
- Region-aware Minimal Counterfactual Rules for Model-agnostic Explainable Classification. Guido Gagliardi, Antonio Luca Alfeo, Riccardo Guidotti, and 1 more author. Machine Learning, 2025.
The increasing demand for transparency in machine learning has spurred the development of techniques that provide faithful explanations for complex black-box models. In this work, we introduce RaMiCo (Region Aware Minimal Counterfactual Rules), a model-agnostic method that extracts global counterfactual rules by mining instances from diverse regions of the input space. RaMiCo focuses on single-feature substitutions to generate minimal and region-aware rules that encapsulate the overall decision-making process of the target model. These global rules can be further localised to specific input instances, enabling users to obtain tailored explanations for individual predictions. Comprehensive experiments on multiple benchmark datasets demonstrate that RaMiCo achieves competitive fidelity in replicating black-box behaviour and exhibits high coverage in capturing the intrinsic structure of white-box classifiers. RaMiCo supports the development of trustworthy and secure machine learning systems by providing transparent, human-understandable explanations in the form of concise global rules. This design enables users to verify and inspect the model’s decision logic, reducing the risk of hidden biases, unintended behaviours, or adversarial exploitation. These features make RaMiCo particularly suitable for applications where the reliability, safety, and verifiability of automated decisions are essential.
@article{gagliardi2025region,
  title = {Region-aware Minimal Counterfactual Rules for Model-agnostic Explainable Classification},
  author = {Gagliardi, Guido and Alfeo, Antonio Luca and Guidotti, Riccardo and Cimino, Mario G. C. A.},
  journal = {Machine Learning},
  volume = {114},
  number = {10},
  pages = {225},
  year = {2025},
  publisher = {Springer},
  url = {https://doi.org/10.1007/s10994-025-06847-5},
}

- Shape-based methods in mobility data analysis: effectiveness and limitations. Cristiano Landi and Riccardo Guidotti. GeoInformatica, 2025.
Although Mobility Data Analysis (MDA) has been explored for a long time, it still lags behind advancements in other fields. A common issue in MDA is the lack of methods’ standardization and reusability. On the other hand, for instance, in time series analysis, the existing methods are typically general-purpose, and it is possible to apply them across diverse datasets and applications without extensive customization. Still, in MDA, most contributions are ad-hoc and designed to address specific research questions, which limits their generalizability and reusability. Recently, some researchers explored the application of shapelet transform to trajectory data, i.e., extracting discriminatory sub-trajectories from training data to be used as classification features. Unlike current MDA methods, this line of research eliminates the need for feature engineering, greatly improving its ability to generalize. While shapelets on mobility data have shown state-of-the-art performance on public classification datasets, it is still not clear why they work. Are these subtrajectories merely proxies for geographic location, or do they also capture motion dynamics? We empirically show that shapelet-based approaches are a viable alternative to classical methods and flexible enough to solve MDA tasks related solely to trajectory shape, solely to movement dynamics, and those related to both. Additionally, we investigate the problem of Geographic Transferability, showing that such approaches offer a promising starting point for tackling this challenge.
@article{landi2025shape,
  title = {Shape-based methods in mobility data analysis: effectiveness and limitations},
  author = {Landi, Cristiano and Guidotti, Riccardo},
  journal = {GeoInformatica},
  pages = {1--34},
  year = {2025},
  url = {https://doi.org/10.1007/s10707-025-00555-x},
  publisher = {Springer},
}

- SafeGen: safeguarding privacy and fairness through a genetic method. Martina Cinquini, Marta Marchiori Manerba, Federico Mazzoni, and 2 more authors. Machine Learning, 2025.
To ensure that Machine Learning systems produce harmless outcomes, pursuing a joint optimization of performance and ethical profiles such as privacy and fairness is crucial. However, jointly optimizing these two ethical dimensions while maintaining predictive accuracy remains a fundamental challenge. Indeed, privacy-preserving techniques may worsen fairness and restrain the model’s ability to learn accurate statistical patterns, while data mitigation techniques may inadvertently compromise privacy. Aiming to bridge this gap, we propose SafeGen, a preprocessing fairness-enhancing and privacy-preserving method for tabular data. SafeGen employs synthetic data generation through a genetic algorithm to ensure that sensitive attributes are protected while maintaining the necessary statistical properties. We assess our method across multiple datasets, comparing it against state-of-the-art privacy-preserving and fairness approaches through a threefold evaluation: privacy preservation, fairness enhancement, and generated data plausibility. Through extensive experiments, we demonstrate that SafeGen consistently achieves strong anonymization while preserving or improving dataset fairness across several benchmarks. Additionally, through hybrid privacy-fairness constraints and the use of a genetic synthesizer, SafeGen ensures the plausibility of synthetic records while minimizing discrimination. Our findings demonstrate that modeling fairness and privacy within a unified generative method yields significantly better outcomes than addressing these constraints separately, reinforcing the importance of integrated approaches when multiple ethical objectives must be simultaneously satisfied.
@article{cinquini2025safegen,
  title = {SafeGen: safeguarding privacy and fairness through a genetic method},
  author = {Cinquini, Martina and Marchiori Manerba, Marta and Mazzoni, Federico and Pratesi, Francesca and Guidotti, Riccardo},
  journal = {Machine Learning},
  volume = {114},
  number = {10},
  pages = {227},
  year = {2025},
  publisher = {Springer},
  url = {https://doi.org/10.1007/s10994-025-06835-9},
}

- A Practical Approach to Causal Inference over Time. Martina Cinquini, Isacco Beretta, Salvatore Ruggieri, and 1 more author. In AAAI-25, Association for the Advancement of Artificial Intelligence, February 25 - March 4, 2025, Philadelphia, PA, USA, 2025.
In this paper, we focus on estimating the causal effect of an intervention over time on a dynamical system. To that end, we formally define causal interventions and their effects over time on discrete-time stochastic processes (DSPs). Then, we show under which conditions the equilibrium states of a DSP, both before and after a causal intervention, can be captured by a structural causal model (SCM). With such an equivalence at hand, we provide an explicit mapping from vector autoregressive models (VARs), broadly applied in econometrics, to linear, but potentially cyclic and/or affected by unmeasured confounders, SCMs. The resulting causal VAR framework allows us to perform causal inference over time from observational time series data. Our experiments on synthetic and real-world datasets show that the proposed framework achieves strong performance in terms of observational forecasting while enabling accurate estimation of the causal effect of interventions on dynamical systems. We demonstrate, through a case study, the potential practical questions that can be addressed using the proposed causal VAR framework.
@inproceedings{cinquini2025practical,
  title = {A Practical Approach to Causal Inference over Time},
  author = {Cinquini, Martina and Beretta, Isacco and Ruggieri, Salvatore and Valera, Isabel},
  editor = {Walsh, Toby and Shah, Julie and Kolter, Zico},
  booktitle = {AAAI-25, Association for the Advancement of Artificial Intelligence, February 25 - March 4, 2025, Philadelphia, PA, {USA}},
  pages = {14832--14839},
  publisher = {{AAAI} Press},
  year = {2025},
  url = {https://doi.org/10.1609/aaai.v39i14.33626},
  doi = {10.1609/AAAI.V39I14.33626},
}

- A Bias Injection Technique to Assess the Resilience of Causal Discovery Methods. Martina Cinquini, Karima Makhlouf, Sami Zhioua, and 2 more authors. IEEE Access, 2025.
Causal discovery (CD) algorithms are increasingly applied to socially and ethically sensitive domains. However, their evaluation under realistic conditions remains challenging due to the scarcity of real-world datasets annotated with ground-truth causal structures. Whereas synthetic data generators support controlled benchmarking, they often overlook forms of bias, such as dependencies involving sensitive attributes, which may significantly affect the observed distribution and compromise the trustworthiness of downstream analysis. This paper introduces a novel synthetic data generation framework that enables controlled bias injection while preserving the causal relationships specified in a ground-truth causal graph. The framework aims to evaluate the reliability of CD methods by examining the impact of varying bias levels and outcome binarization thresholds. Experimental results show that even moderate bias levels can lead CD approaches to fail to correctly infer causal links, particularly those connecting sensitive attributes to decision outcomes. These findings underscore the need for expert validation and highlight the limitations of current CD methods in fairness-critical applications. Our proposal thus provides an essential tool for benchmarking and improving CD algorithms in biased, real-world data settings.
@article{cinquini2025bias,
  author = {Cinquini, Martina and Makhlouf, Karima and Zhioua, Sami and Palamidessi, Catuscia and Guidotti, Riccardo},
  journal = {IEEE Access},
  title = {A Bias Injection Technique to Assess the Resilience of Causal Discovery Methods},
  year = {2025},
  volume = {13},
  pages = {97376--97391},
  doi = {10.1109/ACCESS.2025.3573201},
}
2024
- Interpretable Machine Learning for Oral Lesion Diagnosis Through Prototypical Instances Identification. Alessio Cascione, Mattia Setzu, Federico A. Galatolo, and 2 more authors. In Discovery Science - 27th International Conference, DS 2024, Pisa, Italy, October 14-16, 2024, Proceedings, Part II, 2024.
Decision-making processes in healthcare can be highly complex and challenging. Machine Learning tools offer significant potential to assist in these processes. However, many current methodologies rely on complex models that are not easily interpretable by experts. This underscores the need to develop interpretable models that can provide meaningful support in clinical decision-making. When approaching such tasks, humans typically compare the situation at hand to a few key examples and representative cases imprinted in their memory. Using an approach which selects such exemplary cases and grounds its predictions on them could contribute to obtaining high-performing interpretable solutions to such problems. To this end, we evaluate PIVOTTREE, an interpretable prototype selection model, on an oral lesion detection problem. We demonstrate the efficacy of using such a method in terms of performance and offer a qualitative and quantitative comparison between exemplary cases and ground-truth prototypes selected by experts.
@inproceedings{cascioneSGCG24,
  author = {Cascione, Alessio and Setzu, Mattia and Galatolo, Federico A. and Cimino, Mario G. C. A. and Guidotti, Riccardo},
  editor = {Pedreschi, Dino and Monreale, Anna and Guidotti, Riccardo and Pellungrini, Roberto and Naretto, Francesca},
  title = {Interpretable Machine Learning for Oral Lesion Diagnosis Through Prototypical Instances Identification},
  booktitle = {Discovery Science - 27th International Conference, {DS} 2024, Pisa, Italy, October 14-16, 2024, Proceedings, Part {II}},
  series = {Lecture Notes in Computer Science},
  volume = {15244},
  pages = {316--331},
  publisher = {Springer},
  year = {2024},
  url = {https://doi.org/10.1007/978-3-031-78980-9_20},
  doi = {10.1007/978-3-031-78980-9_20},
}

- Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions. Luca Longo, Mario Brcic, Federico Cabitza, and 16 more authors. Information Fusion, 2024.
Understanding black box models has become paramount as systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper highlights the advancements in XAI and its application in real-world scenarios and addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse fields to identify open problems, striving to synchronize research agendas and accelerate XAI in practical applications. By fostering collaborative discussion and interdisciplinary cooperation, we aim to propel XAI forward, contributing to its continued success. We aim to develop a comprehensive proposal for advancing XAI. To achieve this goal, we present a manifesto of 28 open problems categorized into nine categories. These challenges encapsulate the complexities and nuances of XAI and offer a road map for future research. For each problem, we provide promising research directions in the hope of harnessing the collective intelligence of interested stakeholders.
@article{longo2024manifesto,
  author = {Longo, Luca and Brcic, Mario and Cabitza, Federico and Choi, Jaesik and Confalonieri, Roberto and Ser, Javier Del and Guidotti, Riccardo and Hayashi, Yoichi and Herrera, Francisco and Holzinger, Andreas and Jiang, Richard and Khosravi, Hassan and L{\'{e}}cu{\'{e}}, Freddy and Malgieri, Gianclaudio and P{\'{a}}ez, Andr{\'{e}}s and Samek, Wojciech and Schneider, Johannes and Speith, Timo and Stumpf, Simone},
  title = {Explainable Artificial Intelligence {(XAI)} 2.0: {A} manifesto of open challenges and interdisciplinary research directions},
  journal = {Information Fusion},
  volume = {106},
  pages = {102301},
  year = {2024},
  url = {https://doi.org/10.1016/j.inffus.2024.102301},
  doi = {10.1016/J.INFFUS.2024.102301},
}

- Variational Compression of Circuits for State Preparation. Alessandro Berti, Giacomo Antonioli, Anna Bernasconi, and 3 more authors. In IEEE International Conference on Quantum Computing and Engineering, QCE 2024, Montreal, QC, Canada, September 15-20, 2024.
In quantum computing, state preparation techniques for loading classical data into quantum states can be resource-intensive and error-prone, especially on Noisy Intermediate-Scale Quantum (NISQ) devices. This work applies variational learning to compress quantum circuits for state preparation, transforming resource-heavy techniques into hardware-efficient ansatzes suitable for current quantum architectures. This approach enhances the efficiency of state preparation, broadening the applicability of quantum algorithms. We demonstrate significant reductions in circuit size and depth compared to the explicit circuit of the FF-QRAM state preparation. Additionally, we show that the learned ansatzes perform better in noisy environments, with significant fidelity improvements over the explicit state preparation technique under analysis.
@inproceedings{BertiABCGP24,
  author = {Berti, Alessandro and Antonioli, Giacomo and Bernasconi, Anna and Corso, Gianna M. Del and Guidotti, Riccardo and Poggiali, Alessandro},
  editor = {Osinski, Marek and Cour, Brian La and Yeh, Lia},
  title = {Variational Compression of Circuits for State Preparation},
  booktitle = {{IEEE} International Conference on Quantum Computing and Engineering, {QCE} 2024, Montreal, QC, Canada, September 15-20, 2024},
  pages = {44--48},
  publisher = {{IEEE}},
  year = {2024},
  url = {https://doi.org/10.1109/QCE60285.2024.10250},
  doi = {10.1109/QCE60285.2024.10250},
}

- The role of encodings and distance metrics for the quantum nearest neighbor. Alessandro Berti, Anna Bernasconi, Gianna M. Del Corso, and 1 more author. Quantum Machine Intelligence, 2024.
Over the past few years, we observed a rethinking of classical artificial intelligence algorithms from a quantum computing perspective. This trend is driven by the peculiar properties of quantum mechanics, which offer the potential to enhance artificial intelligence capabilities, enabling it to surpass the constraints of classical computing. However, redesigning classical algorithms into their quantum equivalents is not straightforward and poses numerous challenges. In this study, we analyze in-depth two orthogonal designs of the quantum K-nearest neighbor classifier. In particular, we show two solutions based on amplitude encoding and basis encoding of data, respectively. These two types of encoding impact the overall structure of the respective algorithms, which employ different distance metrics and show different performances. By breaking down each quantum algorithm, we clarify and compare implementation aspects ranging from data preparation to classification. Eventually, we discuss the difficulties associated with data preparation, the theoretical advantage of quantum algorithms, and their impact on performance with respect to the classical counterpart.
@article{berti2024role,
  title = {The role of encodings and distance metrics for the quantum nearest neighbor},
  author = {Berti, Alessandro and Bernasconi, Anna and Del Corso, Gianna M. and Guidotti, Riccardo},
  journal = {Quantum Machine Intelligence},
  volume = {6},
  number = {2},
  pages = {62},
  year = {2024},
  publisher = {Springer},
}

- Counterfactual and Prototypical Explanations for Tabular Data via Interpretable Latent Space. Simone Piaggesi, Francesco Bodria, Riccardo Guidotti, and 2 more authors. IEEE Access, 2024.
Artificial Intelligence decision-making systems have dramatically increased their predictive power in recent years, beating humans in many different specific tasks. However, with increased performance has come an increase in the complexity of the black-box models adopted by the AI systems, making their decision processes entirely opaque. Explainable AI is a field that seeks to make AI decisions more transparent by producing explanations. In this paper, we propose CP-ILS, a comprehensive interpretable feature reduction method for tabular data capable of generating Counterfactual and Prototypical post-hoc explanations using an Interpretable Latent Space. CP-ILS optimizes a transparent feature space whose similarity and linearity properties enable the easy extraction of local and global explanations for any pre-trained black-box model, in the form of counterfactual/prototype pairs. We evaluated the effectiveness of the created latent space by showing its capability to preserve pair-wise similarities like well-known dimensionality reduction techniques. Moreover, we assessed the quality of counterfactuals and prototypes generated with CP-ILS against state-of-the-art explainers, demonstrating that our approach obtains more robust, plausible, and accurate explanations than its competitors under most experimental conditions.
@article{piaggesi2024counterfactual,
  title = {Counterfactual and Prototypical Explanations for Tabular Data via Interpretable Latent Space},
  author = {Piaggesi, Simone and Bodria, Francesco and Guidotti, Riccardo and Giannotti, Fosca and Pedreschi, Dino},
  journal = {IEEE Access},
  year = {2024},
  publisher = {IEEE},
}

- Quantum subroutine for variance estimation: algorithmic design and applications. Anna Bernasconi, Alessandro Berti, Gianna M. Del Corso, and 2 more authors. Quantum Machine Intelligence, 2024.
Quantum computing sets the foundation for new ways of designing algorithms, thanks to the peculiar properties inherited from quantum mechanics. The exploration of this new paradigm faces new challenges concerning the fields in which a quantum speedup can be achieved. Toward finding solutions, designing quantum subroutines that are more efficient than their classical counterparts lays the groundwork for new, powerful quantum algorithms. Here, we delve into a foundational subroutine, the computation of the variance, whose usefulness spans different fields of application, particularly artificial intelligence (AI). Indeed, finding quantum counterparts of such building blocks directly impacts the algorithms that leverage this metric. In this work, we propose QVAR, a quantum subroutine to compute the variance, which exhibits logarithmic complexity in both circuit depth and width, excluding the state preparation cost. With the vision of showing the use of QVAR as a subroutine for new quantum algorithms, we tackle two tasks from the AI domain: feature selection and outlier detection. In particular, we showcase two hybrid quantum AI algorithms that leverage QVAR: the hybrid quantum feature selection (HQFS) algorithm and the quantum outlier detection algorithm (QODA). In this manuscript, we describe the implementation of QVAR, HQFS, and QODA, providing their correctness and complexities and showing the effectiveness of these hybrid quantum algorithms with respect to their classical counterparts.
@article{bernasconi2024quantum,
  title = {Quantum subroutine for variance estimation: algorithmic design and applications},
  author = {Bernasconi, Anna and Berti, Alessandro and Del Corso, Gianna M. and Guidotti, Riccardo and Poggiali, Alessandro},
  journal = {Quantum Machine Intelligence},
  volume = {6},
  doi = {10.1007/s42484-024-00213-9},
  number = {2},
  pages = {78},
  year = {2024},
  publisher = {Springer},
}

- Fast, Interpretable and Deterministic Time Series Classification with a Bag-Of-Receptive-Fields. Francesco Spinnato, Riccardo Guidotti, Anna Monreale, and 1 more author. IEEE Access, 2024.
The current trend in the literature on Time Series Classification is to develop increasingly accurate algorithms by combining multiple models in ensemble hybrids, representing time series in complex and expressive feature spaces, and extracting features from different representations of the same time series. As a consequence of this focus on predictive performance, the best time series classifiers are black-box models, which are not understandable from a human standpoint. Even the approaches that are regarded as interpretable, such as shapelet-based ones, rely on randomization to maintain computational efficiency. This poses challenges for interpretability, as the explanation can change from run to run. Given these limitations, we propose the Bag-Of-Receptive-Field (BORF), a fast, interpretable, and deterministic time series transform. Building upon the classical Bag-Of-Patterns, we bridge the gap between convolutional operators and discretization, enhancing the Symbolic Aggregate Approximation (SAX) with dilation and stride, which can more effectively capture temporal patterns at multiple scales. We propose an algorithmic speedup that reduces the time complexity associated with SAX-based classifiers, allowing the extension of the Bag-Of-Patterns to the more flexible Bag-Of-Receptive-Fields, represented as a sparse multivariate tensor. The empirical results from testing our proposal on more than 150 univariate and multivariate classification datasets demonstrate good accuracy and great computational efficiency compared to traditional SAX-based methods and state-of-the-art time series classifiers, while providing easy-to-understand explanations.
@article{spinnato2024fast,
  title = {Fast, Interpretable and Deterministic Time Series Classification with a Bag-Of-Receptive-Fields},
  author = {Spinnato, Francesco and Guidotti, Riccardo and Monreale, Anna and Nanni, Mirco},
  journal = {IEEE Access},
  year = {2024},
  publisher = {IEEE},
}

- Data-Agnostic Pivotal Instances Selection for Decision-Making Models. Alessio Cascione, Mattia Setzu, and Riccardo Guidotti. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2024.
As decision-making processes become increasingly complex, machine learning tools have become essential resources for tackling business and social issues. However, many methodologies rely on complex models that experts and everyday users cannot really interpret or understand. This is why constructing interpretable models is crucial. Humans typically make decisions by comparing the case at hand with a few exemplary and representative cases imprinted in their minds. Our objective is to design an approach that can select such exemplary cases, which we call pivots, to build an interpretable predictive model. To this aim, we propose a hierarchical and interpretable pivot selection model inspired by Decision Trees, and based on the similarity between pivots and input instances. Such a model can be used both as a pivot selection method, and as a standalone predictive model. By design, our proposal can be applied to any data type, as we can exploit pre-trained networks for data transformation. Through experiments on various datasets of tabular data, texts, images, and time series, we have demonstrated the superiority of our proposal compared to naive alternatives and state-of-the-art instance selectors, while minimizing the model complexity, i.e., the number of pivots identified.
@inproceedings{cascione2024data,
  title = {Data-Agnostic Pivotal Instances Selection for Decision-Making Models},
  author = {Cascione, Alessio and Setzu, Mattia and Guidotti, Riccardo},
  booktitle = {Joint European Conference on Machine Learning and Knowledge Discovery in Databases},
  pages = {367--386},
  year = {2024},
  organization = {Springer},
}

- Generative Model for Decision Trees. Riccardo Guidotti, Anna Monreale, Mattia Setzu, and 1 more author. Proceedings of the AAAI Conference on Artificial Intelligence, Mar 2024.
Decision trees are among the most popular supervised models due to their interpretability and knowledge representation resembling human reasoning. Commonly-used decision tree induction algorithms are based on greedy top-down strategies. Although these approaches are known to be an efficient heuristic, the resulting trees are only locally optimal and tend to have overly complex structures. On the other hand, optimal decision tree algorithms attempt to create an entire decision tree at once to achieve global optimality. We place our proposal between these approaches by designing a generative model for decision trees. Our method first learns a latent decision tree space through a variational architecture using pre-trained decision tree models. Then, it adopts a genetic procedure to explore such latent space to find a compact decision tree with good predictive performance. We compare our proposal against classical tree induction methods, optimal approaches, and ensemble models. The results show that our proposal can generate accurate and shallow, i.e., interpretable, decision trees.
@article{guidotti2024generative,
  title = {Generative Model for Decision Trees},
  volume = {38},
  url = {https://ojs.aaai.org/index.php/AAAI/article/view/30104},
  doi = {10.1609/aaai.v38i19.30104},
  number = {19},
  journal = {Proceedings of the AAAI Conference on Artificial Intelligence},
  author = {Guidotti, Riccardo and Monreale, Anna and Setzu, Mattia and Volpi, Giulia},
  year = {2024},
  month = mar,
  pages = {21116--21124},
}