MANILA: A Low-Code Application to Benchmark Machine Learning Models and Fairness-Enhancing Methods
d'Aloisio, Giordano
2025-01-01
Abstract
This paper presents MANILA, a web-based low-code application to benchmark machine learning models and fairness-enhancing methods and select the one achieving the best fairness and effectiveness trade-off. It is grounded on an Extended Feature Model that models a general fairness benchmarking workflow as a Software Product Line. The constraints defined among the features guide users in creating experiments that do not lead to execution errors. We describe the architecture and implementation of MANILA and evaluate it in terms of expressiveness and correctness.
| File | Type | License | Size | Format |
|---|---|---|---|---|
| 3696630.3728600.pdf (open access) | Published version | Creative Commons | 1.35 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


