The dual formulation of Support Vector Machines (SVMs) has a non-separable structure that makes the design of a convergent distributed training algorithm a difficult task. Recently, separable and distributable reformulations of the SVM training problem have been obtained by fixing one primal variable. While this strategy seems effective for some applications, in other cases it can be weak, since it drastically reduces the final overall performance. In this work we present the first fully distributable algorithm for SVM training that globally converges to a solution of the original (non-separable) SVM dual formulation. Besides a detailed convergence analysis, we provide a simple illustrative example showing the advantages of the original SVM dual formulation over the weaker separable one and highlighting the practical effectiveness of our method. We also report tests demonstrating the practical convergence of the proposed method on real-world datasets.
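For context, the standard (soft-margin, kernelized) SVM dual referred to above can be written as follows; this is the textbook formulation, not a reproduction of the paper's notation. The equality constraint couples all the dual variables, which is the source of the non-separability the abstract mentions:

```latex
\max_{\alpha \in \mathbb{R}^n} \quad & \sum_{i=1}^{n} \alpha_i
  - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n}
    \alpha_i \alpha_j \, y_i y_j \, K(x_i, x_j) \\
\text{s.t.} \quad & \sum_{i=1}^{n} \alpha_i y_i = 0, \\
  & 0 \le \alpha_i \le C, \quad i = 1, \dots, n.
```

Fixing the primal bias term removes the coupling constraint $\sum_i \alpha_i y_i = 0$, yielding a separable (box-constrained) problem, which is the reformulation strategy the abstract describes as potentially weak.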
|Title:||A convergent and fully distributable SVMs training algorithm|
|Publication date:||2016|
|Appears in categories:||4.1 Conference proceedings contribution|