What is it about?

The time complexity of support vector machines (SVMs) prohibits training on huge data sets with millions of data points. Recently, multilevel approaches to train SVMs have been developed to allow for time-efficient training on such data sets. While regular SVMs perform the entire training in one -- time-consuming -- optimization step, multilevel SVMs first build a hierarchy of problems decreasing in size that resemble the original problem, and then train an SVM model for each hierarchy level, benefiting from the solved models of previous levels. We present a faster multilevel support vector machine that uses a label propagation algorithm to construct the problem hierarchy. Extensive experiments indicate that our approach is up to orders of magnitude faster than the previous fastest algorithm while achieving comparable classification quality. For example, one of our sequential solvers is already on average a factor of 15 faster than the parallel ThunderSVM algorithm, while having similar classification quality.
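
To make the two-phase structure concrete, here is a minimal, hypothetical Python sketch of the multilevel idea. It is not the authors' implementation: it stands in for the paper's label-propagation coarsening with simple k-means clustering, uses scikit-learn's SVC as the per-level solver, and assumes binary labels. The names coarsen, multilevel_svm, factor, levels, and keep are invented for illustration.

```python
# Conceptual sketch of multilevel SVM training (NOT the authors' code).
# Assumption: k-means coarsening stands in for label propagation, and the
# hyperparameters (levels, factor, keep) are illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def coarsen(X, y, factor=4):
    """Build a coarser problem: replace each class's points by cluster centers."""
    Xc, yc = [], []
    for label in np.unique(y):
        pts = X[y == label]
        k = max(1, len(pts) // factor)
        centers = KMeans(n_clusters=k, n_init=10).fit(pts).cluster_centers_
        Xc.append(centers)
        yc.append(np.full(k, label))
    return np.vstack(Xc), np.concatenate(yc)

def multilevel_svm(X, y, levels=3, keep=2000):
    # Phase 1: build a hierarchy of problems decreasing in size.
    hierarchy = [(X, y)]
    for _ in range(levels - 1):
        hierarchy.append(coarsen(*hierarchy[-1]))
    # Phase 2: train from the coarsest level up, using each solved model
    # to shrink the training set of the next (finer) level.
    model = None
    for Xl, yl in reversed(hierarchy):
        if model is not None:
            # Keep only the points closest to the coarse decision boundary
            # (binary case; assumes the kept subset still contains both classes).
            margin = np.abs(model.decision_function(Xl))
            idx = np.argsort(margin)[: min(keep, len(Xl))]
            Xl, yl = Xl[idx], yl[idx]
        model = SVC(kernel="rbf", gamma="scale").fit(Xl, yl)
    return model

if __name__ == "__main__":
    from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    print(multilevel_svm(X, y).score(X, y))
```

The point of the design is that the expensive full-data optimization never runs: only the coarsest, smallest problem is solved from scratch, and each finer level trains on the small subset of points the previous model places near its decision boundary.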

Why is it important?

Our method enables researchers to train SVMs on huge data sets with millions of data points.

Read the Original

This page is a summary of: Faster Support Vector Machines, ACM Journal of Experimental Algorithmics, December 2021, ACM (Association for Computing Machinery). DOI: 10.1145/3484730.
