Efficient additive kernels via explicit feature maps

Andrea Vedaldi, Andrew Zisserman
2010

Abstract

Maji and Berg have recently introduced an explicit feature map approximating the intersection kernel. This enables efficient learning methods for linear kernels to be applied to the non-linear intersection kernel, expanding the applicability of this model to much larger problems. In this paper we generalize this idea and analyse a large family of additive kernels, called homogeneous, in a unified framework. The family includes the intersection, Hellinger's, and χ² kernels commonly employed in computer vision. Using the framework we are able to: (i) provide explicit feature maps for all homogeneous additive kernels, along with closed-form expressions for all common kernels; (ii) derive corresponding approximate finite-dimensional feature maps based on the Fourier sampling theorem; and (iii) quantify the extent of the approximation. We demonstrate that the approximations have indistinguishable performance from the full kernel on a number of standard datasets, yet greatly reduce the train/test times of SVM implementations. We show that the χ² kernel, which has been found to yield the best performance in most applications, also has the most compact feature representation. Given these train/test advantages we are able to obtain a significant performance improvement over current state-of-the-art results based on the intersection kernel.
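To make the construction in (ii) concrete, here is a minimal NumPy sketch of the Fourier-sampled approximate feature map for the additive χ² kernel k(x, y) = Σᵢ 2xᵢyᵢ/(xᵢ + yᵢ). The spectrum of the χ² signature is κ(λ) = sech(πλ); the sampling period `L` and the number of samples `n` below are illustrative choices, not the paper's tuned values, so the achieved accuracy here is only indicative.

```python
import numpy as np

def chi2_feature_map(x, n=3, L=0.5):
    """Approximate explicit feature map for the additive chi-squared kernel,
    built by sampling the kernel spectrum kappa(lambda) = sech(pi * lambda)
    at frequencies j*L, j = -n..n (real form uses cos/sin pairs).
    x: nonnegative vector of shape (d,) -> feature vector of shape (d*(2n+1),)."""
    x = np.asarray(x, dtype=float)
    kappa = lambda lam: 1.0 / np.cosh(np.pi * lam)   # spectrum of the chi2 signature
    logx = np.log(np.maximum(x, 1e-12))              # guard against log(0)
    feats = [np.sqrt(x * L * kappa(0.0))]            # zero-frequency component
    for j in range(1, n + 1):
        amp = np.sqrt(2.0 * x * L * kappa(j * L))
        feats.append(amp * np.cos(j * L * logx))
        feats.append(amp * np.sin(j * L * logx))
    return np.concatenate(feats)

def chi2_kernel(x, y):
    """Exact additive chi-squared kernel, for comparison."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    s = x + y
    return float(np.sum(np.where(s > 0, 2.0 * x * y / np.where(s > 0, s, 1.0), 0.0)))

rng = np.random.default_rng(0)
x, y = rng.random(16), rng.random(16)
exact = chi2_kernel(x, y)
approx = float(chi2_feature_map(x) @ chi2_feature_map(y))
```

The inner product of the two finite-dimensional feature vectors (here 16 × 7 = 112 dimensions) approximates the exact kernel value, so a linear SVM trained on the mapped data behaves like a χ²-kernel SVM at a fraction of the cost.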


Code References

scikit-learn/scikit-learn
1 file
doc/modules/kernel_approximation.rst
The authors of [VZ2010]_ prefer the version above as it is always positive
.. [VZ2010] `"Efficient additive kernels via explicit feature maps"