Journal article, Journal of Machine Learning Research, 2010

On the rate of convergence of the bagged nearest neighbor estimate

Abstract

Bagging is a simple way to combine estimates in order to improve their performance. This method, suggested by Breiman in 1996, proceeds by resampling from the original data set, constructing a predictor from each subsample, and then combining these predictors. By bagging an n-sample, the crude nearest neighbor regression estimate is turned into a consistent weighted nearest neighbor regression estimate, which is amenable to statistical analysis. Letting the resampling size k_n grow appropriately with n, it is shown that this estimate may achieve the optimal rate of convergence, whether resampling is done with or without replacement. Since the estimate achieving the optimal rate of convergence depends on the unknown distribution of the observations, adaptation results obtained by data-splitting are presented.
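
Below is a minimal Monte Carlo sketch, in Python, of the bagged 1-nearest-neighbor regression estimate described in the abstract. The sample (X, Y), the resampling size k_n, the number of resampling rounds B, and the helper name bagged_1nn are illustrative assumptions for this sketch, not notation from the paper; the paper analyzes the exact (expected) bagged estimate rather than this finite-B approximation.

import numpy as np

def bagged_1nn(X, Y, x, k_n, B=200, replace=False, seed=None):
    """Average the 1-NN prediction at x over B subsamples of size k_n.

    Resampling may be done with or without replacement; the abstract
    notes that the rate of convergence is the same in both cases.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    preds = np.empty(B)
    for b in range(B):
        # Draw a subsample of size k_n from the n observations.
        idx = rng.choice(n, size=k_n, replace=replace)
        # Predict with the single nearest neighbor inside the subsample.
        dist = np.linalg.norm(X[idx] - x, axis=1)
        preds[b] = Y[idx[np.argmin(dist)]]
    # Bagged estimate: average of the subsample 1-NN predictions.
    return preds.mean()

# Toy usage: Y = sin(2*pi*X_1) + noise, with X uniform on [0, 1]^2.
rng = np.random.default_rng(0)
X = rng.random((500, 2))
Y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(500)
print(bagged_1nn(X, Y, x=np.array([0.3, 0.7]), k_n=50, B=500, seed=1))

Averaging over many subsamples of size k_n is what turns the crude 1-NN rule into the weighted nearest neighbor estimate mentioned in the abstract; choosing k_n as a function of n is the tuning question addressed by the data-splitting adaptation results.
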
No file deposited

Dates and versions

hal-00911992 , version 1 (01-12-2013)

Identifiers

  • HAL Id : hal-00911992 , version 1

Cite

Gérard Biau, Frédéric Cérou, Arnaud Guyader. On the rate of convergence of the bagged nearest neighbor estimate. Journal of Machine Learning Research, 2010, 11, pp.687-712. ⟨hal-00911992⟩
