IEEE International Conference on Advanced Computing

Nearest Neighbor Classifiers: Reducing the Computational Demands



Abstract

Nearest Neighbor classifiers demand high computational resources, i.e., time and memory. Researchers in Pattern Recognition follow two distinct methods to reduce this computational burden. The first is reducing the reference (training) set, and the second is dimensionality reduction; these are referred to as Prototype Selection and Feature Reduction (a.k.a. Feature Extraction or Feature Selection), respectively. In this paper, we cascade the two methods to reduce the data set in both directions, thereby reducing the computational burden of the Nearest Neighbor classifier. Experiments are performed on benchmark datasets and the results obtained are satisfactory.
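A minimal Python sketch of the cascading idea described in the abstract is given below. It is illustrative only: the abstract does not name the specific algorithms, so this sketch assumes Hart's Condensed Nearest Neighbor for prototype selection and scikit-learn's SelectKBest (univariate F-test) for feature reduction, then trains a 1-NN classifier on the doubly reduced training set.

# Illustrative sketch only: the paper cascades prototype selection with feature
# reduction before a Nearest Neighbor classifier. The concrete algorithms used
# here (Hart's CNN, SelectKBest) are assumptions chosen for illustration.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier


def condensed_nearest_neighbor(X, y, seed=None):
    """Hart's CNN: keep a subset of prototypes that classifies the rest correctly."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    keep = [order[0]]                      # start from one random sample
    changed = True
    while changed:
        changed = False
        for i in order:
            if i in keep:
                continue
            nn = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
            if nn.predict(X[i:i + 1])[0] != y[i]:   # misclassified -> absorb it
                keep.append(i)
                changed = True
    return np.array(keep)


X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: feature reduction (keep the 2 most informative features).
selector = SelectKBest(f_classif, k=2).fit(X_train, y_train)
X_train_r, X_test_r = selector.transform(X_train), selector.transform(X_test)

# Step 2: prototype selection on the reduced training set.
proto_idx = condensed_nearest_neighbor(X_train_r, y_train, seed=0)

# Final 1-NN classifier runs on the doubly reduced data set.
clf = KNeighborsClassifier(n_neighbors=1).fit(X_train_r[proto_idx], y_train[proto_idx])
print(f"prototypes kept: {len(proto_idx)} of {len(X_train_r)} training samples")
print(f"test accuracy : {clf.score(X_test_r, y_test):.3f}")

The order of the cascade (features first, then prototypes, or the reverse) is a design choice; the sketch reduces dimensionality first so that prototype selection operates in the cheaper, lower-dimensional space.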
