Source journal: IEEE Transactions on Neural Networks and Learning Systems

Competitive Learning With Pairwise Constraints



Abstract

Constrained clustering has been an active research topic for more than a decade, with most studies focusing on batch-mode algorithms. This brief introduces two algorithms for on-line constrained learning: on-line linear constrained vector quantization error (O-LCVQE) and constrained rival penalized competitive learning (C-RPCL). The former is a variant of the LCVQE algorithm for on-line settings, whereas the latter adapts the (on-line) RPCL algorithm to constrained clustering. Accuracy results, measured by normalized mutual information (NMI), from experiments on nine datasets show that the partitions induced by O-LCVQE are competitive with those found by (batch-mode) LCVQE. Surprisingly, compared with this formidable baseline, C-RPCL provides better partitions (in terms of NMI) on most of the datasets. Experiments on a large dataset also show that on-line algorithms for constrained clustering can significantly reduce computational time.
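The abstract does not detail how C-RPCL incorporates pairwise constraints, but the classic (unconstrained) on-line RPCL update it adapts is standard: for each incoming sample, the winning prototype is pulled toward the sample while its closest rival is pushed slightly away (de-learning). A minimal sketch follows; the function name, the learning rates `alpha` and `beta`, and the use of squared Euclidean distance are illustrative assumptions, not the paper's exact formulation:

```python
def rpcl_step(x, prototypes, alpha=0.05, beta=0.002):
    """One on-line RPCL update (illustrative sketch).

    The winner (nearest prototype) moves toward sample x with rate
    alpha; the rival (second nearest) moves away with rate beta.
    Returns the winner and rival indices; prototypes is mutated.
    """
    # Squared Euclidean distance from x to each prototype.
    dists = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in prototypes]
    order = sorted(range(len(prototypes)), key=dists.__getitem__)
    win, rival = order[0], order[1]
    # Winner: learn (move toward x); rival: de-learn (move away).
    prototypes[win] = [wi + alpha * (xi - wi)
                       for xi, wi in zip(x, prototypes[win])]
    prototypes[rival] = [wi - beta * (xi - wi)
                         for xi, wi in zip(x, prototypes[rival])]
    return win, rival
```

Because beta is much smaller than alpha, the rival penalty drives surplus prototypes away from dense regions over many updates, which is how RPCL can adapt the effective number of clusters.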
