Staff Publications

    'Staff publications' is the digital repository of Wageningen University & Research

    'Staff publications' contains references to publications authored by Wageningen University staff from 1976 onward.

    Publications authored by the staff of the Research Institutes are available from 1995 onwards.

    Full-text documents are added when available. The database is updated daily and currently holds about 240,000 items, of which 72,000 are open access.

    A manual explaining all the features is available.

Record number 560998
Title An experimental protocol for the evaluation of open-ended category learning algorithms
Author(s) Chauhan, Aneesh; Lopes, Luis Seabra
Source In: 2015 IEEE International Conference on Evolving and Adaptive Intelligent Systems, EAIS 2015. - Institute of Electrical and Electronics Engineers Inc. (2015 IEEE International Conference on Evolving and Adaptive Intelligent Systems, EAIS 2015 ) - ISBN 9781467366977
Event IEEE International Conference on Evolving and Adaptive Intelligent Systems, EAIS 2015, Douai, 2015-12-01/2015-12-03
Publication type Contribution in proceedings
Publication year 2015

There has been a steady surge of interest in several sub-fields of machine learning that focus on developing systems that learn in an open-ended manner. This is particularly visible in the fields of language grounding and data stream learning. These systems are designed to evolve as new data arrive, modifying and adjusting learned categories as well as accommodating new ones. Although some features of incremental learning are present in open-ended learning, the latter cannot be characterized as standard incremental learning. This paper presents and discusses the key characteristics of open-ended learning, differentiating it from standard incremental approaches. The main contribution of this paper concerns the evaluation of these algorithms. Typically, the performance of learning algorithms is assessed using traditional train-test methods such as holdout or cross-validation. These evaluation methods are not suited to applications where environments and tasks can change and the learning system therefore frequently faces new categories. To address this, a well-defined and practical protocol is proposed. The utility of the protocol is demonstrated by evaluating and comparing a set of learning algorithms on the task of open-ended visual category learning.
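The abstract describes a protocol in which categories are introduced to the learner incrementally, with testing interleaved, rather than a fixed train-test split. A minimal sketch of such a teaching protocol is shown below. It assumes a learner object exposing `teach` and `classify` methods; the function names, the sliding-window performance measure, and the thresholds are illustrative assumptions, not the paper's exact parameters.

```python
import random

def teaching_protocol(learner, categories, instances,
                      success_threshold=0.8, window=10, max_attempts=100):
    """Introduce categories one at a time, testing as we go.

    A new category is introduced only once the learner's recent accuracy
    on all categories seen so far reaches `success_threshold`. Returns
    the number of categories the learner mastered before stalling
    (a simple proxy for the "breakpoint" of an open-ended learner).
    NOTE: this is a hedged sketch, not the authors' exact protocol.
    """
    introduced = []
    for category in categories:
        introduced.append(category)
        # show the learner one example of the newly introduced category
        learner.teach(category, instances[category][0])
        recent = []   # sliding window of 1/0 classification outcomes
        attempts = 0
        while True:
            # test on a random instance of an already-introduced category
            target = random.choice(introduced)
            sample = random.choice(instances[target])
            correct = learner.classify(sample) == target
            recent.append(correct)
            if not correct:
                learner.teach(target, sample)  # corrective feedback
            if len(recent) > window:
                recent.pop(0)
            attempts += 1
            if len(recent) == window and sum(recent) / window >= success_threshold:
                break  # learner is ready for a new category
            if attempts >= max_attempts:
                # learning has stalled: stop and report categories mastered
                return len(introduced) - 1
    return len(introduced)
```

With a learner that simply memorizes (instance, category) pairs, the protocol runs to completion and reports all categories as mastered; a learner whose accuracy degrades as categories accumulate would instead stop at its breakpoint.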
