|Title||An experimental protocol for the evaluation of open-ended category learning algorithms|
|Author(s)||Chauhan, Aneesh; Lopes, Luis Seabra|
|Source||In: 2015 IEEE International Conference on Evolving and Adaptive Intelligent Systems, EAIS 2015. - Institute of Electrical and Electronics Engineers Inc. (2015 IEEE International Conference on Evolving and Adaptive Intelligent Systems, EAIS 2015) - ISBN 9781467366977|
|Event||IEEE International Conference on Evolving and Adaptive Intelligent Systems, EAIS 2015, Douai, 2015-12-01/2015-12-03|
|Publication type||Contribution in proceedings|
There has been a steady surge of interest across several sub-fields of machine learning in developing systems that learn in an open-ended manner. This is particularly visible in the fields of language grounding and data stream learning. These systems are designed to evolve as new data arrive, modifying and adjusting learned categories as well as accommodating new ones. Although open-ended learning shares some features with incremental learning, it cannot be characterized as standard incremental learning. This paper presents and discusses the key characteristics of open-ended learning, differentiating it from standard incremental approaches. The main contribution of this paper concerns the evaluation of such algorithms. Typically, the performance of learning algorithms is assessed using traditional train-test methods such as holdout and cross-validation. These evaluation methods are not suited to applications where environments and tasks can change, and where the learning system therefore frequently faces new categories. To address this, a well-defined and practical protocol is proposed. The utility of the protocol is demonstrated by evaluating and comparing a set of learning algorithms on the task of open-ended visual category learning.
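The proposed protocol itself is detailed in the paper; as a purely illustrative sketch (not the authors' exact procedure), one way such an open-ended evaluation can be organized is a simulated-teacher loop: the teacher introduces categories one at a time, keeps testing the learner on all categories introduced so far with corrective feedback on errors, and admits a new category only once recent accuracy exceeds a threshold, declaring a breakpoint if performance never recovers. All class names, parameters, and the toy data source below are assumptions made for the sketch.

```python
import random

random.seed(0)


class NearestCentroidLearner:
    """Toy incremental learner: keeps a running mean (centroid) per category."""

    def __init__(self):
        self.centroids = {}  # category -> (sum_x, sum_y, count)

    def learn(self, category, instance):
        sx, sy, n = self.centroids.get(category, (0.0, 0.0, 0))
        self.centroids[category] = (sx + instance[0], sy + instance[1], n + 1)

    def classify(self, instance):
        best, best_d = None, float("inf")
        for cat, (sx, sy, n) in self.centroids.items():
            cx, cy = sx / n, sy / n
            d = (instance[0] - cx) ** 2 + (instance[1] - cy) ** 2
            if d < best_d:
                best, best_d = cat, d
        return best


def sample(category):
    # Hypothetical data source: one well-separated Gaussian cluster per category.
    cx = cy = category * 5.0
    return (random.gauss(cx, 0.5), random.gauss(cy, 0.5))


def teaching_protocol(learner, max_categories=5, threshold=0.8,
                      window=10, patience=100):
    """Simulated teacher: introduce categories one at a time; admit the next
    category only when accuracy over the last `window` tests reaches the
    threshold. Returns the number of categories successfully learned."""
    known = []
    for cat in range(max_categories):
        known.append(cat)
        learner.learn(cat, sample(cat))  # teach the new category once
        recent = []
        for _ in range(patience):
            target = random.choice(known)
            inst = sample(target)
            correct = learner.classify(inst) == target
            recent.append(correct)
            if not correct:
                learner.learn(target, inst)  # corrective feedback
            if len(recent) >= window and \
                    sum(recent[-window:]) / window >= threshold:
                break  # learner is ready for a new category
        else:
            return len(known) - 1  # breakpoint: performance never recovered
    return len(known)


print(teaching_protocol(NearestCentroidLearner()))
```

The reported measure in such a setup would then be how many categories the learner handles before a breakpoint, rather than a fixed-size test accuracy as in holdout or cross-validation.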