The binomial-neighbour instance-based learner on a multiclass performance measure scheme
Keywords: Instance-based learning, k-Nearest neighbours, Lazy learners, Action recognition
This paper presents a novel instance-based learning methodology, the Binomial-Neighbour (B-N) algorithm. Unlike other k-Nearest Neighbour ...
In this paper, we propose a new method to select prototypes for an instance-based learner such as the k-nearest neighbor rule (k-NN). The problem of instance selection for instance-based learning can be defined as “the isolation of the smallest set of instances that enable us to predict...
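As a hedged illustration of what prototype selection for a k-NN learner can look like (a Hart-style condensed nearest neighbour sketch, not the method proposed in that paper; the function and variable names are my own), the idea is to keep only the instances needed for 1-NN to classify the rest correctly:

```python
import numpy as np

def condensed_nn(X, y):
    """Greedy 1-NN prototype selection (Hart-style sketch).

    Seeds the prototype set with one instance per class, then adds any
    instance the current prototypes misclassify, until a pass adds nothing.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    keep = [np.flatnonzero(y == c)[0] for c in np.unique(y)]
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in keep:
                continue
            # 1-NN prediction using only the current prototypes.
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep][np.argmin(d)] != y[i]:
                keep.append(i)  # misclassified: add it as a prototype
                changed = True
    return np.array(sorted(keep))

# Tiny usage example with two separable clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
idx = condensed_nn(X, y)
print(f"kept {len(idx)} of {len(X)} instances")
```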
To find the correct correspondence between an image region and the keyword “tiger”, a learner must be able to differentiate “tiger” regions from other noisy regions at the outset. In this paper, we formulate image annotation as a supervised learning problem under Multiple-Instance ...
It is chosen here as the main supervised learning model for inducing an online learner for RS pattern recognition. It is known to be fast and lightweight, making it suitable for real-time machine learning applications. Hence it is an appropriate choice for coupling with the EEAC-IBL preprocessing ...
Multiple instance learning (MIL) is a type of supervised learning, where instead of receiving a collection of individually labeled examples, the learner is given weakly labeled bags of instances. If the bag contains at least one positive instance, the bag is assigned a positive label, otherwise,...
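A minimal sketch of the standard MIL bag-labelling assumption described above (a bag is positive iff it contains at least one positive instance); the function name and labels here are illustrative, not taken from the paper:

```python
from typing import Iterable

def bag_label(instance_labels: Iterable[int]) -> int:
    """Standard MIL assumption: a bag is positive (1) if at least one
    of its instances is positive, otherwise negative (0)."""
    return int(any(label == 1 for label in instance_labels))

print(bag_label([0, 0, 1]))  # 1: a single positive instance makes the bag positive
print(bag_label([0, 0, 0]))  # 0: no positive instances
```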
have been made [34]. Another alternative is the generation of a cost-sensitive classifier starting from a learning algorithm plus a training collection and a cost distribution [18]. The Naïve Bayes learner is the most widely used algorithm. Although the independence assumption is...
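One hedged way to picture "a learning algorithm plus a training collection and a cost distribution" being turned into a cost-sensitive classifier is to map misclassification costs to per-example weights and hand them to a base learner; this generic reweighting sketch (using Gaussian Naïve Bayes from scikit-learn, with names and costs of my choosing) is not the specific construction cited in [18]:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def cost_sensitive_nb(X, y, class_cost):
    """Fit Naive Bayes with per-example weights derived from a cost
    distribution: examples whose misclassification is costlier weigh more."""
    weights = np.array([class_cost[label] for label in y], dtype=float)
    return GaussianNB().fit(X, y, sample_weight=weights)

# Toy usage: errors on class 1 are treated as 5x as costly as on class 0.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (80, 2)), rng.normal(1.5, 1, (20, 2))])
y = np.array([0] * 80 + [1] * 20)
model = cost_sensitive_nb(X, y, class_cost={0: 1.0, 1: 5.0})
print(model.predict(X[:5]))
```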
Sample selection methods aim to select a representative subset from the original dataset, in such a manner that the performance of the learner generated from the selected subset would be similar to (or even better than) that obtained on the original dataset. The main advantages in applying sample selectio...
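A hedged sketch of the evaluation criterion implied above: train a learner on a selected subset and compare it against one trained on the full dataset. A 1-NN classifier and a plain random subset are stand-ins of my choosing; the selection methods the snippet refers to are more elaborate.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Learner trained on the full training set.
full_acc = KNeighborsClassifier(1).fit(X_tr, y_tr).score(X_te, y_te)

# Learner trained on a 20% random sample of the training set.
idx = rng.choice(len(X_tr), size=len(X_tr) // 5, replace=False)
sub_acc = KNeighborsClassifier(1).fit(X_tr[idx], y_tr[idx]).score(X_te, y_te)

print(f"full training set: {full_acc:.3f}, 20% sample: {sub_acc:.3f}")
```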
The value of the attribute that is to be predicted is known to the learner in the training set, but unknown in the testing set. The theory demonstrates that cross-validation error has two components: error on the training set (inaccuracy) and sensitivity to noise (instability). This general...
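A hedged way to write that statement as a formula (the symbols are my own shorthand, not notation taken from the cited theory): the cross-validation error estimate decomposes as $E_{\mathrm{cv}} \approx E_{\mathrm{train}} + I$, where $E_{\mathrm{train}}$ is the inaccuracy (error on the training set) and $I$ is the instability (the learner's sensitivity to noise).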
Then, a novel identification scheme using hybrid features and a feature-weighted instance-based learner is put forward. Experimental results show that the proposed scheme is satisfactory in terms of classification accuracy and that our feature-weighted instance-based learner gives better results than classical ...
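A hedged sketch of what "feature-weighted" means for an instance-based learner: per-feature weights scale each dimension's contribution to the distance before the nearest-neighbour vote. The weights, features, and names below are illustrative; the paper's actual weighting scheme and hybrid features are not reproduced here.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x_query, weights, k=3):
    """Predict one query with a feature-weighted Euclidean distance:
    d(a, b) = sqrt(sum_j w_j * (a_j - b_j)^2)."""
    diffs = X_train - x_query
    dists = np.sqrt((weights * diffs ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]  # majority vote among the k neighbours

# Toy usage: the second feature is pure noise, so it gets a low weight.
rng = np.random.default_rng(0)
X = np.vstack([np.c_[rng.normal(0, 1, 50), rng.uniform(-5, 5, 50)],
               np.c_[rng.normal(3, 1, 50), rng.uniform(-5, 5, 50)]])
y = np.array([0] * 50 + [1] * 50)
print(weighted_knn_predict(X, y, np.array([2.8, 0.0]), weights=np.array([1.0, 0.1])))
```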
learner, into the Boosting framework for object detection. The new feature, called a multiple instance feature, is the Noisy-OR aggregation of the classification results of a bag of local visual descriptors. Essentially, each bag of local visual descriptors is considered as one local pa...
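A hedged sketch of the Noisy-OR aggregation named above: the bag-level score is one minus the product of the per-instance complements, so a single confident positive descriptor makes the whole bag score high. The function and variable names are illustrative.

```python
import numpy as np

def noisy_or(instance_probs):
    """Noisy-OR aggregation: P(bag positive) = 1 - prod_i (1 - p_i),
    where p_i is the classifier's positive probability for instance i."""
    p = np.asarray(instance_probs, dtype=float)
    return 1.0 - np.prod(1.0 - p)

print(noisy_or([0.1, 0.2, 0.9]))  # one strong instance dominates -> ~0.93
print(noisy_or([0.1, 0.1, 0.1]))  # no strong instance -> ~0.27
```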