The purpose of this paper is to derive optimal rules for sequential classification problems. In a sequential classification test (for instance, in an educational context), the decision is to classify a student as
classification error. The optimal test consists in choosing the maximum of weighted likelihood functions. The weights are generally very difficult to calculate [11], even in some simple cases. Furthermore, the minimax test may satisfy the equalization property, i.e., the worst classification errors...
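The maximum-weighted-likelihood decision described above can be sketched as follows. This is a minimal illustration, not the paper's construction: the Gaussian class-conditional densities and the hand-picked weights are our assumptions, standing in for the hard-to-compute minimax weights.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Class-conditional Gaussian likelihood."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def classify(x, params, weights):
    """Assign x to the class with the largest weighted likelihood.
    `params` is a list of (mu, sigma) per class; `weights` are
    illustrative stand-ins for the minimax weights."""
    scores = [w * gaussian_pdf(x, mu, s) for w, (mu, s) in zip(weights, params)]
    return max(range(len(scores)), key=scores.__getitem__)

# Two classes centred at 0 and 2, equal weights:
print(classify(-0.5, [(0.0, 1.0), (2.0, 1.0)], [0.5, 0.5]))  # 0
print(classify(2.5, [(0.0, 1.0), (2.0, 1.0)], [0.5, 0.5]))   # 1
```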
However, the optimal hyperplane is given by the common tangent to the ellipsoids at the first point of their tangency. During our optimisation scheme, we alternate between allowing these ellipsoids, albeit a penalised version of them, to intersect, i.e. \(h(\mathbf {w},\kappa )=0\), and ...
The framework of minimax sequential decision theory is proposed for solving such testing problems; that is, optimal rules are obtained by minimizing the maximum expected losses associated with all possible decision rules at each stage of testing. The main advantage of this approach is that costs of...
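The minimax selection at a single stage can be sketched as picking the decision whose worst-case expected loss over the unknown states is smallest. The loss table below is purely hypothetical, chosen to echo the educational testing setting; the names and numbers are ours.

```python
def minimax_decision(losses):
    """Pick the decision whose worst-case (over states) expected loss is smallest.
    `losses[d]` lists the expected loss of decision d under each possible state."""
    worst = {d: max(row) for d, row in losses.items()}
    return min(worst, key=worst.get)

# Hypothetical stage losses: classify now, or continue testing at a fixed cost.
losses = {
    "classify_positive": [0.0, 5.0],  # costly if the true state is the second one
    "classify_negative": [4.0, 0.0],
    "continue":          [1.0, 1.0],  # fixed cost of administering one more item
}
print(minimax_decision(losses))  # 'continue' (worst-case loss 1.0)
```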
A different problem arose in some cases when the optimal value was exactly 0: the solver was unable to find any feasible solution, probably because of rounding errors. To eliminate these problems we transformed the matrix games by adding a constant, slightly higher than the minimum matrix ...
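The shift is harmless because adding a constant \(c\) to every payoff raises the game value by exactly \(c\) while leaving the optimal strategies unchanged. A small sketch for 2×2 zero-sum games (closed-form mixed equilibrium, assuming no saddle point; the helper name is ours):

```python
def solve_2x2(A):
    """Mixed equilibrium of a 2x2 zero-sum game with no saddle point.
    Returns (p, value): p is the row player's probability of playing row 0."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom
    value = (a * d - b * c) / denom
    return p, value

A = [[1.0, -1.0], [-1.0, 1.0]]             # matching pennies: value 0
shift = 2.0                                 # anything above -min(entry) works
B = [[x + shift for x in row] for row in A]

p_a, v_a = solve_2x2(A)
p_b, v_b = solve_2x2(B)
print(p_a == p_b, v_b - v_a)  # True 2.0: same strategy, value shifted by the constant
```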
However, this set does not depend on the localization parameter h > 0; in other words, the quantities A and M are not involved in the selection of the optimal size of the local neighborhood given by (1.8). In contrast, the constants β, L are used for the derivation of the optimal size ...
Then with probability at least \(1 - \delta\) over the draws of the random sample, the optimal worst-case misclassification probability is given by \(1 - \alpha^* = \frac{1}{1 + \kappa^{*2}}\). When presented with a new input observation x, we make our prediction according to which side of the optimal ...
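As an illustration, the bound and the resulting decision rule can be evaluated directly: given \(\kappa^*\), the worst-case error is \(1/(1+\kappa^{*2})\), and prediction reduces to the sign of the hyperplane evaluation. A minimal sketch, with function names of our choosing:

```python
def worst_case_misclassification(kappa):
    """The minimax bound 1 - alpha* = 1 / (1 + kappa**2)."""
    return 1.0 / (1.0 + kappa ** 2)

def predict(w, b, x):
    """Classify by which side of the hyperplane w.x = b the point x falls on."""
    score = sum(wi * xi for wi, xi in zip(w, x)) - b
    return 1 if score >= 0 else -1

print(worst_case_misclassification(0.0))  # 1.0: no separation, no guarantee
print(worst_case_misclassification(3.0))  # 0.1: at most 10% worst-case error
```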
The classification boundary between \(u_i\) and \(v_i\) is a regression surface. MPMR uses the same training and testing dataset as used by the GP. The radial basis function \(K(\mathbf{x}_i, \mathbf{x}) = \exp\left(-\frac{(\mathbf{x}_i - \mathbf{x})(\mathbf{x}_i - \mathbf{x})^T}{2\sigma^2}\right)\), where \(\sigma\) is the width of the radial basis function, has been used as kernel ...
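For row vectors, \((\mathbf{x}_i - \mathbf{x})(\mathbf{x}_i - \mathbf{x})^T\) is just the squared Euclidean distance, so the kernel can be computed directly. A pure-Python sketch for illustration:

```python
import math

def rbf_kernel(xi, x, sigma):
    """Radial basis function kernel exp(-||xi - x||^2 / (2 sigma^2)),
    where sigma is the width parameter."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(xi, x))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

print(rbf_kernel([1.0, 2.0], [1.0, 2.0], 0.5))  # 1.0 for identical inputs
```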
For dealing with the issues of choosing a suitable kernel function and its optimal parameter, Dagher proposed a kernel-free quadratic surface support vector machine model [12] for binary classification. Luo et al. extended it to a soft-margin quadratic surface SVM (SQSSVM) model [13]. Furthe...