By exploiting the structure of the SPO loss function and an additional strong convexity assumption on the feasible region, we can dramatically improve the dependence on the dimension via an analysis and corresponding...
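For orientation, the SPO (smart predict-then-optimize) loss in question is typically defined as the excess decision cost incurred by optimizing against the predicted cost vector instead of the true one; the formulation below is the standard one from the predict-then-optimize literature and uses generic notation rather than the excerpt's own:

$$
\ell_{\mathrm{SPO}}(\hat{c}, c) = c^{\top} w^{*}(\hat{c}) - c^{\top} w^{*}(c), \qquad w^{*}(c) \in \operatorname*{arg\,min}_{w \in S} \; c^{\top} w,
$$

where $\hat{c}$ is the predicted cost vector, $c$ the realized one, and $S$ the feasible region. Intuitively, strong convexity of $S$ makes the minimizer $w^{*}(\hat{c})$ vary smoothly with the prediction, which is the kind of regularity such analyses exploit.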
Recently, metric learning and similarity learning have attracted considerable interest, and many models and optimization algorithms have been proposed. However, there is relatively little work on the generalization analysis of such methods. In this paper, we derive novel generalization bounds for metric...
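To make the learned objects concrete (standard definitions, not taken from the excerpted paper): metric learning typically fits a Mahalanobis distance and similarity learning a bilinear similarity,

$$
d_M(x, x') = \sqrt{(x - x')^{\top} M (x - x')}, \qquad s_M(x, x') = x^{\top} M x', \qquad M \succeq 0,
$$

with $M$ chosen so that same-class pairs are close (or similar) and different-class pairs are far (or dissimilar); the generalization question is how well constraints satisfied on training pairs transfer to the underlying distribution.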
Learning theory is rich in bounds that relate quantities such as the empirical error, the true error probability, the number of training vectors, and the VC dimension or a VC-related quantity. In his elegant theory of learning, Valiant [Vali 84] proposed to express...
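A representative bound of this kind (a standard VC-type result, quoted up to constants for illustration): with probability at least $1-\delta$ over the draw of $N$ training vectors, simultaneously for every classifier $h$ in a class of VC dimension $d_{\mathrm{VC}}$,

$$
R(h) \le \hat{R}_N(h) + O\!\left( \sqrt{ \frac{ d_{\mathrm{VC}} \log(N / d_{\mathrm{VC}}) + \log(1/\delta) }{ N } } \right),
$$

where $R(h)$ is the true error probability and $\hat{R}_N(h)$ the empirical error on the training set.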
then we present integral identities for ∗-differentiable convex functions, from which we provide certain estimates of the upper bounds for the trapezoid and midpoint formulas via the multiplicative fractional integral operator. We also show that the inequalities given here are an extension of some existing...
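The classical baseline that such estimates refine is the Hermite–Hadamard inequality, which sandwiches the integral mean of a convex function between the midpoint and trapezoid values (a standard result, stated here for orientation): for convex $f$ on $[a, b]$,

$$
f\!\left( \frac{a+b}{2} \right) \le \frac{1}{b-a} \int_{a}^{b} f(x)\, dx \le \frac{f(a) + f(b)}{2}.
$$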
(OOD) Input Domain. We examine an input domain with larger platforms compared to those used during training. Specifically, we extend the range of the x coordinate in the input vectors to cover [-10, 10]. The bounds for the other inputs remain the same as during training. For additional details...
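A minimal sketch of how the two input distributions might be generated, assuming the x coordinate is the first component of the input vector; the training range of [-5, 5], the number and bounds of the remaining inputs, and the uniform sampling are illustrative assumptions, not details from the text:

import numpy as np

rng = np.random.default_rng(seed=0)

def sample_inputs(n, x_range, other_bounds=(-1.0, 1.0), n_other=3):
    # x coordinate: the only input whose range differs between splits
    x = rng.uniform(x_range[0], x_range[1], size=(n, 1))
    # remaining inputs keep the same (assumed) bounds as during training
    rest = rng.uniform(other_bounds[0], other_bounds[1], size=(n, n_other))
    return np.hstack([x, rest])

train_inputs = sample_inputs(10_000, x_range=(-5.0, 5.0))    # assumed training range
ood_inputs   = sample_inputs(1_000,  x_range=(-10.0, 10.0))  # extended OOD range from the text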
Achieving small prediction error $R(\alpha)$ is the ultimate goal of (quantum) machine learning. As $P$ is generally not known, the training error $\hat{R}_S(\alpha)$ is often taken as a proxy for $R(\alpha)$. This strategy can be justified via bounds on the generalization error...
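Spelling out the quantities (generic definitions; the hypothesis $h_{\alpha}$ and loss $\ell$ stand in for whatever model class the excerpt uses): for data distribution $P$ and a training set $S = \{(x_i, y_i)\}_{i=1}^{N}$ drawn i.i.d. from $P$,

$$
R(\alpha) = \mathbb{E}_{(x,y) \sim P}\big[ \ell\big( h_{\alpha}(x), y \big) \big], \qquad \hat{R}_S(\alpha) = \frac{1}{N} \sum_{i=1}^{N} \ell\big( h_{\alpha}(x_i), y_i \big),
$$

and a bound on the generalization error $R(\alpha) - \hat{R}_S(\alpha)$ is exactly what licenses using the training error as a proxy for the prediction error.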
With the above heavy lifting in place, we can derive the empirical duality gap, which is the ultimate goal of our theoretical analysis. The empirical duality gap comprises the above two components. Combining the above bounds on the two gaps, we can bound the deviation between...
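Schematically, the combination step is a triangle inequality over the two gaps (the notation here is generic, since the excerpt does not reproduce its own definitions): writing $\hat{D}^{*}$ for the empirical dual optimum, $D^{*}$ for the population dual optimum, and $P^{*}$ for the population primal optimum,

$$
\big| \hat{D}^{*} - P^{*} \big| \le \big| \hat{D}^{*} - D^{*} \big| + \big| D^{*} - P^{*} \big|,
$$

so bounding each component separately bounds the empirical duality gap as a whole.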
In this paper, we first introduce the $\delta$-geometric mean and a new class of convex functions called $\delta$-GA-convex functions. After that, we...
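For reference, ordinary GA-convexity (geometric-arithmetic convexity), of which the $\delta$-GA-convex class is presumably a generalization (the $\delta$-geometric mean itself is the paper's own construction and is not reproduced in the excerpt), requires for all $a, b$ in the domain and $t \in [0, 1]$:

$$
f\big( a^{1-t} b^{t} \big) \le (1-t)\, f(a) + t\, f(b).
$$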