Information-Theoretic Generalization Bounds for SGLD via Data-Dependent Estimates. In this work, we improve upon the stepwise analysis of noisy iterative learning algorithms initiated by Pensia, Jog, and Loh (2018) and recently extended by Bu, Zou, and Veeravalli (2019). Our main contributions are...
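For readers unfamiliar with the algorithm these stepwise analyses target, the following is a minimal sketch of a single SGLD update (a stochastic-gradient step plus injected Gaussian noise); the quadratic loss, step size, and inverse temperature here are illustrative placeholders, not the setting studied in the paper.

```python
import numpy as np

def sgld_step(w, grad_fn, batch, lr, inv_temp, rng):
    """One SGLD update: a stochastic-gradient step plus isotropic Gaussian noise.

    Stepwise information-theoretic analyses bound how much information each such
    noisy update can reveal about the mini-batch, then sum over iterations.
    """
    noise = rng.normal(size=w.shape) * np.sqrt(2.0 * lr / inv_temp)
    return w - lr * grad_fn(w, batch) + noise

# Toy usage on a least-squares objective (placeholder problem).
rng = np.random.default_rng(0)
data = rng.normal(size=(32, 3))
grad_fn = lambda w, batch: batch.T @ (batch @ w - 1.0) / len(batch)

w = np.zeros(3)
for _ in range(100):
    batch = data[rng.choice(len(data), size=8, replace=False)]
    w = sgld_step(w, grad_fn, batch, lr=0.05, inv_temp=1e3, rng=rng)
```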
By exploiting the structure of the SPO loss function and an additional strong convexity assumption on the feasible region, we can dramatically improve the dependence on the dimension via an analysis, and corresponding bounds, that are akin to the margin guarantees in classification problems.
To view our results in the context of the quest for quantum advantage, it is important to note that we do not prove an advantage of quantum over classical machine learning. However, generalization bounds for QMLMs are necessary for understanding their potential for quantum advantage. Namely, ...
Metric learning and similarity learning have recently attracted considerable interest, and many models and optimization algorithms have been proposed. However, there is relatively little work on the generalization analysis of such methods. In this paper, we derive novel generalization bounds of metri...
Out-of-Distribution (OOD) Input Domain. We examine an input domain with larger platforms than those used during training. Specifically, we extend the range of the x coordinate in the input vectors to cover [-10, 10]. The bounds for the other inputs remain the same as during training. For additional de...
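As a sketch of how such an evaluation domain could be constructed: only the widened x range ([-10, 10]) is stated above, so the training-time bounds for the other coordinates (and their names) below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical training-time per-coordinate bounds; only the widened
# x range ([-10, 10]) comes from the text, the rest are placeholders.
TRAIN_BOUNDS = {"x": (-5.0, 5.0), "y": (0.0, 2.0), "vx": (-1.0, 1.0)}
OOD_BOUNDS = {**TRAIN_BOUNDS, "x": (-10.0, 10.0)}  # extend only the x coordinate

def sample_inputs(bounds, n, rng):
    """Draw n input vectors uniformly from the given per-coordinate bounds."""
    lows = np.array([lo for lo, _ in bounds.values()])
    highs = np.array([hi for _, hi in bounds.values()])
    return rng.uniform(lows, highs, size=(n, len(bounds)))

rng = np.random.default_rng(0)
ood_eval_inputs = sample_inputs(OOD_BOUNDS, n=1000, rng=rng)
```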
In this study, we describe a methodology for exploiting a specific type of domain knowledge in order to obtain tighter error bounds on the performance of classification via Support Vector Machines. The domain knowledge we consider is that the input space lies inside a specified convex polytope. ...
Here, we apply the technique to derive bounds for maximum entropy density estimation in general Markov random fields. Even though the analysis we use is standard, our results show that the generalization bound does not depend on the log partition function, but is instead upper bounded by the ...