Tong, "Communication-efficient sorting algorithms on reconfigurable array of processors with slotted optical buses," J. of Parallel Distribut. Comput. 57, pp. 166- 187, 1999.M. Hamdi,,C. Qiao,,Y. Pan,,and J. Ton
J. Mota, "Communication-Efficient Algorithms For Distributed Optimiza- tion," Ph.D. thesis, Carnegie Mellon University, PA, Technical University of Lisbon, Lisbon, Portugal, 2013.J. F. Mota, "Communication-efficient algorithms for distributed optimization," arXiv preprint arXiv:1312.0263, 2013....
1. Pre-trained base predictors: base predictors can be pre-trained on publicly available data, thus reducing the need for user data in training (see the sketch after this list).
2. Convergence guarantee: ensemble methods often require training relatively few parameters, which typically results in far fewer rounds of optimization an...
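To make point 1 concrete, here is a minimal sketch, assuming a setup where several frozen, publicly pre-trained base predictors feed a small trainable aggregator, so only the aggregation weights are optimized; all function and variable names below are illustrative, not from the source above.

```python
# Minimal sketch: ensemble over frozen pre-trained base predictors.
# Only the aggregation weights `alpha` are trained, so each optimization
# round touches very few parameters. All names here are illustrative.
import numpy as np

def ensemble_predict(base_outputs, alpha):
    """Weighted combination of base-predictor scores.

    base_outputs: (n_samples, n_predictors) scores from frozen models.
    alpha:        (n_predictors,) trainable aggregation weights.
    """
    return base_outputs @ alpha

def train_aggregator(base_outputs, labels, lr=0.1, rounds=100):
    """Fit only the aggregation weights with plain gradient descent on
    squared error; the base predictors themselves stay fixed."""
    n, k = base_outputs.shape
    alpha = np.full(k, 1.0 / k)           # start from a uniform ensemble
    for _ in range(rounds):
        residual = ensemble_predict(base_outputs, alpha) - labels
        grad = base_outputs.T @ residual / n
        alpha -= lr * grad
    return alpha
```

Because only `alpha` (a handful of scalars) is learned, each round is cheap and the user data never touches the base predictors' weights.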
Moreover, our algorithms are more communication-efficient than the prior state-of-the-art. For smooth loss functions, our algorithm achieves the optimal excess risk bound and has communication complexity that matches the non-private lower bound. Additionally, our algorithms are more computationally ...
In the literature on communication-efficient distributed learning, most existing work falls into two categories: (1) designing communication-efficient algorithms that reduce the number of communication rounds. For instance, [4] proposed DANE (Distributed Approximate NEwton...
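As background on how a DANE-style method spends its communication rounds, the following is a hedged sketch of one round specialized to distributed ridge regression, a quadratic case in which the local subproblem has a closed form; the function names, the quadratic specialization, and the default constants are our assumptions, not details from the paper.

```python
# Hedged sketch of one DANE-style round for distributed ridge regression.
# Each machine preconditions the globally averaged gradient with its own
# local Hessian, so only two vectors are communicated per round.
import numpy as np

def dane_round(parts, w, lam=0.1, eta=1.0, mu=0.1):
    """parts: list of (A_i, b_i) local data shards; w: current iterate."""
    d = w.size
    # Communication 1: average the local gradients into the global gradient.
    grads = [A.T @ (A @ w - b) / len(b) + lam * w for A, b in parts]
    g = np.mean(grads, axis=0)
    # Local step: for a quadratic local loss with Hessian H_i, the DANE
    # subproblem has the closed form w_i = w - eta * (H_i + mu I)^{-1} g.
    sols = []
    for A, b in parts:
        H = A.T @ A / len(b) + lam * np.eye(d)
        sols.append(w - eta * np.linalg.solve(H + mu * np.eye(d), g))
    # Communication 2: average the local solutions.
    return np.mean(sols, axis=0)
```

Because each round uses local curvature rather than a single gradient step, far fewer rounds are typically needed than with distributed SGD.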
In this paper, we study distributed algorithms for large-scale AUC maximization with a deep neural network as a predictive model. Although distributed learning techniques have been investigated extensively in deep learning, they are not directly applicable to stochastic AUC maximization with deep neural...
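For readers unfamiliar with the objective, AUC is the probability that a randomly drawn positive example outscores a randomly drawn negative one, and it is commonly optimized through a differentiable pairwise surrogate. The sketch below shows one generic such surrogate; it is illustrative only and is not claimed to be the formulation used in the paper above.

```python
# Illustrative only: AUC is the probability that a random positive example
# outscores a random negative one. A common differentiable surrogate
# averages a squared hinge over all positive-negative score pairs.
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    """Fraction of (positive, negative) pairs ranked correctly."""
    return np.mean((scores_pos[:, None] > scores_neg[None, :]).astype(float))

def pairwise_auc_surrogate(scores_pos, scores_neg, margin=1.0):
    """Mean squared-hinge loss over all pairwise score gaps; minimizing
    this pushes positives above negatives by at least `margin`."""
    diffs = scores_pos[:, None] - scores_neg[None, :]
    return np.mean(np.maximum(0.0, margin - diffs) ** 2)
```

The pairwise structure is what makes naive mini-batch methods awkward here: the loss couples examples across the whole dataset (and, in the distributed setting, across machines) rather than decomposing per example.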
Communication compression algorithms such as 1-bit SGD and 1-bit Adam help to reduce the volume of each communication. However, we find that simply using one of these techniques is not sufficient to solve the communication challenge, especially on low...
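The core mechanism behind such 1-bit methods is sign quantization with error feedback: each worker sends one sign bit per coordinate plus a single scale, and keeps the quantization residual locally so it is folded back into the next step. The sketch below is a generic textbook version of this idea, not the exact implementation of either system.

```python
# Hedged sketch of 1-bit compression with error feedback, the core idea
# behind methods like 1-bit SGD / 1-bit Adam (a generic version, not
# either system's exact implementation).
import numpy as np

def one_bit_compress(grad, error):
    """Quantize (grad + carried error) to sign * shared scale.

    Returns the compressed tensor and the new residual; the residual is
    kept locally and added back in the next step ("error feedback"),
    which is what keeps the method convergent despite 1-bit messages.
    """
    corrected = grad + error
    scale = np.mean(np.abs(corrected))       # one float communicated
    compressed = np.sign(corrected) * scale  # 1 bit per coordinate
    new_error = corrected - compressed       # residual stays on the worker
    return compressed, new_error
```

Relative to sending 32-bit floats, each message shrinks by roughly 32x, which reduces volume per round but does nothing about the number of rounds; hence the observation above that compression alone is not sufficient.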
Our experiments on real-world datasets show that PV-Tree significantly outperforms the existing parallel decision tree algorithms in the tradeoff between accuracy and efficiency.
Federated Optimization. We refer to the optimization problem implicit in federated learning as federated optimization, drawing a connection to (and a contrast with) distributed optimization. Federated optimization has several key properties that distinguish it from a typical distributed optimization problem. Non-IID: the training data on a given client is typically based on a particular user's usage of a mobile device, so any individual user's local dataset will not be representative of the population distribution.
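To illustrate the setting, here is a minimal sketch of one FedAvg-style round: each client runs a few epochs of local SGD on its own, possibly non-IID, shard, and the server averages the returned models weighted by local dataset size. This is the generic form of the idea; the least-squares loss and all names are our illustrative choices.

```python
# Minimal FedAvg-style round (generic form; names are illustrative).
# Each client trains locally on its own shard; the server averages the
# resulting models weighted by how much data each client holds.
import numpy as np

def local_sgd(w, X, y, lr=0.01, epochs=5):
    """A few epochs of least-squares SGD on one client's local data."""
    w = w.copy()
    for _ in range(epochs):
        for i in np.random.permutation(len(y)):
            w -= lr * (X[i] @ w - y[i]) * X[i]
    return w

def fedavg_round(w_global, clients, lr=0.01, epochs=5):
    """clients: list of (X_k, y_k) local shards; returns new global model."""
    updates = [local_sgd(w_global, X, y, lr, epochs) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())
```

Running several local epochs between synchronizations is exactly the lever that trades extra local computation for fewer communication rounds; the non-IID shards are what make the averaged model drift from the centralized optimum.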
This generalizes the one-crash-one-vote guarantee used in shared-memory algorithms to a many-crashes-many-votes approach. At the same time, this technique renders the algorithm message-efficient. Given a set of generated votes, the delayed propagation scheme ensures that each vote accounts for ...