Academic Seminar
Title: On Linear Convergence of ADMM for Decentralized Quantile Regression
Speaker: Prof. Heng Lian (City University of Hong Kong)
Abstract: The alternating direction method of multipliers (ADMM) is a natural method of choice for distributed parameter learning. For smooth and strongly convex consensus optimization problems, it has been shown that ADMM and some of its variants enjoy linear convergence in the distributed setting, much like in the traditional non-distributed setting. The optimization problem associated with parameter estimation in quantile regression is neither smooth nor strongly convex (although it is convex), and thus it can only have sublinear convergence at best. Although this suggests slow convergence, we show that, if the local sample size is sufficiently large compared to the parameter dimension and network size, distributed estimation in quantile regression actually exhibits linear convergence up to the statistical precision.
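To make the setting concrete, below is a minimal centralized sketch (not the distributed algorithm analyzed in the talk) of ADMM applied to quantile regression, using the standard splitting Xβ + r = y with the quantile check loss ρ_τ(z) = z(τ − 1{z<0}). The function names, penalty parameter σ, and iteration count are illustrative assumptions, not from the abstract.

```python
import numpy as np

def check_loss_prox(v, tau, lam):
    # Proximal operator of lam * rho_tau, applied elementwise, where
    # rho_tau(z) = z * (tau - 1{z < 0}) is the quantile check loss.
    return np.where(v > lam * tau, v - lam * tau,
                    np.where(v < -lam * (1 - tau), v + lam * (1 - tau), 0.0))

def admm_quantile(X, y, tau=0.5, sigma=1.0, iters=2000):
    # Solve min_beta sum_i rho_tau(y_i - x_i' beta) by ADMM on the
    # splitting X beta + r = y, in scaled dual form with penalty sigma.
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()
    u = np.zeros(n)
    # Cache the least-squares factor used in every beta-update.
    XtX_inv_Xt = np.linalg.solve(X.T @ X, X.T)
    for _ in range(iters):
        beta = XtX_inv_Xt @ (y - r - u)              # beta-update: least squares
        r = check_loss_prox(y - X @ beta - u, tau, 1.0 / sigma)  # r-update: prox
        u = u + X @ beta + r - y                     # scaled dual update
    return beta
```

As the abstract notes, the check loss is neither smooth nor strongly convex, so generic ADMM theory only gives sublinear rates for this iteration; the talk's result is that, with enough local samples, the distributed version nonetheless converges linearly up to statistical precision.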
Speaker Bio: Heng Lian is a Professor in the Department of Mathematics at City University of Hong Kong. He received bachelor's degrees in mathematics and computer science from the University of Science and Technology of China in 2000, and an M.S. in computer science, an M.A. in economics, and a Ph.D. in applied mathematics from Brown University in 2007. He has held positions at Nanyang Technological University (Singapore), the University of New South Wales (Australia), and City University of Hong Kong. He has published more than 30 papers in leading international journals, including the Annals of Statistics, Journal of the Royal Statistical Society Series B, Journal of the American Statistical Association, Journal of Machine Learning Research, and IEEE Transactions on Pattern Analysis and Machine Intelligence. His research interests include high-dimensional data analysis, functional data analysis, and machine learning.
Time: April 24, 2024, 10:30-11:30
Venue: Tencent Meeting ID: 831-581-695
Contact: Tao Hu