Academic Talk by Professor Ming Xiao, KTH Royal Institute of Technology, Sweden

Source: School of Information Science and Technology  Author: Xuming Fang  Date: 2019-07-02

Innovative Source Lecture Hall, Southwest Jiaotong University

Frontier Lecture Series on Wireless Communications and Information Coding

报告题目(Title): Decentralized Multi-Task Learning

报告专家(Speaker): Ming Xiao, Associate Professor, School of Electrical Engineering and Computer Science, Royal Institute of Technology, KTH, Sweden

报告时间(Time): July 10, 2019 (Wednesday), 9:30-10:30 am (7月10日上午9:30-10:30)

报告地点(Venue): Room X9521, Building 9 (9号楼, X9521)

主持人(Chair): Professor Xuming Fang(方旭明)

内容提要(Outline of the Talk):

With the development of various AI-based distributed applications, distributed machine learning has become increasingly important. Such problems are commonly formulated as multi-task learning (MTL), in which related tasks are learned jointly to improve generalization performance. To exploit the high learning speed of extreme learning machines (ELMs), we apply the ELM framework to the MTL problem, where the output weights of the ELMs for all tasks are learned collaboratively. We first present the ELM-based MTL problem in the centralized setting, which is solved by the proposed MTL-ELM algorithm. Because the data sets of different tasks are often geo-distributed, we then study decentralized machine learning. We formulate the decentralized ELM-based MTL problem as a majorized multi-block optimization with coupled bi-convex objective functions. To solve it, we propose the DMTL-ELM algorithm, a hybrid Jacobian and Gauss-Seidel proximal multi-block alternating direction method of multipliers (ADMM). Further, to reduce the computation load of DMTL-ELM, we present DMTL-ELM with first-order approximation (FO-DMTL-ELM). Theoretical analysis shows that convergence of DMTL-ELM and FO-DMTL-ELM to a stationary point can be guaranteed under certain conditions. Through simulations, we demonstrate the convergence of the proposed MTL-ELM, DMTL-ELM, and FO-DMTL-ELM algorithms, and also show that they can outperform existing MTL methods. Moreover, by adjusting the dimension of the hidden feature space, DMTL-ELM exhibits a trade-off between communication load and learning accuracy.
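For readers unfamiliar with ELMs, the core idea behind their learning speed can be sketched as follows: the hidden-layer weights are drawn at random and never trained, so only the output weights need to be learned, and that reduces to a linear least-squares problem with a closed-form solution. The minimal single-task sketch below illustrates this (it is not the speaker's MTL algorithm; function names, the tanh activation, and dimensions are illustrative assumptions):

```python
import numpy as np

def train_elm(X, T, n_hidden=64, seed=0):
    """Fit a basic ELM: random hidden layer, least-squares output weights.

    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
    """
    rng = np.random.default_rng(seed)
    # Hidden-layer weights are random and fixed -- the defining ELM property.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer feature map
    # Output weights: closed-form least-squares solution via pseudo-inverse.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

In the talk's MTL setting, it is these output weights (beta) for all tasks that are learned collaboratively, which is what makes the decentralized ADMM formulation natural: each node holds its own data and the coupling appears only through the shared hidden feature space.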

 

报告人简介(Short Biography of the Speaker):

Ming Xiao received his Ph.D. degree in 2007. He is currently an Associate Professor at the KTH Royal Institute of Technology, Stockholm, Sweden, with the Department of Information Science and Engineering in the School of Electrical Engineering and Computer Science. His main research interests are machine learning, network coding (with applications in the physical layer), cooperative relaying, distributed storage systems, channel codes (turbo codes, fountain codes), millimeter-wave communications, and energy-efficient communications. He has published more than 120 journal papers (in IEEE Trans. Inf. Theory, IEEE J. Sel. Areas Commun., IEEE Trans. Commun., IEEE Trans. Wireless Commun., IEEE Trans. Veh. Technol., etc.) and more than 10 conference papers (at IEEE ICC, IEEE Globecom, ICT, etc.). Dr. Xiao is an IEEE Senior Member and a senior researcher in the communications field. He was an editor for IEEE Trans. Commun. (2012-2017) and has been a senior editor for IEEE Communications Letters since 2015. He was also an editor for the IEEE JSAC special issue "Millimeter Wave Communications for Future Mobile Networks" (2016-2017), and is presently an editor for IEEE Trans. Wireless Commun. (2018-) and an area editor for the IEEE Open Journal of the Communications Society (2019-).