ACML 2016 tutorial on "Recent Advances in Distributed Machine Learning"

By Taifeng Wang and Wei Chen

To discuss any details before the conference, please contact Taifeng Wang.

Abstract

In recent years, artificial intelligence has demonstrated its power in many important applications. Besides novel machine learning algorithms (e.g., deep neural networks), their distributed implementations have played a critical role in these successes. In this tutorial, we will first review popular machine learning models and their corresponding optimization techniques. Second, we will introduce different ways of parallelizing machine learning algorithms, e.g., data parallelism, model parallelism, synchronous parallelism, and asynchronous parallelism, and discuss their theoretical properties, advantages, and limitations. Third, we will discuss recent research that tries to overcome the limitations of standard parallelization mechanisms, including advanced asynchronous parallelism and new communication and aggregation methods. Finally, we will show how to leverage popular distributed machine learning platforms, such as Spark MLlib, DMTK, and TensorFlow, to parallelize a given machine learning algorithm, in order to give the audience practical guidelines on this topic.
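To make the distinction between these parallelization modes concrete, the following is a minimal, framework-agnostic sketch (not part of the tutorial materials) of synchronous data parallelism: each simulated worker computes a gradient on its own shard of the data, and the gradients are averaged before a single model update. The data, model, and worker count are illustrative assumptions.

```python
import numpy as np

# Synthetic linear-regression data, split across simulated workers (data parallelism).
rng = np.random.default_rng(0)
X, true_w = rng.normal(size=(1024, 10)), rng.normal(size=10)
y = X @ true_w + 0.01 * rng.normal(size=1024)

num_workers = 4
shards = list(zip(np.array_split(X, num_workers), np.array_split(y, num_workers)))

def local_gradient(w, X_shard, y_shard):
    # Mean-squared-error gradient computed on one worker's shard only.
    residual = X_shard @ w - y_shard
    return X_shard.T @ residual / len(y_shard)

w = np.zeros(10)
lr = 0.1
for step in range(200):
    # Synchronous step: wait for every worker's gradient, then average and update once.
    grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]
    w -= lr * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - true_w))
```

In an asynchronous scheme, by contrast, each worker would push its gradient to a shared parameter store as soon as it finishes, without waiting for the others; the trade-offs between these two regimes are among the topics covered in the tutorial.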

Bio of the instructors

Taifeng Wang is a lead researcher in the Machine Learning Group, Microsoft Research Asia. His research interests include machine learning, distributed systems, search ads click prediction, and graph mining. Many of his technologies have been transferred to Microsoft's products and online services, such as Bing, Microsoft Advertising, and Azure. He is currently working on distributed machine learning and leading Microsoft's open-source project DMTK (Microsoft Distributed Machine Learning Toolkit). He has published tens of papers at top conferences and journals and has served as a PC member of many premium conferences such as KDD, WWW, NIPS, SIGIR, IJCAI, AAAI, and WSDM. He was a tutorial speaker at WWW 2011 and SIGIR 2012, and he organized a workshop on deep learning at WSDM 2015.

Wei Chen is a researcher in the Machine Learning Group, Microsoft Research Asia. Her current research interests include distributed machine learning, machine learning for games, deep learning theory, mechanism design, and learning to rank. She has published tens of papers at top conferences and journals such as IJCAI, AAAI, NIPS, COLT, EC, and ACM TIST. Her group won the research breakthrough award at MSR in 2012. Before joining Microsoft in July 2011, she obtained her Ph.D. in Mathematics from the Academy of Mathematics and Systems Science, Chinese Academy of Sciences.

Tutorial slides will be available here soon.
