Optimization in the Big Data Era

(05 Dec 2022–16 Dec 2022)

Organizing Committee



  • Defeng Sun (The Hong Kong Polytechnic University)

Contact Information

General Enquiries: ims-enquiry(AT)nus.edu.sg
Scientific Aspects Enquiries: mattohkc(AT)nus.edu.sg


The field of optimization has undergone tremendous growth in its applications to science, engineering, business, and finance over the past few decades. This growth has accelerated in recent years with the advent of big data analytics, where optimization forms the core engine for solving and analyzing the models used to extract meaningful information from data, whether for better decision making or for deeper insight into the data sources. Spurred by application needs and motivated by new emerging models in machine learning and data analytics, optimization research, in both theory and algorithms, has likewise undergone rapid transformation and progress in recent years. In particular, the demand for fast algorithms to solve extremely large-scale optimization problems arising from big data analytics has opened numerous exciting new research directions in optimization theory and algorithms. These advances in turn help to shape the development of optimization models and techniques in machine and statistical learning.

In data analytics, structured convex and nonconvex composite optimization models and their algorithms are essential components in analyzing and solving problems such as classification, pattern recognition, matrix completion, clustering, signal recovery, and dimension reduction. To date, a wide variety of fast first-order methods (including proximal gradient, stochastic gradient, block coordinate descent, ADMM, deep neural networks, and their distributed counterparts) and second-order methods (semismooth Newton augmented Lagrangian, proximal Newton) have been designed to solve a wide array of machine learning and high-dimensional statistical problems. These include low-rank matrix completion, low-rank factorization, sparse inverse covariance estimation, graphical model selection, sparse support vector machines, distance weighted discrimination, structured lasso problems, reinforcement learning, robust optimization, and more.
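To illustrate the flavor of the first-order methods mentioned above, here is a minimal proximal-gradient (ISTA) sketch for the lasso problem, one of the structured composite models discussed in this program. The data, the regularization parameter, and the function names are illustrative assumptions, not part of any speaker's method.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_lasso(A, b, lam, n_iter=500):
    # Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth least-squares term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Small synthetic example: sparse ground truth, noiseless observations.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = ista_lasso(A, b, lam=0.1)
```

Each iteration alternates a gradient step on the smooth term with the proximal map of the nonsmooth regularizer; the same template covers many of the composite models listed above by swapping in a different proximal operator.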

This program will bring together optimization researchers interested in the theory and design of algorithms for big data applications, and machine learning researchers and statisticians who rely heavily on optimization models and algorithms to analyze and solve their domain problems. The goal is to foster the exchange of ideas and collaboration, and to communicate the latest advances and challenges in both camps, highlighting opportunities for designing better models with stronger theoretical analysis and faster algorithms capable of solving even larger application problems. Ultimately, we hope that the IMS program will play a part in shaping the next generation of optimization methods and models for large-scale machine learning and statistical applications. The program will feature reports on the latest exciting developments in the interplay between optimization and machine and statistical learning. It will include talks on: (a) recent theoretical advances in high-dimensional statistics based on nonconvex optimization models and their applications in data analytics; (b) the success stories of machine learning, such as the design and application of deep and convolutional neural networks in supervised learning tasks including speech recognition and image classification; (c) recent advances in fast algorithms for solving large-scale structured optimization models arising in machine learning and statistics; (d) new directions such as second-order semismooth Newton methods that exploit structured Hessian sparsity in convex composite optimization problems, making the cost per iteration almost as cheap as that of a first-order method; (e) stochastic gradient methods, as well as the emerging techniques of streaming and sketching for problems with high redundancy in the data.


Workshop 1 (5–9 December 2022): IMS auditorium

Address of IMS
National University Of Singapore (Kent Ridge Campus)
3 Prince George's Park, Singapore 118402

Map of IMS: https://www.streetdirectory.com/sg/institute-for-mathematical-sciences-national-university-of-singapore-nus/3-prince-georges-park-118402/107327_6417.html 

Workshop 2 (12–16 December 2022): LT33

Address of LT33
National University Of Singapore (Kent Ridge Campus)
10 Kent Ridge Crescent, Singapore 119260

Map of LT33: https://www.streetdirectory.com/sg/lecture-theatres-lt33-national-of-singapore/10-kent-ridge-crescent-119260/100752_7275.html



Workshop 1

  • Ahmet Alacaoglu (University of Wisconsin–Madison, USA)
  • Krishna Balasubramanian (University of California at Davis, USA)
  • Kabir Chandrasekher (Stanford University, USA)
  • Robert Freund (MIT Sloan School of Management, USA)
  • Michael Gastner (Yale-NUS College, Singapore)
  • Geovani Grapiglia (UCLouvain, Belgium)
  • Serge Gratton (ENSEEIHT and Université de Toulouse, France)
  • Patrice Koehl (University of California, Davis, USA)
  • Donghwan Kim (Korea Advanced Institute of Science & Technology, Korea)
  • Guanghui Lan (Georgia Institute of Technology, USA)
  • Yin Tat Lee (University of Washington, USA)
  • Ching-pei Lee (Academia Sinica, Taipei)
  • Zhaosong Lu (University of Minnesota, USA)
  • Laura Palagi (Sapienza University of Rome, Italy)
  • Peter Richtárik (King Abdullah University of Science and Technology, Saudi Arabia)
  • Clément W. Royer (Université Paris Dauphine-PSL, France) *zoom
  • Katya Scheinberg (Cornell University, USA)
  • Yan Shuo Tan (National University of Singapore, Singapore)
  • Philippe Toint (Université de Namur, Belgium)
  • Stephen Wright (University of Wisconsin–Madison, USA)
  • Yancheng Yuan (The Hong Kong Polytechnic University, Hong Kong)
  • Junyu Zhang (National University of Singapore, Singapore)
  • Yangjing Zhang (Chinese Academy of Sciences, China)


Workshop 2

  • Waheed Bajwa (Rutgers University, USA)
  • Chao Ding (Chinese Academy of Sciences, China)
  • Ethan Xingyuan Fang (Duke University, USA)
  • Niao He (ETH Zurich, Switzerland)
  • Michael Hintermüller (Weierstrass Institute, Germany) *zoom
  • Nhat Ho (The University of Texas at Austin, USA)
  • Meixia Lin (Singapore University of Technology and Design, Singapore)
  • Yurii Nesterov (UCLouvain, Belgium)
  • Viet Anh Nguyen (The Chinese University of Hong Kong, China)
  • Jong-Shi Pang (University of Southern California, USA)
  • Houduo Qi (University of Southampton, UK)
  • Shoham Sabach (Technion - Israel Institute of Technology, Israel)
  • Anthony Man-Cho So (The Chinese University of Hong Kong, China)
  • Yong Sheng Soh (National University of Singapore, Singapore)
  • Mahdi Soltanolkotabi (University of Southern California, USA)
  • Akiko Takeda (The University of Tokyo, Japan)
  • Christos Thrampoulidis (University of British Columbia, Canada) *zoom
  • Xin Tong (National University of Singapore, Singapore)
  • Antonios Varvitsiotis (Singapore University of Technology and Design, Singapore)
  • Shuoguang Yang (Hong Kong University of Science and Technology, China)
  • Man-Chung Yue (The University of Hong Kong, China)
  • Anru Zhang (Duke University, USA)

