Mingyi Hong


Associate Professor
Electrical and Computer Engineering
University of Minnesota
6-109 Keller Hall
University of Minnesota, Minneapolis, MN 55455
Google Scholar Citations
Biographical Sketch, [Curriculum Vitae]
Email: mhong at umn.edu

Research Interests

My research focuses on contemporary issues in optimization, information processing, and wireless networking.

See here for our publications, and here for current projects.

Teaching

  • EE 3015 Signals and Systems, Spring 2019, UMN, ECE Department

  • EE 5239 Nonlinear Optimization, Fall 2017, 2018, 2019, 2020, UMN, ECE Department

RA and Postdoctoral Positions Available

We have research assistant and postdoctoral fellow positions available. If you are interested, please contact Dr. Hong via email.

Group News

  • Sept. 2020, papers accepted at NeurIPS: Four papers accepted at NeurIPS 2020, with two as spotlights.

    • X Chen, ZS Wu, M Hong, Understanding Gradient Clipping in Private SGD: A Geometric Perspective, available at [arXiv] (spotlight)

    • S Lu, M Razaviyayn, B Yang, K Huang, M Hong, SNAP: Finding Approximate Second-Order Stationary Solutions Efficiently for Non-convex Linearly Constrained Problems, available at [arXiv] (spotlight)

    • HT Wai, M Hong, Z Wang and Z Yang, Provably Efficient Neural GTD for Off-Policy Learning

    • X Chen, T Chen, H Sun, ZS Wu, M Hong, Distributed Training with Heterogeneous Data: Bridging Median- and Mean-Based Algorithms, available at [arXiv]

  • August. 2020, Best Paper Prize: Our 2016 paper on ADMM receives a Best Paper Award (Silver) from ICCM.

  • August. 2020, Dr. Prashant Khanduri joins the group as a postdoctoral fellow. Welcome! [Dr. Khanduri]

  • July. 2020, Intel-NSF Research Award: We are one of the 15 teams that received the NSF-Intel research award. We propose to understand how to use state-of-the-art artificial intelligence tools for wireless communication and networking; this is a joint collaboration with Dr. Dongning Guo at Northwestern University and Dr. Xiao Fu at Oregon State University; see [the Press Release from Intel] and [in-depth coverage at Forbes.com].

  • July. 2020 working paper: our work (with Hoi-To, Zhuoran, and Zhaoran) entitled A Two-Timescale Framework for Bilevel Optimization: Complexity Analysis and Application to Actor-Critic has been made available online at [arXiv]; This paper proposes and analyzes two-timescale algorithms for bi-level optimization, and builds an interesting connection between the proposed algorithms and actor-critic algorithms in reinforcement learning. In particular, we provide various complexity estimates for two-timescale bi-level optimization. See the sketch below.

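To convey the two-timescale idea, here is a minimal, purely illustrative sketch (hypothetical gradient oracles and stepsizes; not the paper's actual algorithm): the inner variable y is updated with a larger stepsize than the outer variable x, so that y approximately tracks the inner solution y*(x).

    # Bilevel problem: min_x f(x, y*(x)), where y*(x) = argmin_y g(x, y).
    # Two-timescale idea: update y fast (stepsize beta) and x slowly
    # (stepsize alpha << beta), so y approximately tracks y*(x).
    def two_timescale_step(x, y, grad_f_x, grad_g_y, alpha, beta):
        y = y - beta * grad_g_y(x, y)    # fast inner update on y
        x = x - alpha * grad_f_x(x, y)   # slow outer update on x
        return x, y

    # Toy instance: g(x, y) = 0.5*(y - x)**2, so y*(x) = x, and
    # f(x, y) = 0.5*x**2 + 0.5*y**2; for brevity we use only the
    # partial gradient of f in x, ignoring the dy*/dx correction.
    x, y = 5.0, 0.0
    for _ in range(2000):
        x, y = two_timescale_step(x, y,
                                  grad_f_x=lambda x, y: x,
                                  grad_g_y=lambda x, y: y - x,
                                  alpha=0.01, beta=0.1)
    print(round(x, 4), round(y, 4))  # both iterates approach 0
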
  • June. 2020 working paper: our work (with Junyu, Mengdi, and Shuzhong) entitled Generalization Bounds for Stochastic Saddle Point Problems has been made available online at [arXiv]; This paper studies generalization bounds for the empirical saddle point (ESP) solution to stochastic saddle point (SSP) problems. For SSP with Lipschitz continuous and strongly convex-strongly concave objective functions, we establish an O(1/n) generalization bound; we also provide generalization bounds under a variety of assumptions, including the cases without strong convexity and without bounded domains. We illustrate our results in two examples: batch policy learning in Markov decision processes, and mixed strategy Nash equilibrium estimation for stochastic games. In each of these examples, we show that a regularized ESP solution enjoys a near-optimal sample complexity. See the schematic below.
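
Schematically (notation paraphrased from the abstract, not taken verbatim from the paper), the population problem and its empirical counterpart over n i.i.d. samples are

    \min_{x}\max_{y}\; F(x,y) := \mathbb{E}_{\xi}\big[f(x,y;\xi)\big] \quad \text{(SSP)},
    \qquad
    \min_{x}\max_{y}\; F_n(x,y) := \frac{1}{n}\sum_{i=1}^{n} f(x,y;\xi_i) \quad \text{(ESP)},

and the result states that, for Lipschitz continuous and strongly convex-strongly concave f, the saddle point of the empirical problem generalizes to the population problem at rate O(1/n).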

  • June. 2020 working paper: our work (with Siliang, Haoran, and Junyu) entitled On the Divergence of Decentralized Non-Convex Optimization has been made available online at [arXiv]; We study a generic class of decentralized algorithms in which N agents jointly optimize the sum of local objectives. By constructing counter-examples, we show that when certain local Lipschitz conditions (LLC) on the local function gradients are not satisfied, most existing decentralized algorithms diverge, even if the global Lipschitz condition (GLC) is satisfied, i.e., the sum function f has Lipschitz gradient. We then design a first-order algorithm that is capable of computing stationary solutions with neither the LLC nor the GLC. In particular, we show that the proposed algorithm converges sublinearly to an epsilon-stationary solution, where the precise rate depends on various algorithmic and problem parameters. For instance, if the local functions are Qth-order polynomials, then the rate becomes O(1/ϵ^(Q−1)); this rate is tight for the special case Q=2, where each local function satisfies the LLC. The two conditions are written out below.

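For concreteness, the two conditions can be stated as follows (a paraphrase, not the paper's exact statements), where f = sum_i f_i is the global objective:

    \text{GLC:}\quad \|\nabla f(u)-\nabla f(v)\| \le L\,\|u-v\| \quad \forall\, u,v,
    \qquad
    \text{LLC:}\quad \|\nabla f_i(u)-\nabla f_i(v)\| \le L_i\,\|u-v\| \quad \forall\, u,v,\ \forall\, i.

The counter-examples satisfy the GLC while violating the LLC for some agents.
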
  • June. 2020, Two papers accepted by ICML 2020

    • Min-Max Optimization without Gradients: Convergence and Applications to Black-Box Evasion and Poisoning Attacks, joint work with Sijia, Songtao (IBM), Xiangyi, Yao Feng (Tsinghua), Kaidi (Northeastern), Abdullah, Una-May (MIT); We propose a zeroth-order min-max algorithm, and apply it to black-box min-max optimization and black-box evasion and poisoning attacks in adversarial machine learning; available at [arXiv]; a sketch of the zeroth-order building block appears below.

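For intuition, below is a generic two-point zeroth-order gradient estimator together with one alternating descent/ascent step built on it. This is only an illustrative sketch of the gradient-free ingredient; the paper's ZO min-max algorithm and its analysis differ in the details.

    import numpy as np

    def zo_grad(f, x, mu=1e-3, n_dirs=20, rng=np.random.default_rng(0)):
        # Two-point random-direction estimate of grad f(x) using only
        # function values: E[(f(x+mu*u) - f(x-mu*u)) / (2*mu) * u] ~ grad f(x).
        g = np.zeros_like(x)
        for _ in range(n_dirs):
            u = rng.standard_normal(x.shape)
            g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
        return g / n_dirs

    def zo_minmax_step(f, x, y, lr_x=0.01, lr_y=0.01):
        # Alternating descent (in x) / ascent (in y) using only queries of f.
        x = x - lr_x * zo_grad(lambda z: f(z, y), x)
        y = y + lr_y * zo_grad(lambda z: f(x, z), y)
        return x, y
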
  • June. 2020, Two-part paper accepted: our work (with Qingjiang, Xiao, and Tsung-Hui) entitled “Penalty Dual Decomposition Method For Nonsmooth Nonconvex Optimization, Part I and II” has been accepted by TSP; The paper develops a novel penalty-based method for solving constrained problems arising in signal processing; available at [arXiv];

  • June. 2020, overview paper accepted: our work (with Meisam, Tianjian, Songtao, Maziar and Maher) entitled “Non-convex Min-Max Optimization: Applications, Challenges, and Recent Theoretical Advances” has been accepted by SPM; The paper provides an overview of recent advances in solving a class of min-max problems; available at [arXiv];

  • June. 2020, paper accepted: our work (with Mehmet, Seyed, Burhan, and Steen) entitled “Dense Recurrent Neural Networks for Accelerated MRI: History-Cognizant Unrolling of Optimization Algorithms” has been accepted by JSTSP; The paper proposes a novel physics-driven deep learning method based on unrolling for medical imaging; available at [arXiv];

  • May. 2020 working paper: our work (with Xinwei, Sairaj, Wotao and Yang) entitled “FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data” has been made available online at [arXiv]; This paper discusses a number of issues with current federated learning algorithms, especially non-convergence issues and communication efficiency issues when the data is heterogeneous among the users; it also proposes a primal-dual based algorithm that attempts to resolve those issues. An interesting result is that we show there is a tradeoff between heterogeneity and potential communication savings. A simplified sketch of the local primal-dual update appears below.

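Below is a much-simplified sketch of the kind of primal-dual local update the paper builds on (hypothetical names and stepsizes; see the paper for the actual FedPD algorithm and its guarantees). Each client approximately minimizes a local augmented Lagrangian, then updates a dual variable that penalizes disagreement with the server model.

    import numpy as np

    def client_update(grad_i, x_bar, lam, eta=0.05, rho=1.0, local_steps=10):
        # Approximately minimize the local augmented Lagrangian
        #   f_i(x) + <lam, x - x_bar> + (rho/2) * ||x - x_bar||^2
        x = x_bar.copy()
        for _ in range(local_steps):
            g = grad_i(x) + lam + rho * (x - x_bar)
            x = x - eta * g
        lam = lam + rho * (x - x_bar)   # dual step on the consensus constraint
        return x, lam

The server then aggregates the clients' updates into a new x_bar; communicating only after several local steps is what creates the tradeoff between heterogeneity and communication savings mentioned above.
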
  • April. 2020, paper accepted: our work (with Guoyong, Xiao) entitled “Spectrum Cartography via Coupled Block-Term Tensor Decompositions” has been accepted by TSP; available at [arXiv];

  • April. 2020, paper accepted: our work (with Kexin, Xiao, et al.) entitled “Multi-user Adaptive Video Delivery over Wireless Networks: A Physical Layer Resource-Aware Deep Reinforcement Learning Approach” has been accepted by TCSVT; available at [arXiv];

  • Feb. 2020, paper accepted: our work (with Songtao, Ioanis and Yongxin) entitled “Hybrid Block Successive Approximation for One-Sided Non-Convex Min-Max Problems: Algorithms and Applications” has been accepted by TSP; available at [arXiv];

  • Jan. 2020, paper accepted: our survey paper (with Tsung-Hui, Hoi-To, Xinwei and Songtao) entitled “Distributed Learning in the Non-Convex World: From Batch to Streaming Data, and Beyond” has been accepted by IEEE SPM; available here;

  • Dec. 2019 working paper: our work (with Junyu and Shuzhong) entitled “On Lower Iteration Complexity Bounds for the Saddle Point Problems” has been made available online; available at [arXiv];

  • Nov. 2019 M. visited Princeton University (Hosts: Yuxin Chen and Jason Lee);

  • Oct. 2019 working paper: our work (with Haoran and Songtao) entitled “Improving the Sample and Communication Complexity for Decentralized Non-Convex Optimization: A Joint Gradient Estimation and Tracking Approach” has been made available online; available at [arXiv];

  • Sept. 2019 papers accepted: We have 3 papers accepted by NeurIPS 2019: 1) joint work with Hoi-To, Zhaoran, Zhuoran and Kexin on using min-max analysis for policy evaluation; 2) joint work with Zhuoran, Yongxin, Zhaoran on global convergence of actor-critic; 3) joint work with Sijia, Xiangyi, Kaidi, Xingguo, Xue Lin and David on a zeroth-order Adam method;

  • Aug. 2019, grant approved: The proposal entitled “Online Modeling of Heterogeneous Autonomy” is funded by AFOSR from 2019-2022. We propose to understand the behavior of autonomous agents by using reinforcement learning;

  • July. 2019, grant approved: The proposal entitled “CIF: Small: A Simple and Unifying Optimization Framework for Signal and Information Processing Problems with Min-Max Structures” is funded by NSF from 2019-2022. We propose to understand the challenging problem of min-max optimization;

  • July. 2019 paper accepted: our work (with Haoran, and Harish, Aliye and Mike at Bell Labs) entitled Deep Learning Based Preamble Detection and TOA Estimation has been accepted by Globecom 2019;

  • June 2019 paper accepted: our work (with Xiangyi, Sheng, Chao and Mohammadkazem) entitled A Deep Learning Method for Online Capacity Estimation of Lithium-Ion Batteries has been accepted by Journal of Energy Storage;

  • June 2019 paper accepted: our work (with Haoran) entitled Distributed Non-Convex First-Order Optimization and Information Processing: Lower Complexity Bounds and Rate Optimal Algorithms has been accepted by IEEE TSP; available at [arXiv.org];

  • June. 2019 M. attended CTW 2019 and presented our work on solving min-max problems; see [here] for slides;

  • June. 2019, Songtao presented his work on alternating gradient descent as a long talk in ICML 2019;

  • May. 2019 paper accepted: our work (with Meisam, Navid and Tom) entitled “A Doubly Stochastic Gauss-Seidel Algorithm for Solving Linear Equations and Certain Convex Minimization Problems” has been accepted by MP Series B; available at [arXiv.org];

  • May. 2019 M. attended ICLR 2019 and presented two papers; see [here] for a blog from IBM about our research on analyzing the Adam algorithm for non-convex problems;

  • April. 2019 paper accepted: our work (with Songtao and Zhengdao) entitled “On the Sublinear Convergence of Randomly Perturbed Alternating Gradient Descent to Second Order Stationary Solutions” has been accepted by ICML 2019; available at [arXiv.org];

  • Mar. 2019 paper accepted: our work (with Xiangfeng, Shiqian, Meisam, Tsung-Hui and Tom) entitled “A Block Successive Upper Bound Minimization Method of Multipliers for Linearly Constrained Convex Optimization” has been accepted by MOR; available at [Optimization-Online];

  • Mar. 2019, Xinwei Zhang joined our group as a Ph.D. student. Thanks, Xinwei, for deciding to stay after working with us since Fall 2018.

  • Mar. 2019, grant approved: The proposal entitled “Multi-Aspect Intelligence Fusion and Analytics: Models, Identifiability and Computation” (UMN lead, joint with Dr. Xiao Fu at Oregon State) is funded by ARO from 2019-2022. We propose to use novel tensor based optimization methods to deal with multi-aspect data;

  • Feb. 2019 working paper: our work (with Songtao, Ioanis and Yongxin) entitled “Hybrid Block Successive Approximation for One-Sided Non-Convex Min-Max Problems: Algorithms and Applications” has been made available online; available at [arXiv];

  • Feb. 2019 three papers accepted by ICASSP 2019;

  • Jan. 2019 paper accepted: our work (with Davood) entitled “Perturbed Proximal Primal Dual Algorithm for Nonconvex Nonsmooth Optimization” has been accepted by MP Series B;

  • Dec. 2018 paper accepted: our work (with Xiangyi, Sijia and Ruoyu) entitled “On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization” has been accepted by ICLR 2019; available at [openreview.net];

This paper studies a class of adaptive gradient based momentum algorithms that update the search directions and learning rates simultaneously using past gradients. We develop an analysis framework and a set of mild sufficient conditions that guarantee the convergence of Adam-type methods, with a convergence rate of order O(log(T)/sqrt(T)) for non-convex stochastic optimization. We show that the conditions are essential, by identifying concrete examples in which violating the conditions makes an algorithm diverge. Besides providing one of the first comprehensive analyses of Adam-type methods in the non-convex setting, our results can also help practitioners easily monitor the progress of algorithms and determine their convergence behavior.
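
For reference, vanilla Adam is one instance of the “Adam-type” family analyzed here: both the search direction and the effective per-coordinate learning rate are built from past gradients (this sketch is just standard Adam, not the full class covered by the analysis).

    import numpy as np

    def adam_type_step(x, g, m, v, t, alpha=1e-3,
                       beta1=0.9, beta2=0.999, eps=1e-8):
        m = beta1 * m + (1 - beta1) * g        # direction from past gradients
        v = beta2 * v + (1 - beta2) * g ** 2   # per-coordinate scaling from
                                               # past squared gradients
        m_hat = m / (1 - beta1 ** t)           # bias correction, t >= 1
        v_hat = v / (1 - beta2 ** t)
        x = x - alpha * m_hat / (np.sqrt(v_hat) + eps)
        return x, m, v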

  • Dec. 2018 paper accepted: our work (with Xiangyi, Sijia and Ping-Yu) entitled “signSGD via Zeroth-Order Oracle” has been accepted by ICLR 2019; available at [openreview.net];
