
Mingyi Hong

Associate Professor
Electrical and Computer Engineering
University of Minnesota
6-109 Keller Hall
University of Minnesota, Minneapolis, MN 55455
Google Scholar Citations
Biographical Sketch, [Curriculum Vitae]
Email: mhong at umn.edu

Research Interests

My research focuses on contemporary issues in optimization, information processing, and wireless networking.

See here for our publications, and here for our current projects.

Teaching

  • EE 3015 Signals and Systems, Spring 2019, UMN, ECE Department

  • EE 5239 Nonlinear Optimization, Fall 2017, 2018, 2019, 2020, UMN, ECE Department

RA and Postdoctoral Positions Available

We have research assistant and postdoctoral fellow positions available. If you are interested, please contact Dr. Hong via email.

Group News

  • Feb. 2021, best student paper award: our work (with Tianyi, Xinwei, Wotao) entitled Hybrid Federated Learning: Algorithms and Implementation has received the Best Student Paper Award at the NeurIPS 2020 Workshop on Scalability, Privacy, and Security in Federated Learning (NeurIPS-SpicyFL 2020). The paper is online at [arXiv]; also see the slides here.

  • Dec. 2020, working paper: our work (with Tianyi, Kaiqing, Han) entitled Asynchronous Advantage Actor Critic: Non-asymptotic Analysis and Linear Speedup has been made available online at [arXiv]. This paper analyzes the complexity of the popular A3C algorithm and shows that its convergence rate (and hence its efficiency) improves as the number of nodes increases; a toy illustration of this speedup effect is sketched below.
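
    The linear speedup comes, roughly, from parallelism averaging out gradient noise. A toy illustration of that intuition (our own sketch with a made-up noise model, not the paper's analysis):

        import numpy as np

        # K parallel workers each return a noisy estimate of the same gradient.
        # Averaging the K estimates cuts the variance by a factor of K, which is
        # the mechanism behind the linear speedup in the number of workers.
        true_grad, noise_std = 1.0, 2.0
        for K in [1, 4, 16]:
            est = true_grad + noise_std * np.random.randn(100_000, K).mean(axis=1)
            print(K, round(est.var(), 4))  # decays like noise_std**2 / K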

  • Dec. 2020, working paper: our work (with Haoran, Xiao, Wenqiang, Tsung-Hui, Minghe) entitled Learning to Continuously Optimize Wireless Resource In Episodically Dynamic Environment has been made available online at [arXiv]. This paper proposes a new approach based on continual learning, which helps machine-learning-based wireless communication strategies keep optimizing even when the environment is time-varying. Please see below for our basic system schematic.

[Figure: basic system schematic of the proposed continual learning approach]
  • Dec. 2020, working paper: our work (with Tianyi, Xinwei, Wotao) entitled Hybrid FL: Algorithms and Implementation has been made available online at [arXiv]. This paper proposes a new formulation and a novel algorithm for the hybrid federated learning setting, where the distributed agents hold neither the complete data nor the complete feature set. The proposed algorithm is shown to perform well, and sometimes can outperform centralized algorithms. The setting of the proposed problem is illustrated below.

[Figure: the hybrid federated learning setting]
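
    To make the "hybrid" setting concrete, here is a toy data partition (our own made-up 4x4 example): in hybrid FL, no single agent holds all samples (rows) or all features (columns) of the data matrix.

        import numpy as np

        D = np.arange(16.0).reshape(4, 4)    # rows = samples, columns = features
        agent_1 = D[:2, :2]                  # samples 0-1, features 0-1
        agent_2 = D[:2, 2:]                  # samples 0-1, features 2-3
        agent_3 = D[2:, :2]                  # samples 2-3, features 0-1
        agent_4 = D[2:, 2:]                  # samples 2-3, features 2-3
        # horizontal FL would split rows only; vertical FL would split columns
        # only; here every agent sees a strict subset of both.
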
  • Dec. 2020, working paper: our work (with Junyu) entitled First-Order Algorithms Without Lipschitz Gradient: A Sequential Local Optimization Approach has been made available online at [arXiv]. This paper focuses on an important class of optimization problems whose objectives lack a globally Lipschitz gradient. We propose a sequential local optimization framework, which is capable of adapting a number of existing first-order methods so that they exploit local Lipschitz continuity. Numerical results on tensor factorization and linear neural networks show that the proposed methods are very efficient; a toy sketch of the local-Lipschitz idea is given below.
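
    A toy sketch of the local-Lipschitz idea (our own illustration, not the paper's framework; the objective ||x||^4, the ball radius, and the Hessian bound are made up for the example): since f(x) = ||x||^4 has no globally Lipschitz gradient, each step uses a Lipschitz constant valid only on a small ball around the current iterate and stays inside that ball.

        import numpy as np

        def grad(x):
            return 4.0 * np.dot(x, x) * x        # gradient of f(x) = ||x||**4

        def local_lipschitz(x, r):
            # bound on the Hessian norm of f over the ball B(x, r):
            # ||H(z)|| <= 12 ||z||**2 <= 12 (||x|| + r)**2
            return 12.0 * (np.linalg.norm(x) + r) ** 2

        x, r = np.array([2.0, -1.5]), 0.5        # start point and local radius
        for _ in range(200):
            step = 1.0 / local_lipschitz(x, r)   # stepsize that is safe locally
            d = -step * grad(x)
            if np.linalg.norm(d) > r:            # stay inside the local ball;
                d *= r / np.linalg.norm(d)       # the ball is recentered next pass
            x = x + d
        print(np.linalg.norm(x))                 # approaches 0, the minimizer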

  • Oct. 2020, paper accepted: our work (with Yi, Ming-Min, and Min-Jian) entitled “Learned conjugate gradient descent network for massive MIMO detection” has been accepted by TSP. The paper develops a novel deep-learning-based algorithm for solving the massive MIMO detection problem; available at [arXiv].

  • Sept. 2020, papers accepted at NeurIPS: four papers accepted at NeurIPS 2020, with two as spotlights.

    • X Chen, ZS Wu, M Hong, Understanding Gradient Clipping in Private SGD: A Geometric Perspective, available at [arXiv] (spotlight)

    • S Lu, M Razaviyayn, B Yang, K Huang, M Hong, SNAP: Finding Approximate Second-Order Stationary Solutions Efficiently for Non-convex Linearly Constrained Problems, available at [arXiv] (spotlight)

    • HT Wai, M Hong, Z Wang and Z Yang, Provably Efficient Neural GTD for Off-Policy Learning

    • X Chen, T Chen, H Sun, ZS Wu, M Hong, Distributed Training with Heterogeneous Data: Bridging Median- and Mean-Based Algorithms, available at [arXiv]

  • Sept. 2020, Dr. Songtao Lu joined the IBM T. J. Watson Research Center as a Research Staff Member.

  • Aug. 2020, best paper prize: our 2016 paper on ADMM received a Best Paper Award (Silver) from ICCM.

  • Aug. 2020, Dr. Prashant Khanduri joins the group as a postdoctoral fellow; welcome! [Dr. Khanduri]

  • July 2020, Intel-NSF research award: we are one of the 15 teams that received the NSF-Intel research award. We propose to understand how to use state-of-the-art optimization and learning tools for wireless communication and networking. This is a collaboration with Dr. Dongning Guo at Northwestern University and Dr. Xiao Fu at Oregon State University; see [the Press Release from Intel] and [in-depth coverage at Forbes.com].

  • July 2020, working paper: our work (with Hoi-To, Zhuoran, and Zhaoran) entitled A Two-Timescale Framework for Bilevel Optimization: Complexity Analysis and Application to Actor-Critic has been made available online at [arXiv]. This paper proposes and analyzes two-timescale algorithms for bilevel optimization and builds an interesting connection between the proposed algorithms and actor-critic algorithms in reinforcement learning. In particular, we provide various complexity estimates for two-timescale bilevel optimization. See the image below.

[Figure: the two-timescale framework for bilevel optimization]
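
    A toy two-timescale sketch (our own made-up quadratic instance, not the paper's algorithm): the inner variable y is updated with a larger stepsize so that it tracks the inner solution y*(x), while the outer variable x follows a slower (hyper)gradient step.

        # outer: min_x f(x, y*(x)),  f(x, y) = 0.5*x**2 + 0.5*(y - 1)**2
        # inner: y*(x) = argmin_y 0.5*(y - x)**2, i.e., y*(x) = x
        # true hypergradient: d/dx f(x, y*(x)) = 2*x - 1, minimized at x = 0.5
        x, y = 3.0, 0.0
        alpha, beta = 0.01, 0.1          # slow (outer) vs. fast (inner) stepsizes
        for _ in range(5000):
            y -= beta * (y - x)          # fast timescale: track y*(x)
            # hypergradient via the implicit-function rule; here dy*/dx = 1
            x -= alpha * (x + 1.0 * (y - 1.0))
        print(x)                         # converges to 0.5
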
  • June 2020, working paper: our work (with Junyu, Mengdi, and Shuzhong) entitled Generalization Bounds for Stochastic Saddle Point Problems has been made available online at [arXiv]. This paper studies generalization bounds for the empirical saddle point (ESP) solution to stochastic saddle point (SSP) problems. For SSPs with Lipschitz continuous and strongly convex-strongly concave objective functions, we establish an O(1/n) generalization bound; we also provide generalization bounds under a variety of assumptions, including cases without strong convexity and without bounded domains. We illustrate our results in two examples: batch policy learning in Markov decision processes, and mixed-strategy Nash equilibrium estimation for stochastic games. In each of these examples, we show that a regularized ESP solution enjoys a near-optimal sample complexity. The SSP/ESP setup is sketched below.
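
    For concreteness, a LaTeX sketch of the setup in our own notation (the gap shown is one natural generalization measure; it need not be the exact metric used in the paper):

        % population SSP problem vs. its empirical version (solved by the ESP)
        \min_{x \in X} \max_{y \in Y} \; \Phi(x,y) := \mathbb{E}_{\xi}\big[F(x,y;\xi)\big]
        \quad\text{vs.}\quad
        \min_{x \in X} \max_{y \in Y} \; \widehat{\Phi}_n(x,y) := \frac{1}{n}\sum_{i=1}^{n} F(x,y;\xi_i)
        % for Lipschitz, strongly convex-strongly concave F, the ESP solution
        % (\hat{x}_n, \hat{y}_n) of the empirical problem satisfies
        \mathbb{E}\Big[\max_{y \in Y} \Phi(\hat{x}_n, y) - \min_{x \in X} \Phi(x, \hat{y}_n)\Big] = O(1/n)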

  • June 2020, working paper: our work (with Siliang, Haoran, and Junyu) entitled On the Divergence of Decentralized Non-Convex Optimization has been made available online at [arXiv]. We study a generic class of decentralized algorithms in which N agents jointly optimize the sum of local objectives. By constructing counter-examples, we show that when certain local Lipschitz conditions (LLC) on the local gradients are not satisfied, most existing decentralized algorithms diverge, even if the global Lipschitz condition (GLC) is satisfied, i.e., the sum function f has a Lipschitz gradient. We then design a first-order algorithm that can compute stationary solutions without either the LLC or the GLC. In particular, we show that the proposed algorithm converges sublinearly to an ε-stationary solution, where the precise rate depends on various algorithmic and problem parameters. If the local functions are Q-th order polynomials, the rate becomes O(1/ε^(Q−1)); this rate is tight for the special case Q = 2, where each local function satisfies the LLC. A toy divergence example is sketched below.
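
    A toy two-agent illustration of the divergence phenomenon (our own made-up example, not a construction from the paper): with f1(x) = x^4 and f2(x) = -x^4 + x^2, the sum f1 + f2 = x^2 satisfies the GLC, yet the local gradients admit no global Lipschitz control, and plain decentralized gradient descent (DGD) with a fixed stepsize blows up.

        import numpy as np

        grads = [lambda x: 4 * x**3,             # gradient of f1(x) = x**4
                 lambda x: -4 * x**3 + 2 * x]    # gradient of f2(x) = -x**4 + x**2
        W = np.array([[0.5, 0.5], [0.5, 0.5]])   # doubly stochastic mixing matrix
        x = np.array([3.0, 3.0])                 # both agents start at x = 3
        alpha = 0.1                              # fixed stepsize
        for t in range(8):
            # DGD: mix with neighbors, then take a local gradient step
            x = W @ x - alpha * np.array([g(xi) for g, xi in zip(grads, x)])
            print(t, x)                          # the iterates grow without bound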

  • June 2020, two papers accepted by ICML 2020 (toy sketches for both follow after the list):

    • Min-Max Optimization without Gradients: Convergence and Applications to Black-Box Evasion and Poisoning Attacks, joint work with Sijia, Songtao (IBM), Xiangyi, Yao Feng (Tsinghua), Kaidi (Northeastern), Abdullah, Una-May (MIT). We propose a zeroth-order min-max algorithm and apply it to black-box min-max optimization and to black-box evasion and poisoning attacks in adversarial machine learning; available at [arXiv].

    • Improving the Sample and Communication Complexity for Decentralized Non-Convex Optimization: Joint Gradient Estimation and Tracking, joint work with Haoran, Songtao. We propose a sample-, communication-, and computation-efficient algorithm, DGET, for decentralized non-convex optimization; available at [arXiv].
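
    For the first paper above, a toy sketch of the zeroth-order ingredient (our own illustration with a made-up saddle function, stepsizes, and estimator parameters; the paper's ZO-Min-Max algorithm has more structure): gradients are estimated from function values only, then the method descends in x and ascends in y.

        import numpy as np

        def f(x, y):                              # toy saddle: stationary at (0, 0)
            return x**2 - y**2 + x * y

        def zo_grad(fn, z, mu=1e-4, samples=20):
            # two-point random estimator: E[(fn(z + mu*u) - fn(z)) / mu * u] ~ grad
            g = np.zeros_like(z)
            for _ in range(samples):
                u = np.random.randn(*z.shape)
                g += (fn(z + mu * u) - fn(z)) / mu * u
            return g / samples

        x, y = np.array([1.0]), np.array([1.0])
        for _ in range(2000):
            x = x - 0.01 * zo_grad(lambda x_: f(x_, y), x)   # descent in x
            y = y + 0.01 * zo_grad(lambda y_: f(x, y_), y)   # ascent in y
        print(x, y)                               # both end up near 0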

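    For the second paper above, a toy sketch of generic gradient tracking, the building block that this line of work refines (our own quadratic example; DGET itself additionally tracks variance-reduced stochastic gradients): each agent maintains a tracker y_i of the network-average gradient and uses it in place of its local gradient.

        import numpy as np

        n = 3
        W = np.full((n, n), 1.0 / n)             # mixing matrix (complete graph)
        targets = np.array([1.0, 2.0, 6.0])      # f_i(x) = 0.5 * (x - targets[i])**2
        grad = lambda x: x - targets             # stacked local gradients

        x = np.zeros(n)
        y = grad(x)                              # trackers start at local gradients
        alpha = 0.3
        for _ in range(100):
            x_new = W @ x - alpha * y            # consensus + tracked-gradient step
            y = W @ y + grad(x_new) - grad(x)    # tracker follows the avg gradient
            x = x_new
        print(x)                                 # all agents reach ~3.0, the
                                                 # minimizer of the average loss
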
  • June 2020, two-part paper accepted: our work (with Qingjiang, Xiao, and Tsung-Hui) entitled “Penalty Dual Decomposition Method for Nonsmooth Nonconvex Optimization, Parts I and II” has been accepted by TSP. The paper develops novel penalty-based methods for solving constrained problems arising in signal processing; available at [arXiv].

  • June 2020, overview paper accepted: our work (with Meisam, Tianjian, Songtao, Maziar, and Maher) entitled “Non-convex Min-Max Optimization: Applications, Challenges, and Recent Theoretical Advances” has been accepted by SPM. The paper provides an overview of recent advances in solving a class of min-max problems; available at [arXiv].

  • June 2020, paper accepted: our work (with Mehmet, Seyed, Burhan, and Steen) entitled “Dense Recurrent Neural Networks for Accelerated MRI: History-Cognizant Unrolling of Optimization Algorithms” has been accepted by JSTSP. The paper proposes a novel physics-driven deep learning method based on algorithm unrolling for medical imaging; available at [arXiv].

  • May 2020, working paper: our work (with Xinwei, Sairaj, Wotao, and Yang) entitled “FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data” has been made available online at [arXiv]. This paper discusses a number of issues with current federated learning algorithms, especially non-convergence and communication-efficiency issues when the data is heterogeneous across users; it also proposes a primal-dual based algorithm that attempts to resolve those issues. An interesting result is that we show a tradeoff between data heterogeneity and potential communication savings. A minimal sketch of the primal-dual idea is given below.
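
    A minimal primal-dual sketch in the spirit of FedPD (our own simplification with quadratic local losses so the local step has a closed form; the actual algorithm allows inexact local solvers and flexible communication): the consensus-constrained problem is min sum_i f_i(x_i) subject to x_i = x0 for all agents i.

        import numpy as np

        targets = np.array([0.0, 1.0, 5.0])      # f_i(x) = 0.5 * (x - targets[i])**2
        eta = 1.0                                # dual / proximal parameter
        x0 = 0.0                                 # server (consensus) variable
        lam = np.zeros(len(targets))             # one dual variable per agent

        for _ in range(50):
            # each agent minimizes f_i(x) + lam_i*(x - x0) + (eta/2)*(x - x0)**2;
            # with quadratic f_i this local problem has a closed-form solution
            x = (targets + eta * x0 - lam) / (1.0 + eta)
            lam += eta * (x - x0)                # local dual ascent
            x0 = np.mean(x + lam / eta)          # server aggregates
        print(x0)                                # reaches 2.0, the minimizer of
                                                 # the average loss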

  • Apr. 2020, paper accepted: our work (with Guoyong, Xiao) entitled “Spectrum Cartography via Coupled Block-Term Tensor Decompositions” has been accepted by TSP; available at [arXiv].

  • Apr. 2020, paper accepted: our work (with Kexin, Xiao, et al.) entitled “Multi-user Adaptive Video Delivery over Wireless Networks: A Physical Layer Resource-Aware Deep Reinforcement Learning Approach” has been accepted by TCSVT; available at [arXiv].

  • Feb. 2020, paper accepted: our work (with Songtao, Ioannis, and Yongxin) entitled “Hybrid Block Successive Approximation for One-Sided Non-Convex Min-Max Problems: Algorithms and Applications” has been accepted by TSP; available at [arXiv].

  • Jan. 2020, paper accepted: our survey paper (with Tsung-Hui, Hoi-To, Xinwei, and Songtao) entitled “Distributed Learning in the Non-Convex World: From Batch to Streaming Data, and Beyond” has been accepted by IEEE SPM; available here.
