LQRSP – Sparsity-Promoting Linear Quadratic Regulator

 

Fu Lin, Makan Fardad, and Mihailo R. Jovanovic

January 2012

Matlab Files
Download all Matlab files

Presentation
2012 MIT LIDS Talk

Papers
2013 IEEE TAC Paper
2012 ACC Paper
2011 ACC Paper

Introduction

lqrsp.m is a Matlab function for the design of sparse and block sparse state-feedback gains that minimize the variance amplification (i.e., the \mathcal{H}_2 norm) of distributed systems. Our long-term objective is to develop a toolbox for sparse feedback synthesis. This will allow users to identify control configurations that strike a balance between the performance and the sparsity of the controller, and to examine the influence of different control configurations on the performance of distributed systems.

The design procedure consists of two steps:

  1. Identification of sparsity patterns by incorporating sparsity-promoting penalty functions into the optimal control problem.

  2. Optimization of feedback gains subject to structural constraints determined by the identified sparsity patterns.

Problem formulation

Consider the following control problem

    \begin{array}{rrcl}
    \textbf{state equation:} & \dot{x} & = & A\,x \,+\, B_1\,d \,+\, B_2\,u \\[0.25cm]
    \textbf{performance output:} & z & = & \left[\begin{array}{c} Q^{1/2} \\ 0 \end{array}\right] x \,+\, \left[\begin{array}{c} 0 \\ R^{1/2} \end{array}\right] u \\[0.5cm]
    \textbf{control input:} & u & = & -\,F\,x
    \end{array}


where d is the exogenous input signal and z is the performance output. The matrix F is the state-feedback gain, Q = Q^T \geq 0 and R = R^T > 0 are the state and control performance weights, and the closed-loop system is given by

    \begin{array}{rcl}
    \dot{x} & = & (A \,-\, B_2 F)\,x \,+\, B_1\,d \\[0.25cm]
    z & = & \left[\begin{array}{c} Q^{1/2} \\ -\,R^{1/2} F \end{array}\right] x.
    \end{array}


We study the sparsity-promoting optimal control problem

    \mathrm{minimize} \;\; J(F) \,+\, \gamma\,\mathbf{card}(F) \hspace{5cm} (\mathrm{SP})


in which the closed-loop \mathcal{H}_2 norm J is augmented by the function \mathbf{card}(\cdot) that counts the number of nonzero elements of the feedback gain matrix F. For example,

    F \;=\; \left[\begin{array}{cccc} 5 & 0 & 0 & 0 \\ 0 & 3.2 & 1.6 & 0 \\ 0 & -4 & 0 & 1 \end{array}\right] ~~~\Rightarrow~~~ \mathbf{card}(F) \;=\; 5.
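
A quick numerical check of this count (a Python sketch for illustration; the package itself is written in Matlab):

```python
import numpy as np

# Feedback gain from the example above.
F = np.array([[5.0, 0.0, 0.0, 0.0],
              [0.0, 3.2, 1.6, 0.0],
              [0.0, -4.0, 0.0, 1.0]])

# card(F) counts the nonzero entries of F.
card_F = np.count_nonzero(F)
print(card_F)  # 5
```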


As the parameter \gamma varies over [0, +\infty), the solution traces the optimal trade-off path between quadratic performance and feedback-gain sparsity.

    \textbf{parameterized family of feedback gains:} \quad F(\gamma) \;:=\; \arg\min_F \left\{ J(F) \;+\; \gamma\,\mathbf{card}(F) \right\}

Increased emphasis on sparsity yields sparser control architectures at the expense of degraded closed-loop performance.
For \gamma = 0, the optimal controller

    F(0) \;:=\; F_c \;=\; R^{-1} B_2^T P

is obtained from the positive definite solution P of the algebraic Riccati equation

    A^T P \;+\; P\,A \;-\; P\,B_2\,R^{-1} B_2^T P \;+\; Q \;=\; 0.
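
The centralized gain F_c can be computed with any Riccati solver; here is a Python sketch using SciPy on a hypothetical two-state, one-input system (the matrices below are illustrative, not from the package):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2-state, 1-input system (for illustration only).
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
B2 = np.array([[0.0],
               [1.0]])
Q = np.eye(2)
R = np.eye(1)

# Positive definite solution of the algebraic Riccati equation.
P = solve_continuous_are(A, B2, Q, R)

# Centralized LQR gain F_c = R^{-1} B2^T P; typically fully populated.
Fc = np.linalg.solve(R, B2.T @ P)

# Riccati residual A^T P + P A - P B2 R^{-1} B2^T P + Q should vanish.
residual = A.T @ P + P @ A - P @ B2 @ np.linalg.solve(R, B2.T @ P) + Q
print(np.linalg.norm(residual))
```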


Control architectures for \gamma > 0 depend on the interconnections in the distributed plant and on the state and control performance weights.

After a desired level of sparsity is achieved, we fix the sparsity structure \mathcal{S} and find the optimal structured feedback gain F.

    \textbf{structured } \mathcal{H}_2 \textbf{ problem:} ~~ \left\{ \begin{array}{ll} \mathrm{minimize} & J(F) \\[0.2cm] \mathrm{subject~to} & F \in \mathcal{S} \end{array} \right. \hspace{5cm} (\mathrm{SH2})

Relaxations of the cardinality function

In addition to the cardinality function, we consider several sparsity-promoting relaxations, including

    \begin{array}{rrcl}
    \ell_1 \textbf{ norm:} & g_1(F) & = & \displaystyle\sum_{i,j} |F_{ij}| \\[0.5cm]
    \textbf{weighted } \ell_1 \textbf{ norm:} & g_2(F) & = & \displaystyle\sum_{i,j} W_{ij}\,|F_{ij}| \\[0.5cm]
    \textbf{sum-of-logs:} & g_3(F) & = & \displaystyle\sum_{i,j} \log\left(1 \,+\, \frac{|F_{ij}|}{\varepsilon}\right), ~~ 0 < \varepsilon \ll 1.
    \end{array}
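
The three penalties are straightforward to evaluate; a Python sketch (the weight matrix W and the value of epsilon are illustrative):

```python
import numpy as np

def g1(F):
    """l1 norm of the matrix: sum of absolute values of all entries."""
    return np.abs(F).sum()

def g2(F, W):
    """Weighted l1 norm with elementwise nonnegative weights W."""
    return (W * np.abs(F)).sum()

def g3(F, eps=0.1):
    """Sum-of-logs relaxation with 0 < eps << 1."""
    return np.log1p(np.abs(F) / eps).sum()

F = np.array([[1.0, 0.0],
              [0.0, -2.0]])
print(g1(F))                   # 3.0
print(g2(F, np.ones_like(F)))  # unit weights reduce g2 to g1: 3.0
```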

To illustrate the sparsity-promoting properties of these relaxations, consider the problem of finding the sparsest feedback gain that provides a given level of performance \sigma > 0. Convex (\ell_1 and weighted \ell_1 norm) and non-convex (sum-of-logs) relaxations of the cardinality function lead to

    \begin{array}{ccc}
    \begin{array}{c} \textbf{original problem:} \\[0.25cm] \begin{array}{ll} \mathrm{minimize} & \mathbf{card}(F) \\[0.35cm] \mathrm{subject~to} & J(F) \,\leq\, \sigma \end{array} \end{array}
    & ~~\Rightarrow~~ &
    \begin{array}{c} \textbf{relaxation:} \\[0.25cm] \begin{array}{ll} \mathrm{minimize} & g(F) \\[0.35cm] \mathrm{subject~to} & J(F) \,\leq\, \sigma \end{array} \end{array}
    \end{array}


The solution of the relaxed problem lies at the point where the smallest sub-level set of g touches the constraint set \mathcal{C} = \{ F \mid J(F) \leq \sigma \}.


Alternating direction method of multipliers (ADMM)

ADMM is a simple but powerful algorithm well suited to large optimization problems. It blends the separability of dual decomposition with the superior convergence properties of the method of multipliers. In the sparsity-promoting linear quadratic regulator problem, we use ADMM as a general-purpose algorithm that

  1. identifies sparsity patterns;

  2. provides a good initial condition for the structured design.

The algorithm consists of four steps:

  • Step 1: introduce an additional variable and constraint

    \begin{array}{ll} \mathrm{minimize} & J(F) \;+\; \gamma\,g(G) \\[0.25cm] \mathrm{subject~to} & F \;-\; G \;=\; 0 \end{array}
  • Step 2: introduce the augmented Lagrangian

    \mathcal{L}_\rho(F, G, \Lambda) \;=\; J(F) \;+\; \gamma\,g(G) \;+\; \mathrm{trace}\left(\Lambda^T (F \,-\, G)\right) \;+\; \frac{\rho}{2}\,\|F \,-\, G\|_F^2
  • Step 3: use ADMM for the augmented Lagrangian minimization

    \begin{array}{rlcl}
    F\textbf{-minimization problem:} & F^{k+1} & := & \arg\min_F \; \mathcal{L}_\rho(F, G^k, \Lambda^k) \\[0.35cm]
    G\textbf{-minimization problem:} & G^{k+1} & := & \arg\min_G \; \mathcal{L}_\rho(F^{k+1}, G, \Lambda^k) \\[0.35cm]
    \Lambda\textbf{-update step:} & \Lambda^{k+1} & := & \Lambda^k \;+\; \rho\left(F^{k+1} \,-\, G^{k+1}\right)
    \end{array}
  • Step 4: polishing - solve the structured {cal H}_2 problem for the identified sparsity pattern.

    \textbf{structured } \mathcal{H}_2 \textbf{ problem:} ~~ \left\{ \begin{array}{ll} \mathrm{minimize} & J(F) \\[0.2cm] \mathrm{subject~to} & F \in \mathcal{S} \end{array} \right.
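
The three ADMM update steps can be made concrete on a toy problem. The sketch below (in Python rather than Matlab) replaces the closed-loop \mathcal{H}_2 norm with a simple quadratic J(F) = (1/2)||F - D||_F^2, for which the F-step has a closed form, and uses the \ell_1 penalty, for which the G-step is soft-thresholding; all names are illustrative and not part of lqrsp.m:

```python
import numpy as np

def shrink(V, kappa):
    """Soft-thresholding: elementwise minimizer of kappa*|g| + 0.5*(g - v)^2."""
    return np.sign(V) * np.maximum(np.abs(V) - kappa, 0.0)

def admm_l1(D, gamma, rho=1.0, iters=200):
    """ADMM sketch for: minimize 0.5*||F - D||_F^2 + gamma*||G||_1, s.t. F = G.

    The quadratic term is a toy stand-in for J(F); the F-min / G-min /
    Lambda-update structure mirrors Step 3 above."""
    F = np.zeros_like(D)
    G = np.zeros_like(D)
    Lam = np.zeros_like(D)
    for _ in range(iters):
        # F-minimization: stationarity (F - D) + Lam + rho*(F - G) = 0.
        F = (D + rho * G - Lam) / (1.0 + rho)
        # G-minimization: soft-thresholding of V = F + Lam/rho.
        G = shrink(F + Lam / rho, gamma / rho)
        # Dual update.
        Lam = Lam + rho * (F - G)
    return G

D = np.array([[3.0, 0.2],
              [-0.1, -2.0]])
print(admm_l1(D, gamma=0.5))  # approaches shrink(D, 0.5)
```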

Sketch of the algorithm

Solution to the G-minimization problem

Since all of the above-mentioned sparsity-promoting penalty functions can be written as a sum of componentwise functions, the G-minimization problem decomposes into sub-problems expressed in terms of the individual elements of G,

    \mathrm{minimize} ~~ \gamma\,g(G_{ij}) \;+\; \frac{\rho}{2}\,(G_{ij} \,-\, V_{ij})^2


where V = (1/\rho)\,\Lambda^k + F^{k+1}. This separability of the G-minimization problem facilitates the development of element-wise analytical solutions.


Left panel:
solution of the G-minimization sub-problem for the weighted \ell_1 norm

    \mathrm{minimize} ~~ \gamma\,W_{ij}\,|G_{ij}| \;+\; \frac{\rho}{2}\,(G_{ij} \,-\, V_{ij})^2.

Right panel:
solution of the G-minimization sub-problem for the cardinality function

    \mathrm{minimize} ~~ \gamma\,\mathbf{card}(G_{ij}) \;+\; \frac{\rho}{2}\,(G_{ij} \,-\, V_{ij})^2.
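
Both scalar sub-problems have closed-form solutions: the weighted \ell_1 sub-problem gives the shrinkage (soft-thresholding) operator with threshold \gamma W_{ij}/\rho, and the cardinality sub-problem gives the truncation (hard-thresholding) operator with threshold \sqrt{2\gamma/\rho}. A Python sketch:

```python
import numpy as np

def soft_threshold(V, gamma, W, rho):
    """Weighted-l1 sub-problem: shrinkage with elementwise threshold gamma*W/rho."""
    t = gamma * W / rho
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def hard_threshold(V, gamma, rho):
    """Cardinality sub-problem: keep V_ij when (rho/2)*V_ij^2 > gamma,
    i.e. |V_ij| > sqrt(2*gamma/rho); otherwise truncate to zero."""
    return np.where(np.abs(V) > np.sqrt(2.0 * gamma / rho), V, 0.0)

V = np.array([-2.0, -0.3, 0.1, 1.5])
print(soft_threshold(V, gamma=1.0, W=1.0, rho=2.0))  # threshold 0.5
print(hard_threshold(V, gamma=1.0, rho=2.0))         # threshold 1.0
```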

For the sum-of-logs function, the solution of the G-minimization sub-problem depends on the parameters \{\gamma, \rho, \varepsilon\}. For example, the solution of

    \mathrm{minimize} ~~ \gamma\,\log\left(1 \,+\, \frac{|G_{ij}|}{\varepsilon}\right) \;+\; \frac{\rho}{2}\,\left(G_{ij} \,-\, V_{ij}\right)^2


with \{\rho = 100, \varepsilon = 0.1\} resembles the shrinkage operator for small values of \gamma (e.g., \gamma = 0.1) and the truncation operator for large values of \gamma (e.g., \gamma = 10).


Solution to the F-minimization problem

The necessary conditions for optimality of the F-minimization problem are given by three coupled nonlinear matrix equations in the unknown matrices F, L, and P:

    \begin{array}{rcl}
    (A \,-\, B_2 F)^T P \;+\; P\,(A \,-\, B_2 F) & = & -\left(Q \,+\, F^T R\,F\right) \\[0.25cm]
    (A \,-\, B_2 F)\,L \;+\; L\,(A \,-\, B_2 F)^T & = & -\,B_1 B_1^T \\[0.25cm]
    2\left(R\,F \,-\, B_2^T P\right) L \;+\; \rho\,F & = & \rho\,G^k \;-\; \Lambda^k.
    \end{array}

For a fixed F, the first two equations are Lyapunov equations for the closed-loop controllability and observability Gramians L and P, respectively; for fixed L and P, the third equation is a Sylvester equation for F. This structure of the necessary conditions motivates the following iterative scheme:

  1. With F fixed, solve the two Lyapunov equations for L and P.

  2. With L and P fixed, solve the Sylvester equation for an updated feedback gain.

  3. Repeat until convergence.

We have shown that the difference between the feedback gains at two consecutive steps provides a descent direction. This observation, in conjunction with a standard line search, allows us to establish convergence of the iterative scheme.

Polishing

The necessary conditions for the optimality of the structured linear quadratic regulator problem are given by

    \begin{array}{rcl}
    (A \,-\, B_2 F)^T P \;+\; P\,(A \,-\, B_2 F) & = & -\left(Q \,+\, F^T R\,F\right) \\[0.25cm]
    (A \,-\, B_2 F)\,L \;+\; L\,(A \,-\, B_2 F)^T & = & -\,B_1 B_1^T \\[0.25cm]
    \left[\left(R\,F \,-\, B_2^T P\right) L\right] \circ I_{\mathcal{S}} & = & 0,
    \end{array}


where \circ denotes entry-wise multiplication of two matrices and I_{\mathcal{S}} is the structural identity. For example,

    F \;=\; \left[\begin{array}{cccc} * & * & & \\ * & * & * & \\ & * & * & * \\ & & * & * \end{array}\right] ~~\Rightarrow~~ I_{\mathcal{S}} \;=\; \left[\begin{array}{cccc} 1 & 1 & & \\ 1 & 1 & 1 & \\ & 1 & 1 & 1 \\ & & 1 & 1 \end{array}\right].
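
The structural identity is simply the indicator of the sparsity pattern; a Python sketch on a hypothetical gain with the pattern above (asterisks replaced by arbitrary nonzero values):

```python
import numpy as np

# Hypothetical structured gain with the nonzero pattern sketched above.
F = np.array([[1.0, 2.0, 0.0, 0.0],
              [0.5, 1.0, 3.0, 0.0],
              [0.0, 2.0, 1.0, 4.0],
              [0.0, 0.0, 1.0, 2.0]])

# Structural identity: ones on the allowed pattern, zeros elsewhere.
I_S = (F != 0).astype(float)
print(I_S.astype(int))
```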


We have used Newton's method, employing a conjugate gradient scheme to compute the Newton direction efficiently.

Extension: Block sparsity

In many problems it is of interest to design feedback gains that have a block sparse structure. In view of this, we have also considered the optimal control problem that promotes sparsity of the feedback gain at the level of submatrices rather than individual elements.

The function that counts the number of nonzero submatrices is given by

    \mathbf{card}_{\mathrm{b}}(F) \;=\; \sum_{i,j} \mathbf{card}\left(\|F_{ij}\|_F\right)

where F_{ij} \in \mathbb{R}^{n_i \times m_j} and \|\cdot\|_F is the Frobenius norm. We have also examined the following relaxations:

    \begin{array}{rrcl}
    \textbf{sum of Frobenius norms:} & g_4(F) & = & \displaystyle\sum_{i,j} \|F_{ij}\|_F \\[0.5cm]
    \textbf{weighted sum of Frobenius norms:} & g_5(F) & = & \displaystyle\sum_{i,j} W_{ij}\,\|F_{ij}\|_F \\[0.5cm]
    \textbf{generalized sum-of-logs:} & g_6(F) & = & \displaystyle\sum_{i,j} \log\left(1 \,+\, \frac{\|F_{ij}\|_F}{\varepsilon}\right), ~~ 0 < \varepsilon \ll 1.
    \end{array}

An example of a block sparse feedback gain with submatrices of size 2 × 4.
Additional information is provided in the block sparsity example.
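
The block penalties above can be evaluated by iterating over the submatrices of F; a Python sketch with hypothetical block sizes (the partitioning and matrix are illustrative):

```python
import numpy as np

def blocks(F, bn, bm):
    """Yield the bn-by-bm submatrices of F (block sizes must divide F's shape)."""
    n, m = F.shape
    for i in range(0, n, bn):
        for j in range(0, m, bm):
            yield F[i:i + bn, j:j + bm]

def card_b(F, bn, bm):
    """Block cardinality: number of submatrices with nonzero Frobenius norm."""
    return sum(1 for B in blocks(F, bn, bm) if np.linalg.norm(B) > 0)

def g4(F, bn, bm):
    """Sum of Frobenius norms of the submatrices."""
    return sum(np.linalg.norm(B) for B in blocks(F, bn, bm))

# One nonzero 2x4 block out of two.
F = np.zeros((2, 8))
F[:, 4:8] = 1.0
print(card_b(F, 2, 4))  # 1
```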

Description of lqrsp.m

Matlab Syntax
>> solpath = lqrsp(A,B1,B2,Q,R,options);

Our Matlab function lqrsp.m takes the matrices

         { A, B_1, B_2, Q, R }

and the input argument options, and returns the \gamma-parameterized family of solutions to both the sparsity-promoting optimal control problem (SP) and the structured \mathcal{H}_2 problem (SH2). The options input allows users to specify the following parameters:

  1. sparsity-promoting penalty function: \mathbf{card}, \mathbf{card}_{\mathrm{b}}, or one of the relaxations g_i, i = 1, \ldots, 6;

  2. values of \gamma and \rho;

  3. maximum number of ADMM iterations;

  4. maximum number of reweighted iterations (if the weighted \ell_1 norm or the weighted sum of Frobenius norms is used);

  5. block size [n_i, m_j] of the submatrices F_{ij} \in \mathbb{R}^{n_i \times m_j} (if block sparsity is desired).

The \gamma-parameterized output solpath is a structure that contains

  1. solpath.F – feedback gains resulting from the control problem (SP);

  2. solpath.Fopt – feedback gains resulting from the structured control problem (SH2);

  3. solpath.J – quadratic performance of the solutions to (SP);

  4. solpath.Jopt – optimal quadratic performance of the solutions to (SH2);

  5. solpath.nnz – number of nonzero elements (blocks) of optimal sparse (block sparse) feedback gains;

  6. solpath.gam – values of sparsity-promoting parameter gamma.

>> help lqrsp
  Sparsity-Promoting Linear Quadratic Regulator

  Written by Fu Lin, January 2012

  Description:

  The sparsity-promoting linear quadratic regulator problem is solved using
  the alternating direction method of multipliers (ADMM)

        minimize J(F) + gamma * g(F)             (SP)

  where J is the closed-loop H2 norm, g is a sparsity-promoting penalty
  function, and gamma is the parameter that emphasizes the importance of
  sparsity.

  After the sparsity pattern has been identified, the structured H2 problem
  is solved

        minimize    J(F)
                                                                 (SH2)
        subject to  F belongs to identified sparsity pattern

  Syntax:

  solpath = lqrsp(A,B1,B2,Q,R,options)

  Inputs:

  (I)  State-space representation matrices {A,B1,B2,Q,R};

  (II) Options

      a) options.method specifies the sparsity-promoting penalty function g

         options.method = 'card'     --> cardinality function,
                        = 'l1'       --> l1 norm,
                        = 'wl1'      --> weighted l1 norm,
                        = 'slog'     --> sum-of-logs function,
                        = 'blkcard'  --> block cardinality function,
                        = 'blkl1'    --> sum of Frobenius norms,
                        = 'blkwl1'   --> weighted sum of Frobenius norms,
                        = 'blkslog'  --> block sum-of-logs function;

      b) options.gamval         --> range of gamma values;
      c) options.rho            --> augmented Lagrangian parameter;
      d) options.maxiter        --> maximum number of ADMM iterations;
      e) options.blksize        --> size of block sub-matrices of F;
      f) options.reweightedIter --> number of iterations for the reweighted
                                    update scheme.

  The default values of these fields are

  options.method         = 'wl1';
  options.gamval         = logspace(-4,1,50);
  options.rho            = 100;
  options.maxiter        = 1000;
  options.blksize        = [1 1];
  options.reweightedIter = 3.

  Output:

  solpath -- the solution path parameterized by gamma -- is a structure
  that contains:

  (1) solpath.F    --> feedback gains resulting from the control problem
                       (SP);
  (2) solpath.Fopt --> feedback gains resulting from the structured control
                       problem (SH2);
  (3) solpath.J    --> quadratic performance of the solutions to (SP);
  (4) solpath.Jopt --> optimal quadratic performance of the solutions to
                       (SH2);
  (5) solpath.nnz  --> number of nonzero elements (blocks) of optimal
                       sparse (block sparse) feedback gains;
  (6) solpath.gam  --> values of sparsity-promoting parameter gamma.

  Reference:

  F. Lin, M. Fardad, and M. R. Jovanovic
  ``Design of optimal sparse feedback gains via the alternating direction method of multipliers'' 
  IEEE Trans. Automat. Control, vol. 58, no. 9, pp. 2426-2431, September 2013.

  Additional information:

  http://www.umn.edu/~mihailo/software/lqrsp/


Acknowledgements

We would like to thank Stephen Boyd for encouraging us to develop this software.

This project is supported in part by the National Science Foundation under

  • CAREER Award CMMI-0644793

  • Award CMMI-0927509

  • Award CMMI-0927720.