Performance evaluation

Erdos-Renyi networks

We solve problem (P1) for growing connected resistive Erdos-Renyi networks with edge probability 1.05 log(n)/n. We choose gamma = 0.8 gamma_{max}, where gamma_{max} is the value of the regularization parameter gamma at which all edge weights in the controller graph become zero. The duality gap eta and the absolute value of the dual residual r_d are used as stopping criteria; we set the tolerances for eta and r_d to 10^{-6} and 10^{-3}, respectively.
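As an illustration of this setup, a connected Erdos-Renyi sample with edge probability 1.05 log(n)/n can be generated as follows. This is a Python/NumPy sketch; the function name and the resample-until-connected loop are our own illustration, not part of the original Matlab implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def erdos_renyi_connected(n, seed=0):
    """Sample an Erdos-Renyi adjacency matrix with edge probability
    1.05*log(n)/n, resampling until the graph is connected
    (illustrative helper, not the authors' code)."""
    rng = np.random.default_rng(seed)
    p = 1.05 * np.log(n) / n
    while True:
        # Bernoulli trials on the strict upper triangle, then symmetrize
        U = rng.random((n, n)) < p
        A = np.triu(U, 1)
        A = A + A.T
        ncomp, _ = connected_components(csr_matrix(A), directed=False)
        if ncomp == 1:
            return A.astype(float)

A = erdos_renyi_connected(50)
```

At this edge probability the graph is connected with reasonably high probability, so the resampling loop terminates after a few draws.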

Customized IP method vs CVX

Here, we compare the performance of our customized primal-dual interior-point algorithms with CVX. The direct algorithm uses Cholesky factorization to compute the search direction, and the indirect algorithm uses the matrix-free preconditioned conjugate gradients (PCG) method with a diagonal preconditioner. We set R = I and choose the state weight Q = I - (1/n) mathbf{1} mathbf{1}^T, which penalizes the mean-square deviation from the network average.
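For reference, the projection matrix Q and a diagonally preconditioned CG solve can be sketched as follows in Python/SciPy. The shifted system below is only a stand-in for the actual Newton system of the algorithm, which is applied matrix-free; everything else here is an assumption made for illustration.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 200
# State weight penalizing mean-square deviation from the network average
Q = np.eye(n) - np.ones((n, n)) / n

# Illustrative SPD system standing in for the Newton/KKT system;
# the shift makes the coefficient matrix positive definite.
A = Q + 2.0 * np.eye(n)
b = np.ones(n) / np.sqrt(n)

# Diagonal (Jacobi) preconditioner applied as a LinearOperator
d = np.diag(A)
M = LinearOperator((n, n), matvec=lambda v: v / d)

x, info = cg(A, b, M=M)   # info == 0 signals convergence
```

Since Q annihilates the constant vector, A here has only two distinct eigenvalues and CG converges in a couple of iterations; the diagonal preconditioner matters more for the ill-conditioned systems that arise in the actual interior-point iterations.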

The figures below compare the scalability of the direct and indirect algorithms with CVX.

 

T_{cvx}, T_{chol}, and T_{pcg} vs n

Solve times in seconds for networks with a growing number of nodes. As the network size increases, the implementations based on Cholesky factorization and the PCG method significantly outperform CVX.

The solid lines in the following figures show the ratios of the running times of CVX and our customized algorithms versus the number of nodes. As indicated by the dashed red lines, the algorithms based on Cholesky factorization and the PCG method are, on average, about 35 and 188 times faster than CVX (for network sizes that CVX can handle).

 

T_{cvx}/T_{chol} vs n

 

T_{cvx}/T_{pcg} vs n

Influence of the regularization parameter on performance

The next figure illustrates the influence of the regularization parameter gamma on the performance of the matrix-free PCG algorithm for an Erdos-Renyi network with n = 200 nodes.

 

Duality gap vs cumulative number of PCG iterations.

The total number of PCG iterations decreases with gamma: from 367 for gamma = 0.01 gamma_{max} to 66 for gamma = 0.9 gamma_{max}. Thus, the indirect method computes sparser solutions more efficiently. Furthermore, each PCG iteration takes about 0.05 seconds. These results are consistent with the observations of Koh, Kim, and Boyd '07, who developed an efficient interior-point method for large-scale ell_1-regularized logistic regression problems.

IP method vs proximal algorithms

To compare the performance of the proximal gradient and proximal Newton methods and to illustrate the scalability of these algorithms, we solve the problem of growing Erdos-Renyi networks with different numbers of nodes.

The figures below compare the scalability of the primal-dual IP algorithm and the proximal gradient (proxBB) and proximal Newton (proxN) algorithms.
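The proximal gradient method is built around the soft-thresholding operator, and proxBB selects step sizes with a Barzilai-Borwein rule. The following is a minimal Python sketch of that idea on a generic ell_1-regularized least-squares toy problem, not the actual (P1) objective or the authors' implementation; the function names and the absence of line-search safeguards are our own simplifications.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_bb(grad_f, gamma, x0, iters=200):
    """Proximal gradient with a Barzilai-Borwein (BB1) step size --
    a minimal sketch of the proxBB idea, without safeguards."""
    x = x0.copy()
    g = grad_f(x)
    t = 1.0
    for _ in range(iters):
        x_new = soft_threshold(x - t * g, gamma * t)
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 0:                 # BB1 step; keep previous t otherwise
            t = (s @ s) / sy
        x, g = x_new, g_new
    return x

# Toy instance: minimize 0.5 * ||A x - b||^2 + gamma * ||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
gamma = 0.5
grad_f = lambda x: A.T @ (A @ x - b)
x = prox_grad_bb(grad_f, gamma, np.zeros(10))
```

On well-conditioned problems such as this toy instance, the BB step typically converges much faster than a fixed 1/L step, which is the behavior the proxBB timings below reflect.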

 

T_{IP}, T_{proxBB}, and T_{proxN} vs n

Solve times in seconds for networks with a growing number of nodes. As the network size increases, both the proximal gradient and proximal Newton algorithms significantly outperform the IP method.

The solid lines in the following figures show the ratios of the solve times of the IP method and the proximal algorithms versus the number of nodes. As indicated by the dashed lines, the proximal gradient and proximal Newton algorithms are, on average, about 12 and 14 times faster than the customized IP algorithm (for network sizes that the IP method can handle).

 

T_{proxBB}/T_{IP} vs n

 

T_{proxN}/T_{IP} vs n

Performance of different customized algorithms

The following table compares our customized algorithms in terms of speed and the number of iterations for the problem of growing connected resistive Erdos-Renyi networks with different numbers of nodes and gamma = 0.8 gamma_{max}. We set the tolerances for r_d and eta to 10^{-3} and 10^{-4}, respectively.

Number of nodes     n = 100               n = 200                n = 300
Number of edges     m = 4714              m = 19316              m = 43986
IP (Chol)           6.138 sec / 7 iter    247.396 sec / 8 iter   out of memory
IP (PCG)            0.344 sec / 7 iter    4.663 sec / 8 iter     16.499 sec / 8 iter
proxBB              0.062 sec / 10 iter   0.357 sec / 10 iter    1.279 sec / 11 iter
proxN               0.055 sec / 4 iter    0.291 sec / 4 iter     1.078 sec / 4 iter

We see that the implementation based on the PCG method is significantly more efficient than the one based on Cholesky factorization. Even though both compute the optimal solution in fewer than 10 interior-point (outer) iterations, the indirect algorithm is superior in terms of speed. Furthermore, for n = 300, the direct method runs out of memory, while the indirect method computes the optimal solution in about 16 seconds. Moreover, even for such small networks, proxN and proxBB are significantly faster than the IP (PCG) algorithm.

Larger networks

We next illustrate the performance of the customized algorithms on the problem of growing connected resistive Erdos-Renyi networks with edge probability 1.05 log(n)/n and gamma = 1.8. The results for larger networks are shown in the following table. In particular, for a network with n = 1300 nodes and m = 839487 edges in the controller graph, the IP method takes about four and a half hours to solve the problem, whereas proxN and proxBB converge in less than two and three minutes, respectively.

Number of nodes     n = 700                 n = 1000                 n = 1300
Number of edges     m = 242249              m = 495879               m = 839487
IP (PCG)            394.256 sec / 13 iter   1014.282 sec / 13 iter   16533.931 sec / 13 iter
proxBB              15.353 sec / 11 iter    55.944 sec / 13 iter     135.090 sec / 15 iter
proxN               11.992 sec / 4 iter     34.759 sec / 4 iter      95.859 sec / 5 iter

Additional computational experiments indicate that these two algorithms can handle even larger networks; for example, the problem of growing an Erdos-Renyi network with n = 1500 nodes and m = 1118541 edges in the controller graph can be solved in about 2 and 4 minutes by proxN and proxBB, respectively (with a Matlab implementation on a PC). Implementing these algorithms in a lower-level language (such as C) may further improve efficiency.

Alternative performance criterion

We next use our customized algorithms to add edges to an Erdos-Renyi network with n = 500 nodes and m = 123094 edges in the controller graph. The following figure shows the relative error in the objective function (on a logarithmic scale) versus the computation time, where x^star is the optimal vector of edge weights obtained with the primal-dual interior-point method. We set the tolerances for r_d and eta to 10^{-3} and 10^{-4}, respectively. Even for a problem of relatively small size, both proxN and proxBB converge much faster than the IP algorithm, and proxN outperforms proxBB.

Thus, a possible less sophisticated termination criterion is the relative change in the objective function value, (f(x^k) - f(x^star))/f(x^star) < epsilon. We have examined this criterion for our customized algorithms. Not only did we not encounter false termination, but the accuracy of the obtained solutions was also very high. However, for a fair comparison with the IP algorithm, we report the results using the generic stopping criteria in Remarks 1 and 5.
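Since x^star is unknown during an actual run, such a criterion is typically implemented with the relative change between successive objective values as a surrogate. The sketch below (the function and its toy usage are illustrative, not the actual customized algorithm) shows one way to wrap any iteration map with this stopping rule:

```python
def solve_with_relative_change_stop(step, f, x0, eps=1e-6, max_iter=1000):
    """Iterate x <- step(x) until the relative change in the objective
    drops below eps -- a practical surrogate for the criterion
    (f(x^k) - f(x*))/f(x*) < eps when x* is unknown."""
    x, f_prev = x0, f(x0)
    for k in range(1, max_iter + 1):
        x = step(x)
        f_curr = f(x)
        if abs(f_prev - f_curr) <= eps * abs(f_prev):
            return x, k
        f_prev = f_curr
    return x, max_iter

# Toy usage: gradient descent on f(x) = (x - 3)^2 + 1
x, iters = solve_with_relative_change_stop(
    lambda x: x - 0.1 * 2.0 * (x - 3.0),   # gradient step
    lambda x: (x - 3.0) ** 2 + 1.0,
    x0=0.0)
```

One caveat of this surrogate: a small relative change between consecutive iterates does not always imply a small distance to the optimum, which is why the reported results rely on the generic criteria instead.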

 

(f(x^k) - f(x^star))/f(x^star) vs computation time

Proximal gradient vs fast greedy algorithm

We next compare our proximal gradient algorithm with the fast greedy algorithm of Summers et al., 2015. We solve problem (P1) for Erdos-Renyi networks with different numbers of nodes (n = 5 to 500) and gamma = 0.4 gamma_{max}. After proxBB identifies the edges in the controller graph, we use the greedy method to select the same number of edges. Finally, we polish the identified edge weights for both methods.

 

Solve time vs n

Solve times in seconds versus the number of nodes. As the number of nodes increases, the proximal gradient algorithm significantly outperforms the fast greedy method.

 

Performance loss: (J - J_c)/J_c times 100% vs n

Relative to the optimal centralized controller, both methods yield similar performance degradation of the closed-loop network.