


CHAPTER 14 SPECIAL ALGORITHMS FOR CONVEX OPTIMIZATION
bounds versus iteration number are shown in figure 14.14. The worst-case relative
error is shown in figure 14.15. The ellipsoid E_k is shown at various iterations in
figure 14.16.
[Figure 14.13: plot of √(det A_k) against iteration k, for k = 0 to 60.]

Figure 14.13  The volume of the ellipsoid E_k decreases exponentially with
the iteration number k.
For the deep-cut algorithm we consider the inequality specifications

    φ_pk trk(x) ≤ 0.75,    φ_max sens(x) ≤ 1.5,    φ_rms yp(x) ≤ 0.05,

using the constraint function

    ψ(x) = max{ (φ_pk trk(x) − 0.75)/0.75,
                (φ_max sens(x) − 1.5)/1.5,
                (φ_rms yp(x) − 0.05)/0.05 }.                        (14.22)
The deep-cut ellipsoid algorithm finds a feasible point in 8 iterations. The execution
of the algorithm is traced in table 14.1 and figure 14.17. Figure 14.17 also shows
the set of feasible points, which coincides with the set where the constraint function (14.22) is nonpositive.
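As a minimal sketch of the constraint function (14.22), the following evaluates the maximum normalized constraint violation and reports which performance measure is most violated (the one the deep-cut algorithm cuts on). The three performance measures are taken here as given numbers; in the text they are computed from the candidate design.

```python
# Limits on phi_pk_trk, phi_max_sens, phi_rms_yp from the specifications.
LIMITS = (0.75, 1.5, 0.05)

def psi(phis, limits=LIMITS):
    """Return (psi(x), index of the most violated constraint) per (14.22)."""
    violations = [(p - lim) / lim for p, lim in zip(phis, limits)]
    worst = max(range(len(violations)), key=lambda i: violations[i])
    return violations[worst], worst
```

For example, the first row of table 14.1 has performance values (0.643, 2.853, 0.0372), which gives ψ ≈ 0.902 with φ_max sens as the most violated constraint, matching the "cut max sens" action recorded there.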
14.5
Example: LQG Weight Selection via Duality
In this section we demonstrate some of the methods described in this chapter on
the problem of weight selection for an LQG controller design. We consider the
multicriterion LQG problem formulated in section 12.2.1, and will use the notation




[Figure 14.14: plot of the upper bounds U_k and lower bounds L_k against
iteration k, for k = 0 to 60; the bounds converge between about 0.6 and 0.8.]

Figure 14.14  Upper and lower bounds on the solution, as a function of the
iteration number k, for the ellipsoid algorithm.
[Figure 14.15: semilogarithmic plot of (U_k − L_k)/L_k against iteration k,
decreasing from about 10^−1 to below 10^−4 over 60 iterations.]

Figure 14.15  For the ellipsoid algorithm the maximum relative error, defined
as (U_k − L_k)/L_k, falls below 0.01% by iteration 54.




[Figure 14.16: the ellipsoids E_1, E_7, E_17, and E_22 in the plane, with the
optimum x* marked.]

Figure 14.16  The initial ellipsoid, E_1, is a circle of radius 1.5 centered at
[0.5 0.5]^T. The ellipsoids at the 7th, 17th, and 22nd iterations are shown,
together with the optimum x*.
x
2
1
E
1:5
1
0:5
1
4
E
x
4
2
E
x
0
8
8
E
x
2
x
;0:5
;1
;1
;0:5
0
0:5
1
1:5
2
The initial ellipsoid, 1 is a circle of radius 1.5 centered at
Figure
14.17
E
1 = 0 5 0 5] . After eight iterations of the deep-cut ellipsoid algorithm
T
x
:
:
a feasible point 8 is found for the constraint function (14.22). The set of
x
feasible points for these specications is shaded. Note that each ellipsoid
contains the entire feasible set. See also table 14.1.
 k   x_k                ψ(x_k)   φ_pk trk  φ_max sens  φ_rms yp   ≤0.75?  ≤1.5?  ≤0.05?  action
 1   ( 0.500,  0.500)    0.902    0.643     2.853       0.0372     yes     no     yes     cut max sens
 2   ( 0.524,  0.256)    0.160    0.858     1.739       0.0543     no      no     no      cut max sens
 3   (−0.160, −0.050)    0.051    0.692     1.489       0.0526     yes     yes    no      cut rms yp
 4   ( 0.344,  0.274)    0.384    0.661     2.075       0.0422     yes     no     yes     cut max sens
 5   ( 0.169,  0.021)    0.009    0.719     1.514       0.0498     yes     no     yes     cut max sens
 6   ( 0.129, −0.051)    0.043    0.692     1.488       0.0522     yes     yes    no      cut rms yp
 7   ( 0.236,  0.122)    0.041    0.693     1.561       0.0466     yes     no     yes     cut max sens
 8   ( 0.206,  0.065)   −0.001    0.708     1.498       0.0483     yes     yes    yes     done

Table 14.1  The steps performed by the deep-cut ellipsoid algorithm for
the constraint function (14.22). At each iteration a deep cut was made using
the function that had the largest normalized constraint violation. See also
figure 14.17.
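Each "cut" step in the table applies the standard deep-cut ellipsoid update. As a sketch (not the book's code), for the ellipsoid E = {z : (z − x)^T A^{-1} (z − x) ≤ 1} and the kept halfspace {z : g^T (z − x) + h ≤ 0}, with cut depth α = h/√(g^T A g), the update is:

```python
import numpy as np

def deep_cut(x, A, g, h):
    """One deep-cut ellipsoid step (dimension n >= 2).
    Keeps E ∩ {z: g^T (z - x) + h <= 0}; requires -1/n <= alpha < 1.
    alpha = 0 recovers the ordinary center cut."""
    n = len(x)
    gAg = g @ A @ g
    alpha = h / np.sqrt(gAg)              # depth of the cut
    gt = g / np.sqrt(gAg)                 # normalized cut direction
    tau = (1 + n * alpha) / (n + 1)
    delta = n**2 * (1 - alpha**2) / (n**2 - 1)
    sigma = 2 * (1 + n * alpha) / ((n + 1) * (1 + alpha))
    Ag = A @ gt
    x_new = x - tau * Ag                  # center moves away from the cut
    A_new = delta * (A - sigma * np.outer(Ag, Ag))
    return x_new, A_new
```

A deeper cut (larger α) shrinks the ellipsoid volume faster than a center cut, which is why the deep-cut variant in the table needs only 8 iterations.
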
defined there. The specifications are realizability and limits on the RMS values of
the components of z, i.e.,

    RMS(z_1) ≤ √a_1,  ...,  RMS(z_L) ≤ √a_L                        (14.23)

(with the particular power spectral density matrix for w described in section 12.2.1).
Each z_i is either an actuator signal or some linear combination of the system state.
A will denote the set of a ∈ R^L_+ that corresponds to achievable specifications of the
form (14.23).
We will describe an algorithm that solves the feasibility problem for this family
of specifications, i.e., determines whether or not a ∈ A, and if so, finds a controller
that achieves the given specifications. This problem can usually be solved by a
skilled designer using ad hoc weight adjustment and LQG design (see section 3.7.2),
but we will describe an organized algorithm that cannot fail.

By the convex duality principle (see section 6.6; the technical condition holds in
this case), we know that

    a ∈ A  ⟺  there is no λ ≥ 0 with Φ(λ) > a^T λ,

where Φ(λ) denotes the smallest LQG cost achievable using the weights λ.




Since this condition on λ is not affected by positive scaling of λ, we may assume that
the sum of the weights is one. We can thus pose our problem in terms of the following
convex optimization problem:

    minimize  a^T λ − Φ(λ)
    subject to  λ ≥ 0,  λ_1 + ··· + λ_L = 1.                        (14.24)
Let λ* denote a minimizer (which in fact is unique) for the problem (14.24). Then
we have:

    a ∈ A  ⟺  a^T λ* − Φ(λ*) ≥ 0,                                  (14.25)

and moreover if a^T λ* − Φ(λ*) ≥ 0, then the LQG design that corresponds to the
weights λ* achieves the specification (see section 12.2.1). So finding λ* and the
corresponding LQG design will completely solve our problem.
The equivalence (14.25) can be given a simple interpretation. a^T λ is the LQG
cost, using the weights λ, that a design corresponding to a would achieve, if it were
feasible. Φ(λ) is simply the smallest achievable LQG cost, using weights λ. It follows
that if a^T λ < Φ(λ), then the specification corresponding to a is not achievable,
since it would beat the optimal LQG design. Thus, we can interpret (14.24) as the
problem of finding the LQG weights λ for which the LQG cost of a design that just
meets the specification a compares most favorably with the LQG-optimal cost.
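This infeasibility test can be illustrated on a hypothetical stand-in for Φ. The real Φ comes from solving an LQG problem; here we instead assume a toy two-objective tradeoff with achievable set {φ > 0 : φ_1 φ_2 ≥ 1}, for which the smallest weighted cost works out to Φ(λ) = 2√(λ_1 λ_2). A crude grid search over the weight simplex then estimates the optimal value of (14.24), whose sign decides feasibility.

```python
import math

def Phi(lam):
    # Hypothetical stand-in: smallest cost lam1*p1 + lam2*p2 over the toy
    # achievable set {p > 0 : p1*p2 >= 1}, which equals 2*sqrt(lam1*lam2).
    return 2.0 * math.sqrt(lam[0] * lam[1])

def psi_star(a, steps=10_000):
    """Grid-search estimate of min a^T lam - Phi(lam) over the simplex (L = 2)."""
    return min(a[0] * t + a[1] * (1 - t) - Phi((t, 1 - t))
               for t in (i / steps for i in range(steps + 1)))
```

With a = (2, 1) the product a_1 a_2 = 2 ≥ 1, so a is achievable and the estimated optimal value is nonnegative; with a = (0.5, 1) the minimum is negative, certifying that a is not achievable, exactly as (14.25) predicts.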
We can evaluate a subgradient of a^T λ − Φ(λ) (in fact, it is differentiable) using
the formula given in section 13.4.8:

    a − [ φ_1(H_lqg)  ···  φ_L(H_lqg) ]^T  ∈  ∂( a^T λ − Φ(λ) ),   (14.26)

where H_lqg is the LQG-optimal design for the weights λ. We can therefore
solve (14.24) using any of the algorithms described in this chapter. We will give
some of the details for the ellipsoid algorithm.
We can handle the equality constraint λ_1 + ··· + λ_L = 1 by letting

    x = [ λ_1  ···  λ_{L−1} ]^T

be our optimization variables, and setting

    λ_L = 1 − λ_1 − ··· − λ_{L−1}.

The inequality constraint on x is then

    max{ −x_1, ..., −x_{L−1},  x_1 + ··· + x_{L−1} − 1 } ≤ 0.      (14.27)
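The change of variables and the constraint (14.27) can be sketched directly; the max in (14.27) is nonpositive exactly when every component of the recovered weight vector λ(x) is nonnegative (the sum-to-one condition holds by construction).

```python
def lam_from_x(x):
    """Recover the full weight vector: lambda_L = 1 - lambda_1 - ... - lambda_{L-1}."""
    return list(x) + [1.0 - sum(x)]

def constraint_1427(x):
    """Left-hand side of (14.27); nonpositive iff lam_from_x(x) >= 0 componentwise."""
    return max([-xi for xi in x] + [sum(x) - 1.0])
```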





A subgradient g ∈ R^{L−1} for the objective is readily derived from (14.26):

    g = [ a_1 − φ_1(H_lqg) − a_L + φ_L(H_lqg)
          ···
          a_{L−1} − φ_{L−1}(H_lqg) − a_L + φ_L(H_lqg) ],

where H_lqg is optimal for the current weights.
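This reduction is just the chain rule applied to the substitution λ_L = 1 − λ_1 − ··· − λ_{L−1}: each reduced component is the full-space component a_i − φ_i minus the last component a_L − φ_L. A sketch, with the values φ_i(H_lqg) supplied as plain numbers:

```python
def reduced_subgradient(a, phi):
    """g_i = (a_i - phi_i) - (a_L - phi_L), i = 1, ..., L-1,
    obtained by substituting lambda_L = 1 - sum(lambda_i) into (14.26)."""
    r = [ai - pi for ai, pi in zip(a, phi)]   # full-space gradient a - phi
    return [ri - r[-1] for ri in r[:-1]]
```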
As initial ellipsoid we take the ball of radius one centered at the weight vector

    x^(1) = [ 1/L  ···  1/L ]^T.

Since this ball contains the feasible set given by (14.27), we are guaranteed that
every feasible minimizer of (14.24) is inside our initial ellipsoid.
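The containment claim can be checked numerically: the feasible set of (14.27) is the convex hull of the origin and the unit vectors e_i in R^{L−1}, so it lies in the unit ball around x^(1) as soon as every vertex does. A small verification sketch:

```python
def simplex_in_unit_ball(L):
    """Check that every vertex of the feasible set (14.27) lies within
    distance 1 of the center x1 = (1/L, ..., 1/L) in R^(L-1)."""
    n = L - 1
    center = [1.0 / L] * n
    vertices = [[0.0] * n] + [[1.0 if j == i else 0.0 for j in range(n)]
                              for i in range(n)]
    dist = lambda v: sum((vi - ci) ** 2 for vi, ci in zip(v, center)) ** 0.5
    return all(dist(v) <= 1.0 for v in vertices)
```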
We can now directly apply the algorithm of section 14.4.3. We note a few
simplifications to the stopping criterion. First, the feasible set is clearly not empty,
so the algorithm will not terminate during a feasibility cut. Second, the stopping
criterion can be:

    until  a^T λ < Φ(λ)  or  φ_i(H_lqg) ≤ a_i for i = 1, ..., L.

The algorithm will terminate either when a is known to be infeasible (a^T λ < Φ(λ)),
or when a feasible design has been found.
This ellipsoid algorithm is guaranteed to terminate with a solution to our problem,
except in the case when the specification a is Pareto optimal, i.e., is itself an LQG-
optimal specification for some weight vector. In this exceptional case, H_lqg and λ
will converge to the unique design and associated weights that meet the specification
a. This exceptional case can be ruled out by adding a small tolerance to either of
the inequalities in the stopping criterion above.
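The termination logic, including the small tolerance suggested above, can be sketched as a simple status check (a hypothetical helper, with Φ(λ) and the φ_i(H_lqg) supplied as numbers):

```python
def stopping_status(a, lam, Phi_val, phi_vals, tol=0.0):
    """Stopping test: 'infeasible' when a^T lambda < Phi(lambda),
    'feasible design found' when phi_i(Hlqg) <= a_i for all i,
    otherwise 'continue'.  tol guards against the Pareto-optimal borderline."""
    if sum(ai * li for ai, li in zip(a, lam)) < Phi_val - tol:
        return "infeasible"
    if all(p <= ai + tol for p, ai in zip(phi_vals, a)):
        return "feasible design found"
    return "continue"
```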
14.5.1