
13.4.7 A Worst Case Norm
We consider the particular worst case norm described in section 5.1.3:
\[
\phi(H) = \|H\|_{\rm wc} = \sup\left\{ \|Hu\|_\infty \;\Bigm|\;
\|u\|_\infty \le M_{\rm ampl},\;\; \|\dot u\|_\infty \le M_{\rm slew} \right\}.
\]
We first rewrite $\phi$ as
\[
\phi(H) = \sup\left\{ \int_0^\infty v(t)h(t)\,dt \;\Bigm|\;
\|v\|_\infty \le M_{\rm ampl},\;\; \|\dot v\|_\infty \le M_{\rm slew} \right\},
\tag{13.5}
\]
where $h$ is the impulse response of $H$.
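Once time is discretized, the supremum in (13.5) becomes a linear program in the samples of $v$. The following is a minimal sketch of this computation, not taken from the text: the grid, horizon, bounds, and the placeholder impulse response are illustrative assumptions, and scipy.optimize.linprog is just one solver choice.

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Illustrative data: uniform grid, placeholder impulse response, bounds.
dt = 0.01
t = np.arange(0.0, 10.0, dt)
h = np.exp(-t) * np.sin(2 * t)      # stand-in for the impulse response h0(t)
M_ampl, M_slew = 1.0, 2.0

N = len(t)
c = -dt * h                         # maximize sum_k v_k h_k dt
D = np.diff(np.eye(N), axis=0)      # (N-1) x N forward-difference matrix
A_ub = np.vstack([D, -D])           # encodes |v_{k+1} - v_k| <= M_slew * dt
b_ub = np.full(2 * (N - 1), M_slew * dt)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(-M_ampl, M_ampl))
v0 = res.x                          # a maximizing signal v0
print("worst case norm estimate:", -res.fun)
\end{verbatim}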
Now for each signal $v$ we define the linear functional
\[
\phi_v(H) = \int_0^\infty v(t)h(t)\,dt.
\]
We can express the worst case norm as a maximum of a set of these linear functionals:
\[
\phi(H) = \sup\left\{ \phi_v(H) \;\Bigm|\;
\|v\|_\infty \le M_{\rm ampl},\;\; \|\dot v\|_\infty \le M_{\rm slew} \right\}.
\]
We proceed as follows to find a subgradient of $\phi$ at the transfer matrix $H_0$. Find a signal $v_0$ such that
\[
\|v_0\|_\infty \le M_{\rm ampl}, \qquad
\|\dot v_0\|_\infty \le M_{\rm slew}, \qquad
\int_0^\infty v_0(t)h_0(t)\,dt = \phi(H_0).
\]
(It can be shown that in this case there always is such a $v_0$; some methods for finding $v_0$ are described in the Notes and References for chapter 5.) Then a subgradient of $\phi$ at $H_0$ is given by
\[
\phi_{\rm sg}(H) = \phi_{v_0}(H).
\]
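Continuing the discretized sketch above, the maximizing signal $v_0$ returned by the linear program gives this subgradient directly: evaluating $\phi_{v_0}$ is a single inner product with the samples of the impulse response. The function name below follows the $\phi_{\rm sg}$ notation but the code is our illustration.

\begin{verbatim}
def phi_sg(h_samples):
    # phi_v0(H): the integral of v0(t) h(t) dt, discretized.
    return dt * float(np.dot(v0, h_samples))

# At h0 itself this recovers phi(H0), the optimal value of the LP:
assert abs(phi_sg(h) - (-res.fun)) < 1e-6
\end{verbatim}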
The same procedure works for any worst case norm: first, find a worst case input signal $u_0$ such that $\|H_0\|_{\rm wc} = \|H_0u_0\|_{\rm output}$. This task must usually be done to evaluate $\|H_0\|_{\rm wc}$ anyway. Now find any subgradient of the convex functional $\psi_{u_0}(H) = \|Hu_0\|_{\rm output}$; it will be a subgradient of $\|\cdot\|_{\rm wc}$ at $H_0$.
13.4.8 Subgradient for the Negative Dual Function
In this section we show how to find a subgradient for $-\eta$ at $\lambda$, where $\eta$ is the dual function introduced in section 3.6.2 and discussed in section 6.6. Recall from (6.8) of section 6.6 that $-\eta$ can be expressed as the maximum of a family of linear functions of $\lambda$; we can therefore use the maximum tool to find a subgradient.
We start by finding any $H_{\rm ach}$ such that
\[
\eta(\lambda) = \lambda_1\phi_1(H_{\rm ach}) + \cdots + \lambda_L\phi_L(H_{\rm ach}).
\]
Then a subgradient of $-\eta$ at $\lambda$ is given by
\[
g = -\left[\begin{array}{c}
\phi_1(H_{\rm ach}) \\ \vdots \\ \phi_L(H_{\rm ach})
\end{array}\right].
\]
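The recipe is independent of how the inner minimization is carried out. Below is a schematic sketch of our own construction: the convex functionals $\phi_i$ are placeholders restricted to a finite-dimensional design parameter $x$, and a generic solver stands in for whatever method computes $H_{\rm ach}$; none of the names come from the text.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Placeholder convex functionals phi_1, ..., phi_L of a design parameter x.
phis = [lambda x: np.sum((x - 1.0) ** 2),
        lambda x: np.sum(np.abs(x)),
        lambda x: np.max(x)]

def neg_dual_subgradient(lam, x0):
    # Inner problem: find x_ach achieving
    #   eta(lam) = inf_x  lam_1 phi_1(x) + ... + lam_L phi_L(x).
    lagrangian = lambda x: sum(l * p(x) for l, p in zip(lam, phis))
    x_ach = minimize(lagrangian, x0, method="Nelder-Mead").x
    # Subgradient of -eta at lam:
    return -np.array([p(x_ach) for p in phis])

g = neg_dual_subgradient(np.array([1.0, 0.5, 0.2]), x0=np.zeros(3))
\end{verbatim}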
13.5 Subgradients on a Finite-Dimensional Subspace
In the previous section we determined subgradients for many of the convex functionals we encountered in chapters 8--10. These subgradients are linear functionals on the infinite-dimensional space of transfer matrices; most numerical computation, however, will be done on finite-dimensional subspaces of transfer matrices (as we will see in chapter 15). In this section we show how the subgradients computed above can be used to calculate subgradients on finite-dimensional subspaces of transfer matrices.
Suppose that we have fixed transfer matrices $H_0, H_1, \ldots, H_N$, and $\phi$ is some convex functional on transfer matrices. We consider the convex function $\varphi : {\bf R}^N \to {\bf R}$ given by
\[
\varphi(x) = \phi(H_0 + x_1H_1 + \cdots + x_NH_N).
\]
To determine some $g \in \partial\varphi(\tilde x)$, we find a subgradient of $\phi$ at the transfer matrix $H_0 + \tilde x_1H_1 + \cdots + \tilde x_NH_N$, say $\phi_{\rm sg}$. Then
\[
g = \left[\begin{array}{c}
\phi_{\rm sg}(H_1) \\ \vdots \\ \phi_{\rm sg}(H_N)
\end{array}\right] \in \partial\varphi(\tilde x).
\]
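Assembling $g$ is mechanical once $\phi_{\rm sg}$ can be evaluated. Here is a minimal sketch, assuming, as in the sketches of section 13.4.7, that transfer matrices are represented by sampled impulse responses, so that $\phi_{\rm sg}$ is an ordinary linear function of the samples; the function name is ours.

\begin{verbatim}
import numpy as np

def subgradient_on_subspace(phi_sg, H_basis):
    # phi_sg: a subgradient functional of phi at H0 + x1 H1 + ... + xN HN.
    # H_basis: the list [H1, ..., HN], each given by its impulse response
    # samples.  Returns g = (phi_sg(H1), ..., phi_sg(HN)).
    return np.array([phi_sg(Hk) for Hk in H_basis])

# Example: with the worst case norm of section 13.4.7, phi_sg is the inner
# product with dt * v0, so g_k = dt * v0 @ hk for each basis response hk.
\end{verbatim}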
Let us give a specific example using our standard plant of section 2.4. Consider the weighted peak tracking error functional of section 11.1.2,
\[
\varphi_{\rm pk\,trk}(\theta) = \left\| W\left( \theta_1 H^{(a)}_{13}
+ \theta_2 H^{(b)}_{13} + (1-\theta_1-\theta_2)H^{(c)}_{13} \right)
\right\|_{\rm pk\,gn},
\]
where
\[
W = \frac{0.5}{s+0.5},
\]
\[
H^{(a)}_{13} = \frac{44.1s^3 + 334s^2 + 1034s + 390}
{s^6 + 20s^5 + 155s^4 + 586s^3 + 1115s^2 + 1034s + 390},
\]
\[
H^{(b)}_{13} = \frac{220s^3 + 222s^2 + 19015s + 7245}
{s^6 + 29.1s^5 + 297s^4 + 1805s^3 + 9882s^2 + 19015s + 7245},
\]
\[
H^{(c)}_{13} = \frac{95.1s^3 - 24.5s^2 + 9505s + 2449}
{s^6 + 33.9s^5 + 425s^4 + 2588s^3 + 8224s^2 + 9505s + 2449}.
\]
$\varphi_{\rm pk\,trk}$ has the form
\[
\varphi_{\rm pk\,trk}(\theta) = \left\| H_0 + \theta_1H_1 + \theta_2H_2
\right\|_{\rm pk\,gn},
\]
where
\[
H_0 = WH^{(c)}_{13}, \qquad
H_1 = W\left( H^{(a)}_{13} - H^{(c)}_{13} \right), \qquad
H_2 = W\left( H^{(b)}_{13} - H^{(c)}_{13} \right).
\]
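As a numerical check of this example, $\varphi_{\rm pk\,trk}$ can be evaluated by simulation: for a stable single-input single-output transfer function the peak gain norm equals the $L_1$ norm of the impulse response. The sketch below is ours; the time grid, horizon, and the use of scipy.signal are assumptions.

\begin{verbatim}
import numpy as np
from scipy import signal

# Coefficients of H13^(a), H13^(b), H13^(c) and the weight W from above.
num_a, den_a = [44.1, 334, 1034, 390], [1, 20, 155, 586, 1115, 1034, 390]
num_b, den_b = [220, 222, 19015, 7245], [1, 29.1, 297, 1805, 9882, 19015, 7245]
num_c, den_c = [95.1, -24.5, 9505, 2449], [1, 33.9, 425, 2588, 8224, 9505, 2449]
w_num, w_den = [0.5], [1, 0.5]                  # W = 0.5/(s + 0.5)

t = np.arange(0.0, 40.0, 0.005)                 # grid and horizon: our choice

def weighted_impulse(num, den):
    # Impulse response of W*H, with W applied by polynomial multiplication.
    sys = signal.lti(np.polymul(w_num, num), np.polymul(w_den, den))
    return signal.impulse(sys, T=t)[1]

ha = weighted_impulse(num_a, den_a)
hb = weighted_impulse(num_b, den_b)
hc = weighted_impulse(num_c, den_c)

def phi_pktrk(theta1, theta2):
    # Impulse response of H0 + theta1*H1 + theta2*H2, then its L1 norm.
    h = hc + theta1 * (ha - hc) + theta2 * (hb - hc)
    return np.sum(np.abs(h)) * (t[1] - t[0])

print(phi_pktrk(0.3, 0.4))
\end{verbatim}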