Optimal Multidimensional Signal Processing in Wireless Sensor Networks

Anatoli Torokhti^1 and Stanley Miklavcic^2

^1 School of Mathematics and Statistics, University of South Australia, Mawson Lakes, SA, Australia
^2 Phenomics and Bioinformatics Research Centre, School of Mathematics and Statistics, University of South Australia, Mawson Lakes, SA, Australia
Keywords: Multidimensional Signal Processing.

Abstract: Wireless sensor networks involve a set of spatially distributed sensors and a fusion center. Three methods for finding models of the sensors and the fusion center are proposed.
1 INTRODUCTION
Wireless sensor networks (WSNs) have recently emerged as a promising technology for a wide range of multimedia applications (Vaseghi, 2007). A related scenario involves a set of spatially distributed sensors making local observations $y_j$ correlated with a signal of interest $x$. Due to external and instrumental factors, the observations are noisy. Each sensor $Q_j$ transmits information about its measurements to a fusion center $P$ whose primary goal is to recover the original signal within a prescribed accuracy. Fig. 1 illustrates the case.
Figure 1: Block diagram of the WSN. Here, N designates a noisy environment, and $\tilde{x}_1, \ldots, \tilde{x}_p$ are estimates of $x_1, \ldots, x_p$.
It is widely recognized that efficient transmission strategies should reduce (compress) the amount of information transmitted by the sensors. In this paper, such efficient transmission strategies are studied. We propose a novel approach based on a reduction of the multidimensional signal processing problem in WSNs to a new optimization problem.
We adopt a transform-based approach to determine the optimal transmission strategies in WSNs. More precisely, each sensor applies a suitable linear transform $Q_j$ to its random observation vector $y_j$ with $n_j$ components so as to reduce its dimensionality to $r_j$ components. The fusion center applies a linear transform $P$ to reconstruct the random source vector of interest $x$ with $m$ components. Thus, $Q_j$ and $P$ are given by matrices $Q_j \in \mathbb{R}^{r_j \times n_j}$ and $P \in \mathbb{R}^{m \times r}$, respectively, where $r_j \le n_j$, $r = r_1 + \ldots + r_p$ and $r \le m$.
Let us write $(\Omega, \Sigma, \mu)$ for a probability space. For $i = 1,\ldots,p$, let $x_i \in L^2(\Omega, \mathbb{R}^{m_i})$ be a random signal with realizations $x_i = x_i(\omega) \in \mathbb{R}^{m_i}$. We denote
$$x = \begin{bmatrix} x_1 \\ \vdots \\ x_p \end{bmatrix} \quad\text{and}\quad \|x(\cdot)\|^2 = \int_\Omega \|x(\omega)\|_2^2 \, d\mu(\omega), \quad (1)$$
where $x \in L^2(\Omega, \mathbb{R}^m)$, $m = m_1 + \ldots + m_p$ and $\|x(\omega)\|_2$ is the Euclidean norm of $x(\omega) \in \mathbb{R}^m$. We also denote
$$y = \begin{bmatrix} y_1 \\ \vdots \\ y_p \end{bmatrix},$$
where $y_i \in L^2(\Omega, \mathbb{R}^{n_i})$ and $n = n_1 + \ldots + n_p$.
Let us define a sensor model $Q_i$ by the relation
$$[Q_i(y_i)](\omega) = Q_i[y_i(\omega)], \quad (2)$$
where $Q_i : L^2(\Omega,\mathbb{R}^{n_i}) \to L^2(\Omega,\mathbb{R}^{r_i})$ and $Q_i$ is a matrix, $Q_i \in \mathbb{R}^{r_i \times n_i}$. For
$$r = r_1 + \ldots + r_p, \quad (3)$$
a fusion center model, $P : L^2(\Omega,\mathbb{R}^{r}) \to L^2(\Omega,\mathbb{R}^{m})$, is defined similarly to (2), by a matrix $P \in \mathbb{R}^{m \times r}$.
Problem 1. For $j = 1,\ldots,p$, let $x_j$ and $y_j$ be reference signals and observed data, respectively. Find models of the sensors, $Q_1,\ldots,Q_p$, and a model of the fusion center, $P$, that provide
$$\min_{P,\, Q_1,\ldots,Q_p} \left\| x - P \begin{bmatrix} Q_1(y_1) \\ \vdots \\ Q_p(y_p) \end{bmatrix} \right\|^2. \quad (4)$$

Torokhti, A. and Miklavcic, S. Optimal Multidimensional Signal Processing in Wireless Sensor Networks. DOI: 10.5220/0004056101260129. In Proceedings of the International Conference on Signal Processing and Multimedia Applications and Wireless Information Networks and Systems (SIGMAP-2012), pages 126-129. ISBN: 978-989-8565-25-9. Copyright (c) 2012 SCITEPRESS (Science and Technology Publications, Lda.)
2 MAIN RESULTS
2.1 First Method: WSN Equipped with Orthogonal Data Converters
Let us extend the original problem (4) to a problem equipped with additional data converters, $G_1,\ldots,G_p$, such that they transform the observations $y_1,\ldots,y_p$ to vectors with the special property given by Definition 1 below. This property allows us to determine the solution in a quite simple way.
For $x$ and $y$ presented by $x = [x^{(1)},\ldots,x^{(m)}]^T$ and $y = [y^{(1)},\ldots,y^{(n)}]^T$, with $x^{(\ell)} \in L^2(\Omega,\mathbb{R})$ and $y^{(q)} \in L^2(\Omega,\mathbb{R})$ where $\ell = 1,\ldots,m$ and $q = 1,\ldots,n$, respectively, we write
$$E[xy^T] = E_{xy} = \left\{ \left\langle x^{(\ell)}, y^{(q)} \right\rangle \right\}_{\ell,q=1}^{m,n} \in \mathbb{R}^{m\times n}$$
and
$$\left\langle x^{(\ell)}, y^{(q)} \right\rangle = \int_\Omega x^{(\ell)}(\omega)\, y^{(q)}(\omega) \, d\mu(\omega).$$
Definition 1. For $i = 1,\ldots,p$, let
$$u_i = G_i(y_i),$$
where $G_i : L^2(\Omega,\mathbb{R}^{n_i}) \to L^2(\Omega,\mathbb{R}^{n_i})$. The data converters $G_1,\ldots,G_p$ are called pairwise orthogonal if
$$E_{u_i u_j} = O \quad\text{when } i \neq j, \quad (5)$$
where $O$ is the zero matrix.
The determination of the pairwise orthogonal data converters $G_1,\ldots,G_p$ is given in Lemma 1 below. Let us now extend problem (4) by including the data converters $G_1,\ldots,G_p$.
Problem 2. For $i = 1,\ldots,p$, find models of the sensors, $Q_1,\ldots,Q_p$, and a model of the fusion center, $P$, that provide
$$\min_{P,\, Q_1,\ldots,Q_p} \left\| x - P \begin{bmatrix} Q_1 G_1(y_1) \\ \vdots \\ Q_p G_p(y_p) \end{bmatrix} \right\|^2. \quad (6)$$
Let us denote by $M^\dagger$ the Moore-Penrose pseudoinverse of a matrix $M$. First, we give the models of the orthogonal data converters $G_1,\ldots,G_p$ that satisfy (5), as follows.
Lemma 1. Let $u_i = G_i(y_i)$ for $i = 1,\ldots,p$ and let $G_1,\ldots,G_p$ be such that
$$G_1(y_1) = y_1 \quad\text{and}\quad G_i(y_i) = y_i - \sum_{k=1}^{i-1} Z_{ik}(u_k) \quad (7)$$
for $i = 2,\ldots,p$, with $Z_{ik} : L^2(\Omega,\mathbb{R}^{n_k}) \to L^2(\Omega,\mathbb{R}^{n_i})$ defined by
$$Z_{ik} = E_{y_i u_k} E_{u_k u_k}^\dagger + M_{ik}\,(I - E_{u_k u_k} E_{u_k u_k}^\dagger) \quad (8)$$
with $M_{ik} \in \mathbb{R}^{n_i \times n_k}$ arbitrary. Then $G_1,\ldots,G_p$ are pairwise orthogonal data converters.
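Lemma 1 is a Gram-Schmidt-type orthogonalization of the observations. The following numpy sketch is a minimal illustration (not the authors' implementation): it estimates the expectations $E_{\cdot\,\cdot}$ by sample averages over $N$ realizations, takes the arbitrary matrices $M_{ik} = O$, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, N = 3, 4, 10_000            # p sensors, n components each, N realizations

# Correlated, zero-mean observations y_1, ..., y_p (columns = realizations).
base = rng.standard_normal((n, N))
y = [base + 0.5 * rng.standard_normal((n, N)) for _ in range(p)]

def E(a, b):
    """Sample estimate of the covariance E_{ab} = E[a b^T]."""
    return a @ b.T / N

# (7)-(8) with M_ik = O: u_1 = y_1 and u_i = y_i - sum_{k<i} Z_ik(u_k),
# where Z_ik = E_{y_i u_k} E_{u_k u_k}^+.
u = [y[0]]
for i in range(1, p):
    ui = y[i].copy()
    for k in range(i):
        ui -= E(y[i], u[k]) @ np.linalg.pinv(E(u[k], u[k])) @ u[k]
    u.append(ui)

# Pairwise orthogonality (5): E_{u_i u_j} vanishes for i != j.
for i in range(p):
    for j in range(p):
        if i != j:
            assert np.abs(E(u[i], u[j])).max() < 1e-8
```

With sample covariances the orthogonality holds exactly (up to rounding), since each step projects out the part of $y_i$ correlated with the earlier $u_k$.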
Next, to find a solution of Problem 2, we write $P = [P_1 \ldots P_p]$ where, for $j = 1,\ldots,p$, $P_j$ is defined by a matrix $P_j \in \mathbb{R}^{m\times r_j}$. Then
$$\left\| x - [P_1 \ldots P_p] \begin{bmatrix} Q_1 G_1(y_1) \\ \vdots \\ Q_p G_p(y_p) \end{bmatrix} \right\|^2 = \left\| x - [F_1,\ldots,F_p](u) \right\|^2, \quad (9)$$
where
$$F_i = P_i Q_i \quad\text{and}\quad u = [u_1^T,\ldots,u_p^T]^T. \quad (10)$$
Thus, problem (6) is reduced to the equivalent problem of finding $F_1,\ldots,F_p$ that solve
$$\min_{F_1,\ldots,F_p} \left\| x - [F_1,\ldots,F_p](u) \right\|^2 \quad (11)$$
subject to
$$\operatorname{rank} F_1 \le r_1, \;\ldots,\; \operatorname{rank} F_p \le r_p. \quad (12)$$
To find a solution of problem (11)–(12), we write
$$\left\| x - [F_1,\ldots,F_p](u) \right\|^2 = \|E_{xx}^{1/2}\|^2 - \|E_{xu}(E_{uu}^{1/2})^\dagger\|^2 + \|E_{xu}(E_{uu}^{1/2})^\dagger - F E_{uu}^{1/2}\|^2,$$
where $F = [F_1,\ldots,F_p]$.
Here, the only term that depends on $F_1,\ldots,F_p$ is
$$\|E_{xu}(E_{uu}^{1/2})^\dagger - [F_1,\ldots,F_p]\, E_{uu}^{1/2}\|^2 = \|A - [F_1,\ldots,F_p]\,C\|^2, \quad (13)$$
where $A = E_{xu}(E_{uu}^{1/2})^\dagger$ and $C = E_{uu}^{1/2}$. Due to property (5), the matrix $E_{uu}$ is block-diagonal,
$$E_{uu} = \begin{bmatrix} E_{u_1 u_1} & O & \ldots & O \\ O & E_{u_2 u_2} & \ldots & O \\ \vdots & \vdots & \ddots & \vdots \\ O & O & \ldots & E_{u_p u_p} \end{bmatrix}.$$
Therefore, the matrix $C$ is also block-diagonal,
$$C = \begin{bmatrix} C_{11} & O & \ldots & O \\ O & C_{22} & \ldots & O \\ \vdots & \vdots & \ddots & \vdots \\ O & O & \ldots & C_{pp} \end{bmatrix}.$$
OptimalMultidimensionalSignalProcessinginWirelessSensorNetworks
127
If we write $A = [A_1 \ldots A_p]$ where, for $j = 1,\ldots,p$, $A_j \in \mathbb{R}^{m\times n_j}$, then it follows from (13) that
$$\|A - [F_1,\ldots,F_p]\,C\|^2 = \sum_{j=1}^{p} \|A_j - F_j C_{jj}\|^2.$$
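The decoupling above is just the column-block structure of the Frobenius norm. A small numpy check (illustrative names, $p = 2$ sensors) confirms that a block-diagonal $C$ splits the cost into per-sensor terms:

```python
import numpy as np

rng = np.random.default_rng(3)
m, sizes = 3, [2, 4]                 # p = 2 sensors with n_1 = 2, n_2 = 4
A_blocks = [rng.standard_normal((m, nj)) for nj in sizes]
F_blocks = [rng.standard_normal((m, nj)) for nj in sizes]
C_blocks = [rng.standard_normal((nj, nj)) for nj in sizes]

# A = [A_1 A_2], F = [F_1 F_2] and block-diagonal C = diag(C_11, C_22).
A, F = np.hstack(A_blocks), np.hstack(F_blocks)
n = sum(sizes)
C = np.zeros((n, n))
C[:sizes[0], :sizes[0]] = C_blocks[0]
C[sizes[0]:, sizes[0]:] = C_blocks[1]

# ||A - [F_1,F_2]C||^2 = sum_j ||A_j - F_j C_jj||^2
lhs = np.linalg.norm(A - F @ C) ** 2
rhs = sum(np.linalg.norm(Aj - Fj @ Cjj) ** 2
          for Aj, Fj, Cjj in zip(A_blocks, F_blocks, C_blocks))
assert np.isclose(lhs, rhs)
```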
Thus, problem (11)–(12) is reduced to $p$ individual problems of finding $F_j$, for $j = 1,\ldots,p$, that solves
$$\min_{F_j} \|A_j - F_j C_{jj}\|^2 \quad\text{with}\quad \operatorname{rank} F_j \le r_j. \quad (14)$$
The solution has been given in (Torokhti and Friedland, 2009) as follows.
2.1.1 Best Rank-constrained Matrix Approximation
Let $\mathbb{C}^{m\times n}$ be the set of $m\times n$ complex-valued matrices, and denote by $\mathcal{R}(m,n,r) \subseteq \mathbb{C}^{m\times n}$ the variety of all $m\times n$ matrices of rank $r$ at most. Fix $A = [a_{ij}]_{i,j=1}^{m,n} \in \mathbb{C}^{m\times n}$. Then $A^* \in \mathbb{C}^{n\times m}$ is the conjugate transpose of $A$. Let the singular value decomposition (SVD) of $A$ be given by
$$A = U_A \Sigma_A V_A^*, \quad (15)$$
where $U_A \in \mathbb{C}^{m\times m}$ and $V_A \in \mathbb{C}^{n\times n}$ are unitary matrices, and $\Sigma_A := \operatorname{diag}(\sigma_1(A),\ldots,\sigma_{\min(m,n)}(A)) \in \mathbb{C}^{m\times n}$ is a generalized diagonal matrix with the singular values $\sigma_1(A) \ge \sigma_2(A) \ge \ldots \ge 0$ on the main diagonal. Let $U_A = [u_1\, u_2 \ldots u_m]$ and $V_A = [v_1\, v_2 \ldots v_n]$ be the representations of $U_A$ and $V_A$ in terms of their $m$ and $n$ columns, respectively.
Let
$$P_{A,L} := \sum_{i=1}^{\operatorname{rank} A} u_i u_i^* \in \mathbb{C}^{m\times m} \quad\text{and}\quad P_{A,R} := \sum_{i=1}^{\operatorname{rank} A} v_i v_i^* \in \mathbb{C}^{n\times n} \quad (16)$$
be the orthogonal projections onto the ranges of $A$ and $A^*$, respectively. Define a truncated SVD, $\{A\}_r$, of the matrix $A$ by
$$\{A\}_r := \sum_{i=1}^{r} \sigma_i(A)\, u_i v_i^* = U_{Ar} \Sigma_{Ar} V_{Ar}^* \in \mathbb{C}^{m\times n} \quad (17)$$
for $r = 1,\ldots,\operatorname{rank} A$, where
$$U_{Ar} = [u_1\, u_2 \ldots u_r], \quad \Sigma_{Ar} = \operatorname{diag}(\sigma_1(A),\ldots,\sigma_r(A)) \quad\text{and}\quad V_{Ar} = [v_1\, v_2 \ldots v_r]. \quad (18)$$
For $r > \operatorname{rank} A$, we write $\{A\}_r := A$ (or $\{A\}_r = \{A\}_{\operatorname{rank} A}$). For $1 \le r < \operatorname{rank} A$, the matrix $\{A\}_r$ is uniquely defined if and only if $\sigma_r(A) > \sigma_{r+1}(A)$.
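In numpy terms, the truncated SVD $\{A\}_r$ of (17) can be sketched as follows (illustrative code; the fallback for $r > \operatorname{rank} A$ follows the convention stated above):

```python
import numpy as np

def truncate(A, r):
    """{A}_r of (17): keep the r largest singular triplets of A."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    r = min(r, int(np.sum(s > 1e-12)))   # r > rank A: fall back to {A}_{rank A}
    return U[:, :r] * s[:r] @ Vh[:r, :]

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
A2 = truncate(A, 2)
assert np.linalg.matrix_rank(A2) <= 2

# Eckart-Young: {A}_r is a closest rank-r matrix in the Frobenius norm,
# at distance sqrt(sigma_{r+1}^2 + ...).
s = np.linalg.svd(A, compute_uv=False)
assert np.isclose(np.linalg.norm(A - A2), np.sqrt(s[2]**2 + s[3]**2))
```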
Recall that $A^\dagger := V_A \Sigma_A^\dagger U_A^* \in \mathbb{C}^{n\times m}$ is the Moore-Penrose generalized inverse of $A$, where
$$\Sigma_A^\dagger := \operatorname{diag}\!\left(\frac{1}{\sigma_1(A)},\ldots,\frac{1}{\sigma_{\operatorname{rank} A}(A)}, 0,\ldots,0\right) \in \mathbb{C}^{n\times m}.$$
Henceforth $\|\cdot\|$ designates the Frobenius norm.
Theorem 1 below provides a solution to the problem of finding a matrix $F$ that solves
$$\min_{F \in \mathcal{R}(p,q,r)} \|A - BFC\|. \quad (19)$$

Theorem 1. (Friedland and Torokhti, 2007) Let $A \in \mathbb{C}^{m\times n}$, $B \in \mathbb{C}^{m\times p}$ and $C \in \mathbb{C}^{q\times n}$ be given matrices. Let
$$L_B = (I_p - P_{B,R})\,S \quad\text{and}\quad L_C = T\,(I_q - P_{C,L}) \quad (20)$$
where $S \in \mathbb{C}^{p\times p}$ and $T \in \mathbb{C}^{q\times q}$ are any matrices, and $I_p$ is the $p\times p$ identity matrix. Then the matrix
$$F = (I_p + L_B)\, B^\dagger \{P_{B,L}\, A\, P_{C,R}\}_r\, C^\dagger (I_q + L_C) \quad (21)$$
is a minimizing matrix for the minimization problem (19). Any minimizing $F$ has the above form.
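For the special case $S = T = O$ (so $L_B = L_C = O$), Theorem 1 reduces to $F = B^\dagger \{P_{B,L} A P_{C,R}\}_r C^\dagger$. A numpy sketch under that assumption (illustrative sizes; `truncate` implements $\{\cdot\}_r$):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p_, q, r = 5, 6, 4, 4, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p_))
C = rng.standard_normal((q, n))

def truncate(M, r):
    """Truncated SVD {M}_r."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] * s[:r] @ Vh[:r, :]

Bp, Cp = np.linalg.pinv(B), np.linalg.pinv(C)
P_BL = B @ Bp        # orthogonal projection onto the range of B
P_CR = Cp @ C        # orthogonal projection onto the range of C^*

# (21) with S = T = O, i.e. L_B = L_C = O:
F = Bp @ truncate(P_BL @ A @ P_CR, r) @ Cp
assert np.linalg.matrix_rank(F) <= r

# Sanity check: the residual is not beaten by random rank-r competitors.
best = np.linalg.norm(A - B @ F @ C)
assert best <= np.linalg.norm(A) + 1e-9          # F = O is a competitor
for _ in range(50):
    G = rng.standard_normal((p_, r)) @ rng.standard_normal((r, q))
    assert best <= np.linalg.norm(A - B @ G @ C) + 1e-9
```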
2.1.2 Determination of Models of Sensors and Fusion Center that Satisfy (6)
It follows from (19) and (21) that a solution of the problem in (14) is a particular case of Theorem 1. Indeed, if in (19)–(21) we write $A_j$, $F_j$, $C_{jj}$ and $r_j$ instead of $A$, $F$, $C$ and $r$, respectively, and set $n = n_j$, $p = m$, $q = n_j$ and $B = I$, then (14) coincides with (19). Its solution follows from (21) in the form
$$F_j = \{A_j P_{C_{jj},R}\}_{r_j}\, C_{jj}^\dagger\, (I_{n_j} + L_{C_{jj}}), \quad (22)$$
where, similarly to $L_C$ in (20), $L_{C_{jj}} = T_j\,(I_{n_j} - P_{C_{jj},L})$ with $T_j$ any $n_j \times n_j$ matrix. The solution of problem (11)–(12) is given by (22) as well.
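With $T_j = O$, formula (22) becomes $F_j = \{A_j P_{C_{jj},R}\}_{r_j} C_{jj}^\dagger$. A brief numpy sketch under that assumption (illustrative sizes; when $C_{jj}$ is invertible the result agrees with the direct answer $\{A_j\}_{r_j} C_{jj}^{-1}$):

```python
import numpy as np

rng = np.random.default_rng(5)
m, nj, rj = 4, 5, 2
Aj = rng.standard_normal((m, nj))
Cjj = rng.standard_normal((nj, nj))      # generically invertible

def truncate(M, r):
    """Truncated SVD {M}_r."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] * s[:r] @ Vh[:r, :]

Cp = np.linalg.pinv(Cjj)
P_CR = Cp @ Cjj                          # projection onto range(C_jj^*)

# (22) with T_j = O, hence L_{C_jj} = O:
Fj = truncate(Aj @ P_CR, rj) @ Cp
assert np.linalg.matrix_rank(Fj) <= rj

# Invertible C_jj: (22) reduces to {A_j}_{r_j} C_jj^{-1}.
assert np.allclose(Fj, truncate(Aj, rj) @ np.linalg.inv(Cjj))
```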
Since (11)–(12) is equivalent to (6), it remains to show that models of the sensors, $Q_1,\ldots,Q_p$, and a model of the fusion center, $P$, that satisfy (6) follow from (22). To this end, we recall that, by (10), $F_j = P_j Q_j$, where $F_j$, $P_j$ and $Q_j$ are defined by matrices $F_j \in \mathbb{R}^{m\times n_j}$, $P_j \in \mathbb{R}^{m\times r_j}$ and $Q_j \in \mathbb{R}^{r_j\times n_j}$, respectively. The matrices $P_j$ and $Q_j$ are determined as follows. Let us write the SVD of $F_j$ in (22) as
$$F_j = U_{F_j} \Sigma_{F_j} V_{F_j}^T, \quad (23)$$
where the matrices
$$U_{F_j} = [u_{j1},\ldots,u_{jm}] \in \mathbb{R}^{m\times m}, \quad \Sigma_{F_j} = \operatorname{diag}(\sigma_1(F_j),\ldots,\sigma_{\min(m,n_j)}(F_j)) \in \mathbb{R}^{m\times n_j}$$
and
$$V_{F_j} = [v_{j1},\ldots,v_{jn_j}] \in \mathbb{R}^{n_j\times n_j}$$
are similar to the matrices $U_A$, $\Sigma_A$ and $V_A$ for the SVD of the matrix $A$ in (15), respectively. In particular, $\sigma_1(F_j),\ldots,\sigma_{\min(m,n_j)}(F_j)$ are the associated singular values.
Let
$$U_{F_j,r_j} = [u_{j1},\ldots,u_{jr_j}] \in \mathbb{R}^{m\times r_j}, \quad (24)$$
$$\Sigma_{F_j,r_j} = \operatorname{diag}(\sigma_1(F_j),\ldots,\sigma_{r_j}(F_j)) \in \mathbb{R}^{r_j\times r_j} \quad (25)$$
and
$$V_{F_j,r_j} = [v_{j1},\ldots,v_{jr_j}] \in \mathbb{R}^{n_j\times r_j}. \quad (26)$$
SIGMAP2012-InternationalConferenceonSignalProcessingandMultimediaApplications
128
Then $F_j$ in (22) can be written in the form $F_j = P_j Q_j$ where, for $j = 1,\ldots,p$,
$$P_j = U_{F_j,r_j} \Sigma_{F_j,r_j}, \quad Q_j = V_{F_j,r_j}^T, \quad (27)$$
or
$$P_j = U_{F_j,r_j} \in \mathbb{R}^{m\times r_j}, \quad Q_j = \Sigma_{F_j,r_j} V_{F_j,r_j}^T. \quad (28)$$
Thus, we have proved the following.

Theorem 2. The models of the sensors and the fusion center that satisfy (6) are given by the matrices $Q_1,\ldots,Q_p$ and $P = [P_1,\ldots,P_p]$, respectively, determined by (27) or (28).
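The splitting (27)/(28) is just the SVD of $F_j$ with the diagonal factor attached to either side. A numpy sketch (illustrative sizes; $F_j$ is taken to be an arbitrary rank-$r_j$ matrix, standing in for the solution (22)):

```python
import numpy as np

rng = np.random.default_rng(4)
m, nj, rj = 5, 7, 2
# A rank-r_j matrix F_j, standing in for the solution (22).
Fj = rng.standard_normal((m, rj)) @ rng.standard_normal((rj, nj))

U, s, Vh = np.linalg.svd(Fj, full_matrices=False)
Ur, Sr, Vr = U[:, :rj], np.diag(s[:rj]), Vh[:rj, :]

# (27): the sensor transmits SVD coefficients, the fusion center scales.
P27, Q27 = Ur @ Sr, Vr
# (28): the scaling is done at the sensor instead.
P28, Q28 = Ur, Sr @ Vr

assert P27.shape == (m, rj) and Q27.shape == (rj, nj)
assert np.allclose(P27 @ Q27, Fj) and np.allclose(P28 @ Q28, Fj)
```

Either factorization compresses each observation from $n_j$ to $r_j$ numbers at the sensor.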
2.2 Second Method: Direct Solution of WSN Problem (4)
Here, we consider a way to determine models of the sensors, $Q_1,\ldots,Q_p$, and the fusion center, $P$, for the case when the orthogonal data converters, $G_1,\ldots,G_p$ (see (6), Definition 1 and Lemma 1), are not used, i.e. when $Q_1,\ldots,Q_p$ and $P$ should satisfy (4).
In this case, similarly to (9) and (10), we have
$$\left\| x - [P_1 \ldots P_p] \begin{bmatrix} Q_1(y_1) \\ \vdots \\ Q_p(y_p) \end{bmatrix} \right\|^2 = \|E_{xx}^{1/2}\|^2 - \|E_{xy}(E_{yy}^{1/2})^\dagger\|^2 + \|E_{xy}(E_{yy}^{1/2})^\dagger - F E_{yy}^{1/2}\|^2,$$
where, as before, for $j = 1,\ldots,p$, $F_j = P_j Q_j$. Here, the only term that depends on $F_1,\ldots,F_p$ is
$$\|E_{xy}(E_{yy}^{1/2})^\dagger - [F_1,\ldots,F_p]\, E_{yy}^{1/2}\|^2 = \|A - [F_1,\ldots,F_p]\,C\|^2,$$
where $A = E_{xy}(E_{yy}^{1/2})^\dagger$ and $C = E_{yy}^{1/2}$. Thus, problem (4) is reduced to finding $F_j$, for $j = 1,\ldots,p$, that solve
$$\min_{F_1,\ldots,F_p} \|A - [F_1,\ldots,F_p]\,C\|^2 \quad (29)$$
subject to
$$\operatorname{rank} F_1 \le r_1, \;\ldots,\; \operatorname{rank} F_p \le r_p. \quad (30)$$
A difference from (13) is that in (29) the matrix $C$ is not block-diagonal. In this general case, a solution to problem (29)–(30), $F_1,\ldots,F_p$, follows from the extension of Theorem 1. This result will be provided at the conference. Then, for $j = 1,\ldots,p$, each matrix $F_j$ that satisfies (29)–(30) is presented in the form (27) or (28).
Thus, in this case, the models of the sensors and the fusion center that satisfy (4) are given by the matrices $Q_1,\ldots,Q_p$ and $P = [P_1,\ldots,P_p]$, respectively, determined by (27) or (28), provided that $F_1,\ldots,F_p$ solve (29)–(30).
2.3 Third Method: Approximate Solution of WSN Problem (4)
Here, we consider a method which represents a compromise between the first and second methods. In (29), the matrices $A = [A_1,\ldots,A_p]$ and $C$ can be represented in the form
$$A = \tilde{A}_1 + \ldots + \tilde{A}_p \quad\text{and}\quad C = [C_1^T,\ldots,C_p^T]^T, \quad (31)$$
respectively, where $\tilde{A}_1 = [A_1, O,\ldots,O]$, ..., $\tilde{A}_p = [O,\ldots,O, A_p]$ and, for $j = 1,\ldots,p$, $C_j \in \mathbb{R}^{n_j\times n}$ is a block of $C$. Then
$$\|A - [F_1,\ldots,F_p]\,C\|^2 = \Big\| \sum_{j=1}^{p} \tilde{A}_j - \sum_{j=1}^{p} F_j C_j \Big\|^2. \quad (32)$$
The latter motivates finding models of the sensors, $Q_1,\ldots,Q_p$, and the fusion center, $P = [P_1,\ldots,P_p]$, in the form $F_1 = P_1 Q_1$, ..., $F_p = P_p Q_p$, where $F_1,\ldots,F_p$ are determined from $p$ individual problems of finding $F_j$, for $j = 1,\ldots,p$, that solves
$$\min_{F_j} \|\tilde{A}_j - F_j C_j\|^2 \quad\text{with}\quad \operatorname{rank} F_j \le r_j. \quad (33)$$
A direct comparison with (14) shows that the problem in (33) is different from that in (14). This is because, for $j = 1,\ldots,p$, the matrices $A_j$, $C_{jj}$ and $\tilde{A}_j$, $C_j$ are different. Nevertheless, formally, the problems in (14) and (33) are similar. Therefore, the solution of (33) is given in the form (22), where the notation should be changed in accordance with that in (31)–(33).
As a result, the following theorem is true.

Theorem 3. The models of the sensors and the fusion center of the WSN that approximate the optimal models are given by matrices $Q_1,\ldots,Q_p$ and $P = [P_1,\ldots,P_p]$ determined by (27) or (28), where $A_j$ and $C_{jj}$ must be replaced with $\tilde{A}_j$ and $C_j$, respectively.
REFERENCES
Friedland, S. and Torokhti, A. (2007). Generalized rank-constrained matrix approximations. SIAM J. Matrix Anal. Appl., 29:656-659.

Torokhti, A. and Friedland, S. (2009). Towards theory of generic principal component analysis. J. Multivariate Analysis, 100:661-669.

Vaseghi, S. V. (2007). Multimedia signal processing: theory and applications in speech, music and communications. John Wiley and Sons.
OptimalMultidimensionalSignalProcessinginWirelessSensorNetworks
129