Robust Affine Projection Algorithm using Selectively Shrunk Error Component

Seung Hun Kim¹, Jae Jin Jeong¹, Gyogwon Koo¹ and Sang Woo Kim²

¹Department of Electrical Engineering, POSTECH, 77 Cheongam-Ro, Nam-Gu, 790-784, Pohang, Gyeongbuk, Republic of Korea
²Department of Electrical Engineering and the Department of Creative IT Engineering and Future IT Innovation Laboratory, POSTECH, 77 Cheongam-Ro, Nam-Gu, 790-784, Pohang, Gyeongbuk, Republic of Korea
Keywords: Adaptive Filter, Impulsive Noise, Variable Step-size, Affine Projection Sign Algorithm.
Abstract: A novel robust affine projection algorithm (APA) is proposed that selectively shrinks the components of the error vector according to their individual probabilities of being corrupted by impulsive noise. In existing robust APAs, if even one error component is corrupted by impulsive noise, all components of the error vector are shrunk by a common step size that is inversely proportional to the norm of the error vector. This improper scaling degrades performance when the impulsive noise probability or the projection order is high. In this paper, a modified minimization criterion that accounts for the individual error components is derived from a geometric interpretation. The performance of the proposed algorithm is verified, for a wide range of impulsive noise probabilities and a high projection order, in various system identification experiments including an abrupt system change. The proposed algorithm shows the fastest convergence rate and the lowest steady-state mean square deviation compared with previous robust APAs and a recent variable step-size affine projection sign algorithm.
1 INTRODUCTION
Adaptive filters are used in various fields, including echo cancellation, system identification, and active noise control, and they are designed according to the system environment and the designer's purpose (Sayed, 2003). The normalized least-mean-square (NLMS) algorithm is the most popular adaptive filtering algorithm because of its simple implementation, but its convergence degrades with a colored input signal or impulsive interference.
To overcome its weakness against colored input signals, the affine projection algorithm (APA) (Ozeki and Umeda, 1984) and modified versions of the APA (Kong et al., 2007; Kim et al., 2009; Shin et al., 2004; Paleologu et al., 2008) were suggested, but the convergence problem under impulsive noise remained. To improve robustness against impulsive noise, the affine projection sign algorithm (APSA) (Shao et al., 2010) was introduced, which combined the APA with the $L_1$-norm minimization concept of the normalized sign algorithm (Arikan et al., 1994). Since then, several variable step-size algorithms for the APSA have been studied (Shin et al., 2012; Yoo et al., 2014; Zhang and Zhang, 2013). However, the APSA converges slowly compared with the APA because it is derived from a minimization criterion on the $L_1$-norm of the error vector.
To give the APA robustness without losing its fast convergence rate, several approaches using modified step sizes designed to be robust against impulsive noise were introduced (Vega et al., 2010; Song and Park, 2014). These algorithms not only converge faster than the APSA but are also robust to impulsive noise. Nonetheless, when impulsive noise occurs frequently or a high projection order is needed, they show degraded performance. This is because they apply a common step size to all components of the error vector. Even if only one component of the error vector is corrupted, the uncorrupted components are also scaled down to extremely small values, and this inappropriate shrinkage undermines the filter performance. Therefore, the error components should be shrunk selectively, according to their individual probabilities of being corrupted.
This paper proposes a novel strategy for selectively shrinking the error components in the APA, with the aim of designing a robust APA whose performance does not degrade even with a high impulsive noise probability and a high projection order. Motivated by (Rey Vega et al., 2008), the strategy is derived from a geometric interpretation of the relationship between a hypersphere and hyperplanes. The hypersphere is centered at the current weight vector, and its squared radius is the expected squared norm of the difference between the current and previous weight vectors. The hyperplanes are the sets of weight vectors for which the corresponding a posteriori error components equal zero. Each hyperplane, one per error component, is tested individually for disturbance by impulsive noise: if a hyperplane lies outside the hypersphere, it is translated in parallel until it touches the hypersphere. The minimization criterion is obtained from this geometric interpretation, and the corresponding weight update equation is derived.
To verify the performance of the proposed algorithm, system identification simulations with randomly generated system coefficient vectors are performed. The proposed algorithm shows the fastest convergence rate and the lowest steady-state mean square deviation (MSD) compared with the previous robust APAs (Vega et al., 2010; Song and Park, 2014) and the recent variable step-size APSA (Yoo et al., 2014), for a wide range of impulsive noise probabilities and various projection orders.
Section 2 introduces the conventional affine projection algorithm and its geometric meaning. Section 3 reviews the robust projection concept introduced in (Vega et al., 2010; Rey Vega et al., 2008) and presents the proposed algorithm. Section 4 reports the performance of the proposed algorithm, and Section 5 concludes the paper.
2 CONVENTIONAL AFFINE PROJECTION ALGORITHM

In the conventional APA, the weight vector $\mathbf{w}_i$ is recursively updated from the previous weight vector $\mathbf{w}_{i-1}$, the input matrix $\mathbf{U}_i$, and the error vector $\mathbf{e}_i$ as follows:
$$\mathbf{w}_i = \mathbf{w}_{i-1} + \mu \mathbf{U}_i \left(\mathbf{U}_i^T \mathbf{U}_i\right)^{-1} \mathbf{e}_i, \qquad (1)$$
where $\mu$ is a step size, and
$$\mathbf{U}_i = [\mathbf{u}_i \ \mathbf{u}_{i-1} \ \cdots \ \mathbf{u}_{i-K+1}],$$
$$\mathbf{u}_i = [u(i) \ u(i-1) \ \cdots \ u(i-M+1)]^T,$$
$$\mathbf{e}_i = \mathbf{d}_i - \mathbf{U}_i^T \mathbf{w}_{i-1} = [e_1(i) \ e_2(i) \ \cdots \ e_K(i)]^T.$$
Here, $M$ is the length of the input vector, which equals the length of the unknown system coefficient vector $\mathbf{w}_o$, and $K$ is the number of input vectors used, often called the projection order. The error vector is obtained from the desired system output vector $\mathbf{d}_i = [d(i) \ d(i-1) \ \cdots \ d(i-K+1)]^T$. Each component of $\mathbf{d}_i$ is calculated from $d(i) = \mathbf{u}_i^T \mathbf{w}_o + v(i)$, where $v(i)$ is measurement noise.
For the specific case of $\mu = 1$, the weight update equation of the APA can be regarded as the solution of the following optimization problem:
$$\min_{\mathbf{w}_i} \|\mathbf{w}_i - \mathbf{w}_{i-1}\|^2 \quad \text{subject to} \quad \mathbf{d}_i = \mathbf{U}_i^T \mathbf{w}_i. \qquad (2)$$
If we define the hyperplane
$$H_j(i) \triangleq \{\mathbf{w} : d(i-j+1) - \mathbf{u}_{i-j+1}^T \mathbf{w} = 0\}, \qquad (3)$$
then $\mathbf{w}_i$ is the projection of $\mathbf{w}_{i-1}$ onto the intersection
$$\bigcap_{j=1}^{K} H_j(i). \qquad (4)$$
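For concreteness, the update (1) can be sketched in a few lines of NumPy. This is an illustrative sketch rather than code from the paper; the small regularizer `eps` added before solving the normal equations is an assumption for numerical safety and does not appear in (1).

```python
import numpy as np

def apa_update(w, U, d, mu=1.0, eps=1e-8):
    """One conventional APA iteration, cf. (1).

    w  : (M,)  previous weight vector w_{i-1}
    U  : (M,K) input matrix [u_i, u_{i-1}, ..., u_{i-K+1}]
    d  : (K,)  desired output vector d_i
    mu : step size
    """
    e = d - U.T @ w                             # a priori error e_i
    G = U.T @ U + eps * np.eye(U.shape[1])      # (regularized) Gram matrix U_i^T U_i
    w_new = w + mu * U @ np.linalg.solve(G, e)  # update (1)
    return w_new, e
```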
3 ROBUST PROJECTION ALGORITHM

In (Rey Vega et al., 2008), a robust NLMS algorithm was introduced. The algorithm was derived from the location relationship between a hypersphere and a hyperplane. The hypersphere has the squared radius $\delta_{i-1} = E\!\left[\|\mathbf{w}_i - \mathbf{w}_{i-1}\|^2\right]$ and is centered at $\mathbf{w}_{i-1}$, and the hyperplane $H_i$ is the set of all $\mathbf{w}$ satisfying $d(i) - \mathbf{u}_i^T \mathbf{w} = 0$. When impulsive noise corrupts the desired system output, $H_i$ lies outside the hypersphere, and $\mathbf{w}_i$ is obtained from the constricted projection onto $H_i$ that satisfies $\|\mathbf{w}_i - \mathbf{w}_{i-1}\| = \sqrt{\delta_{i-1}}$.
From a similar viewpoint, the robust APA was introduced in (Vega et al., 2010). When the intersection $\bigcap_{j=1}^{K} H_j(i)$ lies outside the hypersphere, $\mathbf{w}_i$ is obtained from the constricted projection onto the intersection that satisfies $\|\mathbf{w}_i - \mathbf{w}_{i-1}\| = \sqrt{\delta_{i-1}}$. The derived update rule depends on a variable step size that is inversely proportional to the norm of the error vector, and it is an approximate solution obtained under the assumption that points on the hypersphere are close to each other when $\delta$ is small.
However, existing robust APAs, including (Song and Park, 2014), multiply all components of $\mathbf{e}_i$ by the same step size. That is, if only one $e_j(i)$ is corrupted by impulsive noise, the other components of $\mathbf{e}_i$ are also shrunk even though they are uncorrupted. A more reasonable solution is to apply a selective step size to each error component according to its individual probability of being corrupted by impulsive noise, as illustrated below.
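As a toy numerical illustration of this argument (the numbers and the scalar `bound`, which loosely plays the role of $\sqrt{\delta_{i-1}}\|\mathbf{u}\|$, are hypothetical), a common scale factor crushes the clean components along with the impulsive one, while per-component scaling touches only the corrupted entry:

```python
import numpy as np

e = np.array([0.02, -0.03, 50.0, 0.01])   # only the third component is hit by an impulse
bound = 0.05                               # allowed error magnitude (illustrative)

common = min(1.0, bound / np.linalg.norm(e))      # one factor for the whole vector
selective = np.minimum(1.0, bound / np.abs(e))    # one factor per component

print(common * e)     # clean components are crushed to ~1e-5 scale as well
print(selective * e)  # clean components untouched, impulsive one clipped to the bound
```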
ICINCO2015-12thInternationalConferenceonInformaticsinControl,AutomationandRobotics
512
Figure 1: Geometric interpretation of the proposed algorithm for the simplest case (M = 3, K = 2).
To obtain a selective shrinking strategy, the proposed algorithm finds a new intersection by moving the hyperplanes that lie outside the hypersphere onto its surface. The distance between $\mathbf{w}_{i-1}$ and the $j$-th hyperplane is $h_j(i) = |e_j(i)| / \|\mathbf{u}_{i-j+1}\|$. That is, if $h_j(i)$ exceeds $\sqrt{\delta_{i-1}}$, the corresponding $H_j(i)$ is revised so that its distance becomes $\sqrt{\delta_{i-1}}$. From this, the modified minimization criterion with fixed step size $\mu = 1$ is obtained as follows:
$$\min_{\mathbf{w}_i} \|\mathbf{w}_i - \mathbf{w}_{i-1}\|^2 \quad \text{subject to} \quad \mathbf{f}_i = \mathbf{U}_i^T \mathbf{w}_i, \qquad (5)$$
where $\mathbf{f}_i$ is defined, for $1 \le j \le K$, by
$$f(i-j+1) = \mathbf{u}_{i-j+1}^T \mathbf{w}_{i-1} + \min\!\left(|e_j(i)|,\ \|\mathbf{u}_{i-j+1}\|\sqrt{\delta_{i-1}}\right)\operatorname{sign}(e_j(i)).$$
Note that when $h_j(i)$ does not exceed $\sqrt{\delta_{i-1}}$, $f(i-j+1)$ is identical to $d(i-j+1)$.
After solving (5), the new weight update equation for the proposed algorithm is obtained as
$$\mathbf{w}_i = \mathbf{w}_{i-1} + \mathbf{U}_i \left(\mathbf{U}_i^T \mathbf{U}_i\right)^{-1} \Sigma_i \mathbf{e}_i, \qquad (6)$$
where
$$\Sigma_i(m,n) = \begin{cases} \min\!\left(1,\ \dfrac{\sqrt{\delta_{i-1}}\,\|\mathbf{u}_{i-m+1}\|}{|e_m(i)|}\right), & \text{if } m = n, \\[4pt] 0, & \text{otherwise,} \end{cases}$$
and
$$\delta_i = \alpha\,\delta_{i-1} + (1-\alpha)\min\!\left(\delta_{i-1},\ \frac{e_1^2(i)}{\|\mathbf{u}_i\|^2}\right).$$
Here, as in (Vega et al., 2010), $\delta$ is updated using $e_1^2(i)/\|\mathbf{u}_i\|^2$ instead of $\|\mathbf{U}_i(\mathbf{U}_i^T\mathbf{U}_i)^{-1}\mathbf{e}_i\|^2$, because it has a lower computational complexity and gives better results, especially in an impulsive noise environment. Note that the proposed algorithm selectively shrinks the error components through the scale factor matrix $\Sigma_i$ in (6). Therefore, we name the proposed algorithm the selectively shrunk error APA (SSE-APA). A summary of the SSE-APA is given in Table 1.
Table 1: Proposed algorithm (SSE-APA) summary.

Initialization: $\delta_0 = 0.001$, $\kappa = 0.5$, $\Sigma_0 = \mathbf{I}_{K \times K}$
Loop: for $j = 1 : K$
  if $|e_j(i)| / \|\mathbf{u}_{i-j+1}\| > \sqrt{\delta_{i-1}}$
    $\Sigma_i(j,j) = \sqrt{\delta_{i-1}}\,\|\mathbf{u}_{i-j+1}\| / |e_j(i)|$
  else
    $\Sigma_i(j,j) = 1$
  end
end
Weight update equation: $\mathbf{w}_i = \mathbf{w}_{i-1} + \mathbf{U}_i \left(\mathbf{U}_i^T \mathbf{U}_i\right)^{-1} \Sigma_i \mathbf{e}_i$

The SSE-APA requires additional computation to calculate the scale factor matrix. The input vector norms can be obtained from the diagonal components of $\mathbf{U}_i^T \mathbf{U}_i$, so $2K$ multiplications and $K$ comparisons are needed. In addition, 2 multiplications, 1 addition, and 1 comparison are needed to compute the moving average $\delta_i$.
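A compact NumPy sketch of one SSE-APA iteration, combining the scale factors of Table 1 with the update (6) and the $\delta$ recursion, could look as follows; the regularizer `eps` and the function interface are assumptions, not part of the paper.

```python
import numpy as np

def sse_apa_update(w, U, d, delta, alpha, eps=1e-8):
    """One SSE-APA iteration (Table 1 scale factors + update (6))."""
    K = U.shape[1]
    e = d - U.T @ w                              # a priori error vector e_i
    u_norm = np.linalg.norm(U, axis=0)           # ||u_{i-j+1}|| for j = 1..K
    # Selective shrinkage: Sigma_i(j,j) = min(1, sqrt(delta)*||u_{i-j+1}|| / |e_j(i)|)
    sigma = np.minimum(1.0, np.sqrt(delta) * u_norm / (np.abs(e) + eps))
    G = U.T @ U + eps * np.eye(K)                # regularized Gram matrix
    w_new = w + U @ np.linalg.solve(G, sigma * e)                     # update (6)
    # Moving average: delta_i = alpha*delta + (1-alpha)*min(delta, e_1^2/||u_i||^2)
    delta_new = alpha * delta + (1 - alpha) * min(delta, e[0] ** 2 / (u_norm[0] ** 2 + eps))
    return w_new, delta_new, e
```

In this form, the per-component factors `sigma` reduce to all ones when no component exceeds the distance bound, recovering the conventional APA update with $\mu = 1$.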
In Fig. 1, a geometric representation of (6) is drawn. The plane $H_1(i)$ lies outside the sphere, so it is moved to $H_1'(i)$, and $\mathbf{w}_i$ is obtained from the projection onto the modified intersection. The simplest case with $M = 3$ and $K = 2$ is shown, because the hypersphere and hyperplanes cannot be visualized in higher dimensions.
4 SIMULATION RESULTS

To verify the performance of the proposed algorithm, system identification of a randomly generated $\mathbf{w}_o$ with length $M = 64$ was performed. The colored input sequence was obtained by filtering a zero-mean white Gaussian noise through a first-order autoregressive model with its pole at 0.9. Over 1000 independent trials, the noise was generated as $v_i = b_i + \eta_i$, where the background noise $b_i$ is a zero-mean white Gaussian noise and the added impulsive noise $\eta_i$ is the product of a Bernoulli process $\omega_i$ and a zero-mean white Gaussian noise $A_i$, i.e., $\eta_i = \omega_i A_i$. Here, we define the signal-to-background-noise ratio (SBR) and the signal-to-impulsive-noise ratio (SIR) as $\sigma_d^2/\sigma_b^2$ and $\sigma_d^2/\sigma_A^2$, respectively, where $\sigma_{(\cdot)}^2$ is the variance of the random sequence $(\cdot)$. The Bernoulli probability $\Pr(\omega = 1)$ was randomly selected within $(0.01, 0.3)$ for every independent trial. The parameters for (6) were chosen heuristically as $\delta_0 = 0.001$ and $\alpha = 1 - K/(\kappa M)$ with $\kappa = 0.5$. The parameter $\delta_0$ should be a small positive number, i.e., $\delta_0 < 1$, and $\kappa$ should be a positive number lower than 10.
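A minimal sketch of the simulated environment described above (AR(1) colored input with a pole at 0.9 and Bernoulli-Gaussian impulsive noise) is given below. The specific variance calculations and the sign convention for the SIR value are assumptions used only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 64, 100_000

w_o = rng.standard_normal(M)                     # randomly generated unknown system
u = rng.standard_normal(N)
for n in range(1, N):                            # AR(1) coloring, pole at 0.9
    u[n] += 0.9 * u[n - 1]

d_clean = np.convolve(u, w_o)[:N]                # noiseless output u_i^T w_o
sigma_d2 = np.var(d_clean)

sbr_db, sir_db = 30.0, -30.0                     # example SBR / SIR settings (assumed)
b = rng.normal(0.0, np.sqrt(sigma_d2 / 10 ** (sbr_db / 10)), N)   # background noise b_i
p = rng.uniform(0.01, 0.3)                       # Bernoulli probability Pr(omega = 1)
omega = (rng.random(N) < p).astype(float)        # Bernoulli process omega_i
A = rng.normal(0.0, np.sqrt(sigma_d2 / 10 ** (sir_db / 10)), N)   # impulse amplitudes A_i
d = d_clean + b + omega * A                      # desired signal d(i) = u_i^T w_o + v(i)
```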
Figure 2: NMSD learning curves of the conventional APSA ($\mu = 0.01, 0.001$), RVSS-APA (Vega et al., 2010), Scaler-APA (Song and Park, 2014), VSS-APSA (Yoo et al., 2014), and the proposed algorithm: (a) $K = 2$, (b) $K = 4$, (c) $K = 6$.
To compare the performance with other algorithms, normalized MSD (NMSD) learning curves are plotted in dB scale. Here, the NMSD is defined as $\|\tilde{\mathbf{w}}_i\|^2 / \|\mathbf{w}_o\|^2$, where the weight error vector is $\tilde{\mathbf{w}}_i = \mathbf{w}_o - \mathbf{w}_i$. The proposed SSE-APA is compared with the conventional APSA ($\mu = 0.01, 0.001$), VSS-APSA (Yoo et al., 2014), RVSS-APA (Vega et al., 2010), and Scaler-APA (Song and Park, 2014). For a fair comparison, the simulation results were generated using the parameter selection guidelines suggested in (Yoo et al., 2014; Vega et al., 2010; Song and Park, 2014).

For the first simulation, the noise ratios were set to SBR = 30 dB and SIR = −30 dB.
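For reference, the NMSD in dB used for the learning curves can be computed with a small helper like the following (an illustrative definition directly mirroring the formula above):

```python
import numpy as np

def nmsd_db(w_o, w_i):
    """Normalized MSD in dB: ||w_o - w_i||^2 / ||w_o||^2."""
    return 10.0 * np.log10(np.sum((w_o - w_i) ** 2) / np.sum(w_o ** 2))
```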
Figure 3: NMSD learning curves of the conventional APSA ($\mu = 0.01, 0.001$), RVSS-APA (Vega et al., 2010), Scaler-APA (Song and Park, 2014), VSS-APSA (Yoo et al., 2014), and the proposed algorithm: (a) SBR = 20 dB, (b) SIR = −40 dB.
Figure 2(a) presents the NMSD learning curves for $K = 2$. As can be seen, the proposed SSE-APA and the RVSS-APA had fast convergence rates and the lowest steady-state MSD compared with the other algorithms. When the projection order was increased to $K = 4$ (Figure 2(b)), however, the RVSS-APA and the Scaler-APA showed severe performance degradation, as explained above, whereas the proposed SSE-APA still had the fastest convergence rate and the lowest steady-state MSD. In Figure 2(c), the performance of the RVSS-APA and the Scaler-APA worsened further when the projection order was increased to $K = 6$; as expected, the proposed SSE-APA maintained consistently better performance despite the further increased projection order.

To assess the proposed algorithm in other environments, the SBR was lowered to 20 dB (Figure 3(a)) and the SIR to −40 dB (Figure 3(b)) for $K = 4$. Both changes scale up the noise, so the performance of all algorithms degraded. However, the proposed SSE-APA still showed the fastest convergence rate and the lowest steady-state MSD compared with the other algorithms.
Table 2: Reset algorithm.

Parameters: $V_T = 3M$, $V_D = \frac{15}{16}V_T$, $\xi = 10^{5}$, $\varepsilon = 10^{-6}$,
$\mathbf{M} = \operatorname{diag}(1, \cdots, 1, 0, \cdots, 0)$: $V_T - V_D$ ones followed by $V_D$ zeros.

Control update:
$\mathbf{c} = \operatorname{sort}\!\left[\dfrac{|e_1(i)|}{\|\mathbf{u}_i\| + \varepsilon} \ \cdots \ \dfrac{|e_1(i-V_T+1)|}{\|\mathbf{u}_{i-V_T+1}\| + \varepsilon}\right]$
if $\operatorname{mod}(i, V_T) = 0$
  $\text{ctrl}_{\text{new}} = \dfrac{\mathbf{c}^T \mathbf{M}\,\mathbf{c}}{V_T - V_D}$
end

Reset decision:
if $(\text{ctrl}_{\text{new}} - \text{ctrl}_{\text{old}}) / \delta_{i-1} > \xi$
  $\delta_i = \delta_0$
else
  $\delta_i = \alpha\,\delta_{i-1} + (1-\alpha)\min\!\left(\delta_{i-1},\ \dfrac{e_1^2(i)}{\|\mathbf{u}_i\|^2}\right)$
end
$\text{ctrl}_{\text{old}} = \text{ctrl}_{\text{new}}$
Figure 4: NMSD learning curves of the conventional APSA ($\mu = 0.01, 0.001$), RVSS-APA (Vega et al., 2010), VSS-APSA (Yoo et al., 2014), and the proposed algorithm with the reset algorithm ($\mathbf{w}_o \rightarrow -\mathbf{w}_o$ at iteration $5 \times 10^4$).
Another important concern regarding robustness of an adaptive filter is an abrupt change in the system coefficients. To track such a change successfully, most variable step-size algorithms that use a minimum operator need a reset algorithm. Therefore, we adopted for the proposed algorithm the reset algorithm introduced in (Rey Vega et al., 2008; Vega et al., 2010) (Table 2). To verify the tracking performance of the proposed algorithm, the sign of the system coefficient vector was reversed at the halfway iteration, i.e., $\mathbf{w}_o \rightarrow -\mathbf{w}_o$. The simulation was performed with $K = 4$ and the same values for the other parameters. The Scaler-APA was excluded because no reset algorithm was suggested in (Song and Park, 2014). As can be seen in Figure 4, the proposed algorithm showed the fastest convergence rate and the lowest steady-state MSD compared with the other algorithms even after the system change.
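A minimal sketch of the block-wise reset rule of Table 2 is given below. It assumes an ascending sort so that the selection matrix keeps the $V_T - V_D$ smallest normalized errors (a trimmed, impulse-resistant statistic), and it leaves the regular $\delta$ recursion of (6) to the main update loop; both the interface and these readings are assumptions.

```python
import numpy as np

def reset_check(e1_hist, u_norm_hist, ctrl_old, delta, delta_0,
                V_T, V_D, xi=1e5, eps=1e-6):
    """Reset decision of Table 2, called when mod(i, V_T) == 0.

    e1_hist, u_norm_hist : last V_T values of e_1(i) and ||u_i||.
    Returns the updated control statistic and the (possibly reset) delta.
    """
    c = np.sort(np.abs(e1_hist) / (u_norm_hist + eps))   # sorted normalized errors
    ctrl_new = np.sum(c[:V_T - V_D] ** 2) / (V_T - V_D)  # trimmed mean of squares (c^T M c)
    if (ctrl_new - ctrl_old) / delta > xi:               # abrupt change detected
        delta = delta_0                                   # reset delta to its initial value
    return ctrl_new, delta
```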
Figure 5: Acoustic echo cancellation in a double-talk situation: (a) room impulse response, (b) NMSD learning curves of the conventional APSA ($\mu = 0.01, 0.001$), RVSS-APA (Vega et al., 2010), VSS-APSA (Yoo et al., 2014), and the proposed algorithm ($K = 6$).
As a final simulation, to show the performance in a realistic application, the proposed algorithm was applied to acoustic echo cancellation in a double-talk situation. The room impulse response used is plotted in Figure 5(a). The far-end and near-end signals were real speech signals sampled at 8 kHz, and two 20-second near-end signals with 1000 times greater energy than the far-end signal were added before the halfway and final iterations, respectively. For the two sections in which the near-end signals interfered, i.e., $i \in (300000, 460000)$ and $(700000, 860000)$, the proposed algorithm showed a fast convergence rate and a low steady-state MSD compared with the other algorithms, as shown in Figure 5(b). The Scaler-APA was excluded because it is difficult to choose a proper value for its parameter $\beta$ (Sayin et al., 2014).
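As an illustrative sketch of how the double-talk condition can be emulated (the helper, the signal names, and the gain computation are assumptions; the paper uses real recorded speech), a high-energy near-end burst is simply added to the microphone signal over the two stated intervals:

```python
import numpy as np

def add_double_talk(d, near_end, intervals, energy_ratio=1000.0):
    """Add a near-end speech burst with `energy_ratio` times the far-end
    energy over the given sample intervals (illustrative helper)."""
    d = d.copy()
    for start, stop in intervals:
        seg = near_end[: stop - start]
        gain = np.sqrt(energy_ratio * np.mean(d[start:stop] ** 2)
                       / (np.mean(seg ** 2) + 1e-12))
        d[start:stop] += gain * seg
    return d

# Intervals from the experiment (8 kHz sampling, 20 s each):
# d = add_double_talk(d, near_end, [(300000, 460000), (700000, 860000)])
```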
5 CONCLUSIONS
A robust APA using selectively shrunk error components was proposed in this paper. In the APA, the error vector contains both components that are corrupted by impulsive noise and components that are not. Existing robust APAs apply a common step size to all error components of the error vector, so their performance degrades when the impulsive noise probability or the projection order is high. To overcome this, we proposed a modified minimization criterion that selectively shrinks the error components based on a geometric interpretation. The performance of the proposed SSE-APA was verified over a wide range of impulsive noise probabilities, various projection orders, lower SBR and SIR, and a system tracking scenario. The simulation results showed that the proposed SSE-APA consistently achieved the fastest convergence rate and the lowest steady-state MSD compared with existing robust APAs and a recent VSS-APSA.
ACKNOWLEDGEMENTS
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2013R1A1A2058975) and by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the ICT Consilience Creative Program (IITP-2015-R0346-15-1007) supervised by the IITP (Institute for Information & communications Technology Promotion).
REFERENCES
Arikan, O., Enis Cetin, A., and Erzin, E. (1994). Adaptive filtering for non-Gaussian stable processes. IEEE Signal Processing Letters, 1(11):163–165.
Kim, S.-E., Kong, S.-J., and Song, W.-J. (2009). An affine projection algorithm with evolving order. IEEE Signal Processing Letters, 16(11):937–940.
Kong, S.-J., Hwang, K.-Y., and Song, W.-J. (2007). An affine projection algorithm with dynamic selection of input vectors. IEEE Signal Processing Letters, 14(8):529–532.
Ozeki, K. and Umeda, T. (1984). An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties. Electronics and Communications in Japan (Part I: Communications), 67(5):19–27.
Paleologu, C., Benesty, J., and Ciochina, S. (2008). A variable step-size affine projection algorithm designed for acoustic echo cancellation. IEEE Transactions on Audio, Speech, and Language Processing, 16(8):1466–1478.
Rey Vega, L., Rey, H., Benesty, J., and Tressens, S. (2008). A new robust variable step-size NLMS algorithm. IEEE Transactions on Signal Processing, 56(5):1878–1893.
Sayed, A. H. (2003). Fundamentals of Adaptive Filtering. John Wiley & Sons.
Sayin, M., Vanli, N., and Kozat, S. (2014). A novel family of adaptive filtering algorithms based on the logarithmic cost. IEEE Transactions on Signal Processing, 62(17):4411–4424.
Shao, T., Zheng, Y. R., and Benesty, J. (2010). An affine projection sign algorithm robust against impulsive interferences. IEEE Signal Processing Letters, 17(4):327–330.
Shin, H.-C., Sayed, A. H., and Song, W.-J. (2004). Variable step-size NLMS and affine projection algorithms. IEEE Signal Processing Letters, 11(2):132–135.
Shin, J., Yoo, J., and Park, P. (2012). Variable step-size affine projection sign algorithm. Electronics Letters, 48(9):483–485.
Song, I. and Park, P. (2014). A variable step-size affine projection algorithm with a step-size scaler against impulsive measurement noise. Signal Processing, 96:321–324.
Vega, L. R., Rey, H., and Benesty, J. (2010). A robust variable step-size affine projection algorithm. Signal Processing, 90(9):2806–2810.
Yoo, J., Shin, J., and Park, P. (2014). Variable step-size affine projection sign algorithm.
Zhang, S. and Zhang, J. (2013). Modified variable step-size affine projection sign algorithm. Electronics Letters, 49(20):1264–1265.
ICINCO2015-12thInternationalConferenceonInformaticsinControl,AutomationandRobotics
516