
adjustment change/size in “Case 1” is 0.14 (14%)
for either the buffer elongation or the shrinkage
operation. That is, the average amplitude of dynamic
buffer tuning for “Case 1” is 0.28 (28%). The
traffic pattern of the TCP channel during the “Case
1” experiment is heavy-tailed, as confirmed by the
Selfis Tool (Karagiannis, 2003). The pattern for
“Case 2” is random because the mean m of the
distribution for the traffic trace is approximately
equal to its standard deviation σ (i.e. σ ≈ m).
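The σ ≈ m dispersion check can be illustrated along these lines (a minimal sketch, not the Selfis Tool’s LRD analysis; the `classify_traffic` helper and its `tolerance` parameter are assumptions made for this example):

```python
import random
import statistics

def classify_traffic(trace, tolerance=0.15):
    """Crude dispersion check: for a random (exponential-like) trace the
    standard deviation is approximately equal to the mean, whereas a much
    larger dispersion hints at a heavy-tailed distribution."""
    m = statistics.mean(trace)
    s = statistics.pstdev(trace)
    if abs(s - m) / m <= tolerance:
        return "random"             # sigma ~= m, consistent with e.g. Poisson traffic
    return "possibly heavy-tailed"  # confirm with a proper LRD test (e.g. Selfis)

# Example: exponentially distributed inter-arrival times (sigma ~= mean)
random.seed(1)
expo = [random.expovariate(1.0) for _ in range(10000)]
print(classify_traffic(expo))
```

A genuinely heavy-tailed trace would still need an LRD test (e.g. a Hurst-exponent estimate) for confirmation; this check only screens the first two moments.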
Table 1: A summary of the average buffer adjustment size and amplitude

                                        PID (Ip, 2001)  ONNC     FLC (Lin, 2003)  GAC (Wong, 2002)  Traffic pattern
  Case 1 – mean adjustment size         0.140           0.0265   0.0267           0.0299            Heavy-tailed (i.e. LRD)
  (mean adjustment amplitude)           ≈ 0.284         ≈ 0.053  ≈ 0.053          ≈ 0.06
  Case 2 – mean adjustment size         0.1373          0.0293   0.0296           0.0324            Random
  Mean adjustment size (in ratio)
  for 20 different study cases          0.139           0.0279   0.0282           0.0311
  Mean adjustment amplitude (in
  ratio) for the above 20 cases         ≈ 0.277         ≈ 0.056  ≈ 0.056          ≈ 0.062
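The size and amplitude figures of this kind can be reproduced from a buffer-tuning trace roughly as follows (an illustrative sketch; `mean_adjustment_stats` is a hypothetical helper, and the amplitude is taken here as elongation size plus shrinkage size, matching the Case 1 figures of 0.14 each way giving ≈ 0.28):

```python
def mean_adjustment_stats(buffer_sizes):
    """Mean per-step adjustment size (|relative change| per tuning step) and
    mean amplitude (mean elongation size + mean shrinkage size)."""
    changes = [(b - a) / a for a, b in zip(buffer_sizes, buffer_sizes[1:])]
    grows   = [c for c in changes if c > 0]     # elongation steps
    shrinks = [-c for c in changes if c < 0]    # shrinkage steps
    mean_size = sum(abs(c) for c in changes) / len(changes)
    amplitude = (sum(grows) / len(grows) if grows else 0.0) + \
                (sum(shrinks) / len(shrinks) if shrinks else 0.0)
    return mean_size, amplitude

# A buffer alternately elongated and shrunk by 14% per step
sizes = [100.0]
for i in range(10):
    sizes.append(sizes[-1] * (1.14 if i % 2 == 0 else 0.86))
size, amp = mean_adjustment_stats(sizes)   # size ~= 0.14, amp ~= 0.28
```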
5 CONCLUSION
The HBP technique is proposed for dynamic, cyclical
optimization of FF neural network configurations
(e.g. the NNC). The optimization is, in effect, virtual
pruning of the insignificant NN connections. With
the NNC, every HBP optimization cycle starts from
the same skeletal configuration. The experimental
results show that the interim ONNC versions always
have a shorter control cycle time. The ONNC model
is, in fact, the “HBP+NNC” combination. The
verification results indicate that the average ONNC
control cycle time is 14.3 percent less than that of
the un-optimized NNC predecessor. In the
optimization process the connections of the NNC are
evaluated, and those that have an insignificant impact
on the dynamic buffer tuning process are marked and
virtually pruned. The pruning is logical or
virtual because it does not physically remove the
connections but merely excludes them from the neural
computation. The HBP is applied only at the
stage when the Learner has just completed training,
before it swaps to assume the role of the Chief.
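The virtual pruning idea can be sketched with a connection mask (an illustrative assumption, not the authors’ HBP implementation; `forward`, `virtual_prune` and the 0.05 threshold are invented for this example):

```python
import math

def virtual_prune(weights, threshold=0.05):
    """Mark connections with |w| below the threshold as insignificant
    (mask entry 0). Nothing is physically removed, so the mask can be
    reset to the full skeletal configuration at the next cycle."""
    return [[0 if abs(w) < threshold else 1 for w in row] for row in weights]

def forward(x, weights, mask):
    """One FF layer in which masked connections stay stored in `weights`
    but contribute nothing to the weighted sums."""
    out = []
    for w_row, m_row in zip(weights, mask):
        s = sum(w * m * xi for w, m, xi in zip(w_row, m_row, x))
        out.append(1.0 / (1.0 + math.exp(-s)))   # sigmoid activation
    return out

W = [[0.8, 0.01, -0.6],
     [0.02, 0.9, 0.03]]
mask = virtual_prune(W)          # [[1, 0, 1], [0, 1, 0]]
y = forward([1.0, 1.0, 1.0], W, mask)
```

Because the weights are retained, “un-pruning” is just resetting the mask to all ones, which is what lets each HBP cycle start from the same skeletal configuration.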
ACKNOWLEDGEMENT
The authors thank the Hong Kong PolyU for the
research grants: A-PF75 and G-T426.
REFERENCES
B. Braden et al., Recommendations on Queue Management
and Congestion Avoidance in the Internet, RFC 2309,
April 1998
Allan K.Y. Wong, Wilfred W.K. Lin, May T.W. Ip, and
Tharam S. Dillon, Genetic Algorithm and PID Control
Together for Dynamic Anticipative Marginal Buffer
Management: An Effective Approach to Enhance
Dependability and Performance for Distributed Mobile
Object-Based Real-time Computing over the Internet,
Journal of Parallel and Distributed Computing (JPDC),
vol. 62, Sept. 2002
A. R. Gallant and H. White, On Learning the Derivatives
of an Unknown Mapping with Multilayer Feedforward
Networks, Neural Networks, vol. 5, 1992
M. Hagan et al, Neural Network Design, PWS Publishing
Company, 1996
T. Karagiannis, M. Faloutsos, M. Molle, A User-friendly
Self-similarity Analysis Tool, ACM SIGCOMM
Computer Communication Review, 33(3), July 2003,
81-93
Wilfred W.K. Lin, An Adaptive IEPM (Internet End-to-
End Performance Measurement) Based Approach to
Enhance Fault Tolerance and Performance in Object
Based Distributed Computing over a Sizeable
Network (Exemplified by the Internet), MPhil Thesis,
Department of Computing, Hong Kong PolyU, 2002
Wilfred W.K. Lin, Allan K.Y. Wong and Tharam S.
Dillon, HBM: A Suitable Neural Network Pruning
Technique to Optimize the Execution Time of the
Novel Neural Network Controller (NNC) that
Eliminates Buffer Overflow, Proc. of Parallel and
Distributed Processing Techniques and Applications
(PDPTA), vol. 2, Las Vegas USA, June 2003
May T.W. Ip, Wilfred W.K. Lin, Allan K.Y. Wong,
Tharam S. Dillon and Dian Hui Wang, An Adaptive
Buffer Management Algorithm for Enhancing
Dependability and Performance in Mobile-Object-
Based Real-time Computing, Proc. of the IEEE
ISORC'2001, Magdeburg, Germany, May 2001
Intel VTune,
http://developer.intel.com/software/products/vtune/
HBP: A NOVEL TECHNIQUE FOR DYNAMIC OPTIMIZATION OF THE FEED-FORWARD NEURAL NETWORK CONFIGURATION