Artificial Intelligence Modelling Methodologies Applied
to a Polymerization Process
Silvia Curteanu¹, Elena-Niculina Dragoi¹, Florin Leon² and Cristina Butnariu¹
¹ “Gheorghe Asachi” Technical University of Iasi, Faculty of Chemical Engineering and Environmental Protection, 73, Prof. dr. doc. D. Mangeron Blvd., 700050, Iasi, Romania
² Faculty of Automatic Control and Computer Engineering, Bucharest, Romania
Keywords: Neural Networks, Support Vector Machines, Differential Evolution, Clonal Selection, Polymerization.
Abstract: A series of modelling methodologies based on artificial intelligence tools are applied to solve a complex
real-world problem. Neural networks and support vector machines are used as models and differential
evolution and clonal selection algorithms as optimizers for structural and parametric optimization of the
models. The goal is to make a comparative analysis of these methods for the case study of the free radical
polymerization of styrene, a complex, difficult-to-model process, where the monomer conversion and
molecular masses are predicted as a function of the reaction conditions, i.e. temperature, amount of initiator and
time. Four modelling methodologies are developed and evaluated in terms of accuracy.
1 INTRODUCTION
Artificial neural networks (ANNs) are recommended
tools for modelling complex nonlinear processes
because they require only input-output data, with no
need for in-depth knowledge of the rules governing
the system. They often lead to accurate results and
can be integrated into optimal control procedures.
Beside ANNs, support vector machines (SVMs)
are gaining popularity over other learning methods,
mainly due to their good generalization capability
(Burges, 1998). Another important advantage is that
SVMs perform well on high dimensional problems,
and there is ongoing research on improving their
scalability (Wang et al., 2011; Zhang et al., 2012).
Developing optimal ANN or SVM models with
the adequate parameters is not an easy task. In the
trial-and-error method (frequently applied by the
majority of researchers, especially from engineering
domains), the architecture is repeatedly modified by
hand and evaluated with the goal of lowering the
error. These repeated actions increase the
computational overhead and the search is usually
based on gradient descent, whose result is prone to
being trapped in local minima (Cartwright and
Curteanu, 2013). In the polymerization field, the use
of ANN and SVM is increasing. Different types of
processes are modelled with these techniques, as
shown by several review works (Noor et al.,
2010; Cartwright and Curteanu, 2013).
Evolutionary algorithms (EAs) are promising
methods for optimizing both the architecture and the
internal ANN parameters (Almeida and Ludermir,
2008a; Almeida and Ludermir, 2008b). Among all
EAs, differential evolution (DE) is an especially
powerful approach. Its efficiency lies in a simple,
compact structure that uses stochastic direct search
(Subudhi and Jena, 2009). A series of applications
recommend it as an efficient tool, particularly for
highly non-linear objective functions. For instance,
Lahiri and Ghanta (2009) developed a method which
incorporates a hybrid ANN-DE technique for
ANN parameter tuning. The algorithm was applied
for the prediction of the hold-up of solid-liquid
slurry flow. The oxygen mass transfer in the
presence of oxygen vectors was modelled using a
feed forward multilayer perceptron neural network
with parameters optimized using two DE-based
versions: classical and self-adaptive (Dragoi et al.,
2011). In combination with neural networks, a
modified DE version, including two initialization
strategies (normal distribution and normal
distribution combined with the opposition-based
principle) and a modified mutation, was applied for
modelling the oxygen transfer when n-dodecane is
added in aerobic fermentation systems of bacteria
(Dragoi et al., 2013a). The pharmaceutical freeze
drying process was studied from multiple points of
view (modelling and system identification) using a
hybrid combination of DE with ANNs and back-
propagation as a local search procedure (Dragoi et
al., 2012a; Dragoi et al., 2013b). The classification
of some organic compounds based on their liquid
crystalline property was performed using ANNs
optimized with two different self-adaptive versions
of the DE algorithm (Dragoi et al., 2012b).
Another optimization tool used in this work is
the clonal selection (CS) algorithm, which belongs
to the artificial immune system (AIS) class. AIS is a
group of computational methods represented by
highly abstract models of biological immune
systems (Castro and Timmis, 2003). The main
motivation of using immune systems as a source of
inspiration for computational systems resides in their
capabilities related to self-evolution, self-
organization and self-sustainability (Ahmad and
Narayanan, 2011). In addition, unlike other
biological systems such as the nervous system, the
immune system is not centrally controlled and
therefore detection and response can be locally
executed (Dasgupta and Nino, 2009). Related to the
combination of AISs with ANNs for chemical
engineering processes, to the authors’ knowledge,
only a few studies can be found (Tao et al., 2012).
For instance, different variations of CS-ANN were
applied by our group for the removal of heavy
metals from residual water (Dragoi et al., 2012c) and
for the optimization of CO2 absorption in pneumatic
contactors (Cozma et al., 2013).
In this paper, ANNs and SVMs are employed as
acceptable alternatives to the phenomenological
models, which are difficult to develop and solve
with satisfactory accuracy for a complex
polymerization process. The goal is to perform a
comparative analysis of different approaches and to
identify the best method that generates simple but
highly efficient models. The modelling
methodologies include ANNs and SVMs, optimized
with techniques such as DE, CS or grid search. One
must emphasize the benefits of the hybrid modelling
techniques in terms of accuracy and in connection
with the particularities of the process. The novelty of
this approach lies in several aspects: the application
of different modelling techniques for a complex
chemical process, new elements introduced in the
DE optimizer and a novel DE-SVM combination.
2 DATABASE
The case study of our research work is the free
radical polymerization of styrene performed through
batch suspension technique. A complete
mathematical model previously elaborated and
solved (Curteanu, 2003) is used here as a simulator
for producing the working database. The model
contains the balance equations for the monomer
conversion, initiator concentration, distribution
moments of radicals and dead polymer and also,
equations that take into account diffusion constraints
(gel and glass effects). This last part is difficult to
model with satisfactory accuracy; therefore, input-output
data-driven models are recommended alternatives.
Data quality and quantity are essential for
modelling with machine learning techniques. In this
case, the collected data was chosen to cover the
whole domain of interest for the studied process and
to be uniformly distributed within this domain. Thus,
for the initiator concentration and temperature, the
ranges specific to the suspension polymerization of
styrene were 10-55 mol/l (variation step 5) and 60-
90 °C (variation step 10), respectively. Regarding
the reaction time, the interval was 0 to 2000 minutes,
because for lower concentrations of initiator and
lower temperature, the reaction time is longer
(Curteanu et al., 2010).
After the data was generated, an internal step of
data pre-processing was applied. This included
normalization, randomization and splitting the data
into training and testing subsets. The normalization
was achieved using the 0-1 method:
x_norm = (x - x_min) / (x_max - x_min)                    (1)
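As an illustration, the min-max scaling and its inverse (needed to map predictions back to physical units) can be written as in the sketch below; the array layout and the example bounds are assumptions made for the example only.

```python
import numpy as np

def normalize_01(data):
    """Scale each column of `data` to [0, 1] using min-max normalization."""
    col_min = data.min(axis=0)
    col_max = data.max(axis=0)
    return (data - col_min) / (col_max - col_min), col_min, col_max

def denormalize_01(scaled, col_min, col_max):
    """Map normalized values back to their original range."""
    return scaled * (col_max - col_min) + col_min

# Illustrative bounds for the three inputs (I0, T, t) taken from the text
raw = np.array([[10.0, 60.0,    0.0],
                [55.0, 90.0, 2000.0]])
norm, lo, hi = normalize_01(raw)
```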
Concerning data randomization, it was applied in
such a manner that all points from a single initiator-
temperature combination belong to either the
training or the testing set. In this way a separation
between experiments is maintained and the testing of
the model is not based on individual points but on an
entire experiment. The amount of training data is
80%, with the remaining 20% used for testing.
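A possible way to implement this experiment-wise split is sketched below; the data layout and the grouping key are assumptions, and only the idea of keeping each initiator-temperature experiment intact, together with the 80/20 ratio, follows the text.

```python
import numpy as np

def split_by_experiment(samples, experiment_ids, train_fraction=0.8, seed=0):
    """Split samples so that every point of one (I0, T) experiment ends up
    entirely in either the training or the testing subset."""
    rng = np.random.default_rng(seed)
    experiments = np.unique(experiment_ids)
    rng.shuffle(experiments)
    n_train = int(round(train_fraction * len(experiments)))
    train_groups = set(experiments[:n_train])
    train_mask = np.array([g in train_groups for g in experiment_ids])
    return samples[train_mask], samples[~train_mask]
```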
For the styrene polymerization process
considered here, the model input variables were
chosen as: initiator concentration, I0, temperature, T,
and reaction time, t. The other two variables,
monomer conversion, x, and number average
molecular weight, Mn, represent the outputs of the
models. The modelling techniques aim to provide
predictions about the main properties (molecular
mass) and reaction characteristics (conversion) as a
function of the working conditions.
SIMULTECH2014-4thInternationalConferenceonSimulationandModelingMethodologies,Technologiesand
Applications
44
3 MODELLING METHODOLOGY
Two modelling approaches, ANN and SVM, were
applied to solve this real-world chemical
engineering problem. Since both ANN and SVM
have some parameters that need to be tuned in order
to obtain optimal results, two bio-inspired
algorithms, DE and CS, were applied and compared
for model optimization.
3.1 Optimizing Neural Networks
with Differential Evolution
DE, an algorithm based on the evolutionary
paradigm, is used to simultaneously perform
parametric and structural optimization of the neural
network model for the styrene polymerization. The
variant used in this work, called SADE-NN-2
(Dragoi et al., 2012a), is a combination of a self-adaptive
DE with ANNs and back-propagation (BP).
DE has the role of performing a global search, while
BP locally improves the best solution found in each
generation. This intertwinement of the two
algorithms is possible because all DE individuals are
in fact ANNs.
As in the case of all evolutionary algorithms, the
evolution of the population occurs by applying
mutation, recombination and selection steps until a
stopping criterion is met. Initially, a set of potential
solutions is generated randomly; in this work, a
Gaussian distribution is used. After that, mutation
has the role of adding diversity, and the population
of mutants is combined in the crossover step with
the current one to create a trial population.
In the DE case, two types of crossover can be
encountered: binomial (each characteristic of the
trial individual is randomly copied from one of the
two parents) or exponential (blocks of characteristics
are inherited alternately from the parents). In
SADE-NN-2, the binomial version is used, as it was
observed that efficiency is increased only for a small
number of case studies when the exponential version
is employed.
Concerning the selection step, where the trial
individual competes with the current individual for
the right to participate in the next generation, a
tournament version with a “one-to-one” survival
criterion is used.
The characteristic feature of DE when compared
with other EAs is the mutation step in which a trial
vector is generated by adding to a base individual a
scaled differential term (Price et al., 2005). In
SADE-NN-2, this mutation principle is modified,
such that the individuals participating in the
differential terms are sorted based on their fitness.
Since it was observed that in various situations the
DE version called DE/Best/2/Bin (where the base
vector is represented by the best individual in the
population, 2 differential terms are employed and
the crossover type is binomial) obtains acceptable
results, this version was considered as a base for the
current study.
As SADE-NN-2 is a self-adaptive method (in
which the F and Cr control parameters are included
in the optimization procedure, i.e. they evolve
simultaneously with the individuals), a separate
procedure for parameter tuning was not required.
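For orientation, a minimal sketch of one DE/Best/2/bin generation with binomial crossover and one-to-one survival is given below. It keeps F and Cr fixed and omits the self-adaptation, the fitness-based sorting of the differential terms and the back-propagation refinement, so it illustrates the base scheme rather than the SADE-NN-2 implementation.

```python
import numpy as np

def de_best_2_bin_step(pop, fitness, objective, F=0.5, Cr=0.9, rng=None):
    """One generation of DE/Best/2/bin with one-to-one tournament selection.
    `pop` has shape (NP, D); lower fitness is better."""
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    best = pop[np.argmin(fitness)]
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(NP):
        r1, r2, r3, r4 = rng.choice(NP, size=4, replace=False)
        # base vector = best individual, plus two scaled differential terms
        mutant = best + F * (pop[r1] - pop[r2]) + F * (pop[r3] - pop[r4])
        # binomial crossover: copy each gene from the mutant with probability Cr
        cross = rng.random(D) < Cr
        cross[rng.integers(D)] = True            # guarantee at least one gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = objective(trial)
        if f_trial <= fitness[i]:                # one-to-one survival criterion
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit
```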
A direct encoding with real values was chosen
for the ANNs, because it is the least expensive
computationally. For each position in the population,
at each generation, at least one decoding procedure
is required. The ANN parameters chosen for
encoding are the number of hidden layers, the
number of neurons in each hidden layer, the weights,
the biases and the activation functions. Unlike the
majority of applications where the variation of the
activation functions is performed at the layer level,
in SADE-NN-2 the variation is applied at the
neuron level.
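The fragment below shows how such a flat, real-valued individual could be decoded into a one-hidden-layer network with a per-neuron activation choice; the vector layout and the candidate activation functions are illustrative assumptions, not the exact encoding used in SADE-NN-2.

```python
import numpy as np

ACTIVATIONS = [np.tanh, lambda z: 1.0 / (1.0 + np.exp(-z)), lambda z: z]

def decode_and_run(vector, n_inputs, n_hidden, x):
    """Decode a flat real-valued vector into a one-hidden-layer MLP in which
    every hidden neuron carries its own activation-function index, then
    evaluate it on the input sample `x`."""
    idx = 0
    w1 = vector[idx:idx + n_inputs * n_hidden].reshape(n_hidden, n_inputs)
    idx += n_inputs * n_hidden
    b1 = vector[idx:idx + n_hidden]; idx += n_hidden
    act_idx = (np.abs(vector[idx:idx + n_hidden]) % len(ACTIVATIONS)).astype(int)
    idx += n_hidden
    w2 = vector[idx:idx + n_hidden]; idx += n_hidden
    b2 = vector[idx]
    hidden_in = w1 @ x + b1
    hidden_out = np.array([ACTIVATIONS[a](h) for a, h in zip(act_idx, hidden_in)])
    return float(w2 @ hidden_out + b2)

rng = np.random.default_rng(0)
n_in, n_hid = 3, 19                       # e.g. a 3:19:1 topology
vec = rng.normal(size=n_in * n_hid + 3 * n_hid + 1)
y = decode_and_run(vec, n_in, n_hid, np.array([0.5, 0.3, 0.7]))
```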
3.2 Optimizing Neural Networks with
Clonal Selection
The second algorithm employed for neural network
optimization is clonal selection. It describes the
basic characteristics of the immune response when
an antigenic stimulus is applied to a vertebrate
(Abdul Hamid and Abdul Rahman, 2010).
The main immunological principles used are: a
specific memory set, selection and cloning of the
best antibodies, removal of the worst antibodies,
affinity maturation of the best immune cells and the
generation of a diverse set of antibodies (Dasgupta
and Nino, 2009). The main steps of the algorithm are
initialization, selection, cloning, affinity maturation
(the process of variation and selection achieved
through hyper-mutation) and receptor editing. The
last four steps are repeated until a stopping criterion
is met.
As in the DE case, the initialization is based on
the Gaussian distribution. In the selection step, the
best 30% of the population is cloned 10 times. After
that, each clone is hyper-mutated and its affinity
(computed using an affinity function similar to the
fitness function used in EAs) is determined. The
mutated clones with the highest affinity are selected
for introduction into the population. In the last step,
ArtificialIntelligenceModellingMethodologiesAppliedtoaPolymerizationProcess
45
5% of the population with the worst affinity is
replaced with newly generated individuals.
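A compact sketch of one such iteration, using the ratios given above (best 30% cloned 10 times, worst 5% replaced), is shown below; the Gaussian hyper-mutation and the minimization-style affinity are simplifying assumptions made for the example.

```python
import numpy as np

def clonal_selection_step(pop, affinity, objective, rng,
                          select_frac=0.3, n_clones=10,
                          replace_frac=0.05, sigma=0.1):
    """One iteration: clone the best 30%, hyper-mutate the clones, keep the
    best mutated clone of each parent if it improves the affinity, then
    replace the worst 5% with newly generated individuals."""
    NP, D = pop.shape
    order = np.argsort(affinity)                 # lower affinity value = better
    n_sel = max(1, int(select_frac * NP))
    for i in order[:n_sel]:
        clones = pop[i] + sigma * rng.normal(size=(n_clones, D))  # hyper-mutation
        clone_aff = np.array([objective(c) for c in clones])
        best_clone = np.argmin(clone_aff)
        if clone_aff[best_clone] < affinity[i]:
            pop[i], affinity[i] = clones[best_clone], clone_aff[best_clone]
    # receptor editing: replace the worst individuals with fresh ones
    n_rep = max(1, int(replace_frac * NP))
    worst = np.argsort(affinity)[-n_rep:]
    pop[worst] = rng.normal(size=(n_rep, D))
    affinity[worst] = np.array([objective(ind) for ind in pop[worst]])
    return pop, affinity
```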
The CS-NN algorithm (Dragoi et al., 2012c) uses
the same type of ANN (feed-forward multilayer
perceptron) and the same encoding procedure as in
SADE-NN-2. In this manner, the differences
obtained between the two approaches are driven
only by the optimization procedures (DE or CS) and
their effectiveness can be assessed in a meaningful
way.
The characteristics of CS-NN that distinguish it
from other CS variants (apart from its combination
with neural networks) are: the introduction of the
opposition-based principle in the initialization phase
and the introduction of a hyper-mutation operator
combining three types of hyper-mutation (Gaussian,
non-uniform and pair-wise interchange), selected
by a random procedure.
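One possible form of such a combined hyper-mutation operator is sketched below; the concrete Gaussian and non-uniform formulas are assumptions, and only the random choice among the three mutation types follows the description above.

```python
import numpy as np

def hyper_mutate(ind, rng, generation, max_generations, sigma=0.1, b=2.0):
    """Randomly apply one of three hyper-mutation operators to a clone:
    Gaussian, non-uniform (shrinking with the generation count) or
    pair-wise interchange of two positions."""
    clone = ind.copy()
    choice = rng.integers(3)
    if choice == 0:                               # Gaussian hyper-mutation
        clone += sigma * rng.normal(size=clone.shape)
    elif choice == 1:                             # non-uniform hyper-mutation
        shrink = (1.0 - generation / max_generations) ** b
        clone += shrink * rng.uniform(-1.0, 1.0, size=clone.shape)
    else:                                         # pair-wise interchange
        i, j = rng.choice(clone.size, size=2, replace=False)
        clone[i], clone[j] = clone[j], clone[i]
    return clone
```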
3.3 Modelling with Support Vector
Machines
One of the main advantages of SVMs is the small
number of parameters that the user has to choose:
the type of kernel with its parameters and a cost
parameter which defines the balance between
tolerance for training errors and generalization
capability. For our case study, two support vector
regression (SVR) models were designed, one for
each parameter of interest: x and Mn.
The experiments were performed using the
implementation provided by the LIBSVM library
(Chang and Lin, 2011), using the ε-SVR or the
ν-SVR variants. In ε-SVR, ε is a parameter of the
loss function, taking values in the [0, ∞) domain. Also,
the radial basis function (RBF) kernel was selected.
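For illustration, an ε-SVR with an RBF kernel can be configured as follows; scikit-learn's SVR is used here merely as a convenient wrapper around the LIBSVM solver, the parameter values correspond to model S1 reported later in Table 3, and the data arrays are placeholders for the normalized (I0, T, t) → x samples.

```python
import numpy as np
from sklearn.svm import SVR

# Placeholder data standing in for the normalized (I0, T, t) -> x samples
X_train = np.random.rand(100, 3)
y_train = np.random.rand(100)

# epsilon-SVR with an RBF kernel (parameter values of model S1 in Table 3)
model_x = SVR(kernel="rbf", C=256.0, gamma=2.0, epsilon=0.1)
model_x.fit(X_train, y_train)
predicted = model_x.predict(X_train)
```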
3.4 Support Vector Machines and
Differential Evolution
The fourth approach for modelling the styrene
polymerization process is a novel algorithm
combining DE with SVM (DE-SVM). DE acts as an
optimizer, while the SVM models the process.
Unlike in SADE-NN-2, in DE-SVM,
DE performs only parameter optimization. The
training procedure is the classic one used for SVMs.
The same self-adaptive DE version used in
SADE-NN-2 was also employed in DE-SVM. Thus,
the performance differences obtained are solely
determined by the performance of the model and not
by the ability of the optimizer to determine the best
solution. Whereas the population was formed of neural
models in the case of SADE-NN-2, in DE-SVM
the individuals forming the population are
lists of SVM parameters: SVM type
(ν-SVR, ε-SVR), kernel type (linear, polynomial,
RBF or sigmoid), degree (applicable only to the
polynomial kernel), γ (a coefficient of the
polynomial, RBF and sigmoid kernels) and C, the cost
parameter.
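A sketch of how such an individual might be decoded and scored is given below; the index-to-parameter mapping, the bounds and the use of a held-out validation MSE as the objective are illustrative assumptions rather than the exact DE-SVM scheme.

```python
import numpy as np
from sklearn.svm import SVR, NuSVR
from sklearn.metrics import mean_squared_error

KERNELS = ["linear", "poly", "rbf", "sigmoid"]

def decode_individual(v):
    """Map a real-valued DE individual to an SVR configuration.
    The index ranges and parameter bounds are illustrative assumptions."""
    svm_type = int(abs(v[0]) % 2)                  # 0: nu-SVR, 1: epsilon-SVR
    kernel = KERNELS[int(abs(v[1]) % len(KERNELS))]
    degree = 2 + int(abs(v[2]) % 2)                # only used by the poly kernel
    gamma = 10.0 ** float(np.clip(v[3], -3, 1))
    C = 10.0 ** float(np.clip(v[4], -2, 3))
    if svm_type == 0:
        return NuSVR(kernel=kernel, degree=degree, gamma=gamma, C=C)
    return SVR(kernel=kernel, degree=degree, gamma=gamma, C=C)

def fitness(v, X_train, y_train, X_val, y_val):
    """Train the decoded SVM with the standard SVM procedure and
    return the validation MSE used as the DE objective."""
    model = decode_individual(v).fit(X_train, y_train)
    return mean_squared_error(y_val, model.predict(X_val))
```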
4 RESULTS AND DISCUSSION
After gathering the data describing the process, a
series of simulations with the four considered
algorithms were performed. In the case of the
modelling approaches based on ANNs (SADE-NN-2
and CS-NN), some limitations to the structure of the
network were imposed, in order to reduce the
complexity of the encoded individuals and,
therefore, to reduce the computational effort.
Consequently, for the hidden layers, it was
considered that a network with one hidden layer can
efficiently model the polymerization process. This
restriction is based on the authors’ experience: in the
majority of our studies a network with one hidden
layer provided satisfactory results. Also, it was
considered that 30 neurons in the hidden layer are
sufficient. A lower limit was imposed as well: the
algorithms can generate networks with no hidden
layer or with one hidden layer containing between
4 and 30 neurons.
Initially, with both SADE-NN-2 and CS-NN, a
set of models with two outputs corresponding to the
parameters of the process, x and Mn, were generated.
Although the mean squared error (MSE) computed
on the normalized data had acceptable values: 0.081
and 0.125 in the training phase for SADE-NN-2 and
CS-NN, respectively, and 0.122 and 0.158 in the
testing phase, the average relative errors (AREs)
were not acceptable, exceeding 40% for some of the
outputs. Therefore, for each output a separate neural
model was created. The five best results are
listed in Table 1 for SADE-NN-2 and in Table 2 for
CS-NN.
As can be observed from Tables 1 and 2, for
both the x and Mn parameters, the best testing
error is obtained with CS-NN (models CN1 and CN6,
respectively). These observations are also consistent
with the average values.
Concerning the SVM models, Table 3 presents
the best results obtained with ε-SVR. A grid-search
approach was used to find the best values of the
model parameters. It is recommended to use
exponentially growing sequences of C and γ in order
to identify good parameters (Hsu et al., 2010).
SIMULTECH2014-4thInternationalConferenceonSimulationandModelingMethodologies,Technologiesand
Applications
46
Table 1: Best results obtained with SADE-NN-2 for each process parameter.

Output variable | Topology | MSE training | MSE testing | Model id
x  | 3:19:1  | 0.0128 | 0.0094 | DN1
x  | 3:19:1  | 0.0105 | 0.0104 | DN2
x  | 3:11:1  | 0.0147 | 0.0132 | DN3
x  | 3:4:1   | 0.0136 | 0.014  | DN4
x  | 3:19:1  | 0.0109 | 0.077  | DN5
x  | Average | 0.0125 | 0.0248 |
Mn | 3:19:1  | 0.0039 | 0.0036 | DN6
Mn | 3:11:1  | 0.0058 | 0.0048 | DN7
Mn | 3:11:1  | 0.0045 | 0.0051 | DN8
Mn | 3:14:1  | 0.0047 | 0.0073 | DN9
Mn | 3:11:1  | 0.0118 | 0.0125 | DN10
Mn | Average | 0.0047 | 0.0045 |
Table 2: Best results obtained with CS-NN for each process parameter.

Output variable | Topology | MSE training | MSE testing | Model id
x  | 3:20:1  | 0.0069 | 0.0083 | CN1
x  | 3:19:1  | 0.0094 | 0.0103 | CN2
x  | 3:17:1  | 0.0091 | 0.0115 | CN3
x  | 3:11:1  | 0.0127 | 0.0128 | CN4
x  | 3:16:1  | 0.0099 | 0.0159 | CN5
x  | Average | 0.0096 | 0.0117 |
Mn | 3:8:1   | 0.0009 | 0.0016 | CN6
Mn | 3:10:1  | 0.0012 | 0.0017 | CN7
Mn | 3:12:1  | 0.0327 | 0.0031 | CN8
Mn | 3:16:1  | 0.003  | 0.0041 | CN9
Mn | 3:20:1  | 0.0051 | 0.0064 | CN10
Mn | Average | 0.0085 | 0.0033 |
The chosen values for the experiments are (2⁻³, 2¹⁰) for
C and (2⁻³, 2³) for γ, with a step of 0.5 in the exponent.
The low MSE values show the good performance obtained with the
RBF kernel for both output variables, while the
polynomial kernel is not suited for
modelling the Mn variable.
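A grid search over these exponential sequences can be expressed as follows; the 5-fold cross-validation and the placeholder data are assumptions, while the C and γ ranges and the 0.5 exponent step follow the text.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# Exponentially growing grid, step 0.5 in the exponent (Hsu et al., 2010):
# C in 2^-3 .. 2^10, gamma in 2^-3 .. 2^3
param_grid = {
    "C": 2.0 ** np.arange(-3.0, 10.5, 0.5),
    "gamma": 2.0 ** np.arange(-3.0, 3.5, 0.5),
}
search = GridSearchCV(SVR(kernel="rbf", epsilon=0.1), param_grid,
                      scoring="neg_mean_squared_error", cv=5)

X_train = np.random.rand(60, 3)   # placeholder normalized inputs
y_train = np.random.rand(60)      # placeholder normalized outputs
search.fit(X_train, y_train)
best_C, best_gamma = search.best_params_["C"], search.best_params_["gamma"]
```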
For the combination of DE with SVM, the best
models obtained for the process parameters are
presented in Table 4.
In order to determine the efficiency of the best
models, the coefficient of determination (r²) was
also computed (Table 5).
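For reference, the three error measures used in this section (MSE, ARE and r²) can be computed on the normalized data as in the generic sketch below, which is not tied to any particular model.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error."""
    return float(np.mean((y_true - y_pred) ** 2))

def average_relative_error(y_true, y_pred, eps=1e-12):
    """Average relative error, in percent."""
    return float(np.mean(np.abs(y_true - y_pred) / (np.abs(y_true) + eps)) * 100.0)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```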
By analyzing the results, it can be observed that
in some cases the coefficient of determination is not
closely correlated with the MSE. This can be
explained by the data distribution, since the system
has different dynamics for each temperature-initiator
combination.
In order to visualize the differences between the
predicted and the expected data for the modelling of the
x parameter, a set of figures with testing data was generated.
Table 3: Results obtained with ε-SVR for the output variables.

Output variable | Method parameters | MSE training | MSE testing | Model id
x  | C = 256; γ = 2; RBF kernel; ε = 0.1                          | 0.004 | 0.004 | S1
x  | C = 0.707; γ = 0.125; polynomial kernel, degree 2; ε = 0.25  | 0.021 | 0.02  | S2
x  | C = 0.353; γ = 0.125; polynomial kernel, degree 3; ε = 0.25  | 0.022 | 0.02  | S3
Mn | C = 512; γ = 2; RBF kernel; ε = 0.1                          | 0.09  | 0.27  | S4
Mn | C = 2.828; γ = 0.125; polynomial kernel, degree 2; ε = 3     | 7.5   | 2.4   | S5
Mn | C = 2.828; γ = 0.125; polynomial kernel, degree 3; ε = 3     | 7.5   | 2.4   | S6
Table 4: Results obtained with DE-SVM for the output variables.

Parameter | Method parameters | MSE training | MSE testing
x  | ν-SVR, RBF kernel; C = 0.28; γ = 3.341 | 0.0085 | 0.0075
Mn | ν-SVR, RBF kernel; C = 5.96; γ = 2.212 | 0.0010 | 0.0014
Table 5: The performance of the best models obtained with the four methods.

Parameter | Model | r² training | r² testing
x  | DN1    | 0.9617 | 0.9580
x  | CN1    | 0.9768 | 0.9581
x  | S1     | 0.96   | 0.9304
x  | DE-SVM | 0.9714 | 0.9656
Mn | DN6    | 0.9871 | 0.9800
Mn | CN6    | 0.9915 | 0.9731
Mn | S4     | 0.99   | 0.9813
Mn | DE-SVM | 0.9936 | 0.9767
Since different combinations of
temperature and initial value of the initiator were
tested, two significant examples are given below:
temperature of 368K and 10 mol/l of BPO initiator
(Figure 1) and temperature of 338K and 50 mol/l of
initiator (Figure 2).
Concerning the Mn modelling, the DE-SVM
approach is the best in terms of MSE testing. Similar
to the x parameter, a series of figures for two
temperature-initiator combinations was generated
(Figures 3 and 4).
ArtificialIntelligenceModellingMethodologiesAppliedtoaPolymerizationProcess
47
Figure 1: Comparison between the predictions of x
obtained with the four methods and the expected data
when the process parameters are 368K (temperature) and
10 mol/l (initial value of initiator).
Figure 2: Comparison between the predictions of x
obtained with the four methods and the expected data
when the process parameters are 353K (temperature) and
50 mol/l (initial value of initiator).
Figure 3: Comparison between the predictions of Mn
obtained with the four methods and the expected data
when the process parameters are 348K (temperature) and
15 mol/l (initial value of initiator).
Figure 4: Comparison between the predictions of Mn
obtained with the four methods and the expected data
when the process parameters are 383K (temperature) and
20 mol/l (initial value of initiator).
5 CONCLUSIONS
Four modelling methodologies were developed and
tested on a complex chemical process, i.e. free
radical polymerization of styrene. They include
ANN and SVM as models, structurally and
parametrically optimized with DE and CS. Although
both neural network and support vector machine
models are found suitable for the polymerization
process, selecting one of the techniques relies on the
user's experience. However, it must be mentioned that
the DE-SVM combination deserves special attention
due to its accessibility and the accuracy observed in the
results.
ACKNOWLEDGEMENTS
This work was supported by the “Partnership in
priority areas – PN-II” program, financed by ANCS,
CNDI - UEFISCDI, project PN-II-PT-PCCA-2011-
3.2-0732, No. 23/2012.
REFERENCES
Abdul Hamid, M. B. & Abdul Rahman, T. K., 2010. Short
Term Load Forecasting Using an Artificial Neural
Network Trained by Artificial Immune System
Learning Algorithm, In 12th International Conference
on Computer Modelling and Simulation (UKSim).
Ahmad, W., Narayanan, A. 2011, "Principles and Methods
of Artificial Immune System Vaccination of Learning
Systems," In Artificial Immune Systems, P. Liu, G.
Nicosia, T. Stibor, eds., Springer Berlin Heidelberg,
SIMULTECH2014-4thInternationalConferenceonSimulationandModelingMethodologies,Technologiesand
Applications
48
pp. 268-281.
Almeida, L. M., Ludermir, T. B., 2008a. An evolutionary
approach for tuning artificial neural network
parameters. In Proceedings of the Third International
Workshop on Hybrid Artificial Intelligence System
(HAIS’08).
Almeida, L. M.; Ludermir, T. B., 2008b. An improved
method for automatically searching near-optimal
artificial neural networks. In IEEE International Joint
Conference on Neural Networks (IJCNN’08).
Burges, C., 1998. A Tutorial on Support Vector Machines
for Pattern Recognition, Data Mining and Knowledge
Discovery 2, 121-167.
Cartwright, H, Curteanu, S., 2013. Neural networks
applied in chemistry. II. Neuro-evolutionary
techniques in process modeling and optimization.
Industrial & Engineering Chemistry Research, doi:
dx.doi.org/10.1021/ie4000954.
Castro, L.N., Timmis, J.I. 2003. Artificial immune
systems as a novel soft computing paradigm. Soft
Computing - A Fusion of Foundations, Methodologies
and Applications, 7, (8) 526-544.
Chang, C. C., Lin, C. J., 2011. LIBSVM: a library for
support vector machines. ACM Transactions on
Intelligent Systems and Technology, 2, (3), 27.
Cozma, P., Mamaliga, I., Dragoi, E.N., Curteanu, S.,
Wukovits, W., Friedl, A., Gavrilescu, M., 2013.
Modelling and Optimization of CO2 Absorption in
Pneumatic Contactors using Artificial Neural
Networks Developed with Clonal Selection based
Algorithm, In 7th International Conference on
Environmental Engineering and Management
Integration Challenges for Sustainability.
Curteanu, S. 2003. Modeling and simulation of free
radical polymerization of styrene under semibatch
reactor conditions. Central European Journal of
Chemistry, 1, (1) 69-90.
Curteanu, S., Leon, F., Furtuna, R., Dragoi, E. N.,
Curteanu, N., 2010. Comparison between different methods
for developing neural network topology applied to a
complex polymerization process. In The 2010
International Joint Conference on Neural Networks
(IJCNN).
Dasgupta, D., Nino, F. 2009. Immunological computation.
Theory and Applications, New York, CRC Press.
Dragoi, E.N., Curteanu, S., Fissore, D. 2012a. Freeze-
drying modeling and monitoring using a new neuro-
evolutive technique. Chemical Engineering Science,
72, (0) 195-204.
Dragoi, E.N., Curteanu, S., Fissore, D. 2013b. On the Use
of Artificial Neural Networks to Monitor a
Pharmaceutical Freeze-Drying Process. Drying
Technology, 31, (1) 72-81.
Dragoi, E.N., Curteanu, S., Galaction, A.I., Cascaval, D.
2013a. Optimization methodology based on neural
networks and self-adaptive differential evolution
algorithm applied to an aerobic fermentation process.
Applied Soft Computing, 13, (1) 222-238.
Dragoi, E.N., Curteanu, S., Leon, F., Galaction, A.I.,
Cascaval, D. 2011. Modeling of oxygen mass transfer
in the presence of oxygen-vectors using neural
networks developed by differential evolution
algorithm.
Engineering Applications of Artificial
Intelligence, 24, (7) 1214-1226.
Dragoi, E.N., Curteanu, S., Lisa, C. 2012b. A neuro-
evolutive technique applied for predicting the liquid
crystalline property of some organic compounds.
Engineering Optimization, 44, (10) 1261-1277.
Dragoi, E.N., Suditu, G.D., Curteanu, S. 2012c. Modeling
methodology based on artificial immune system
algorithm and neural networks applied to removal of
heavy metals from residual waters. Environmental
Engineering and Management Journal, 11, (11) 1907-
1914.
Hsu, C. W., Chang, C. C., Lin, C. J. 2010. A practical
guide to support vector classification. Technical
report, Department of Computer Science, National
Taiwan University.
Lahiri, S.K., Ghanta, K.C. 2009. Artificial neural network
model with the parameter tuning assisted by a
differential evolution technique: The study of the hold
up of the slurry flow in a pipeline. Chemical Industry
and Chemical Engineering Quarterly, 15, (2) 103-117.
Noor, R.A.M, Ahmad, Z., Don, M.M., Uzir, M.H. 2010.
Modelling and control of different types of
polymerization processes using neural networks
technique: A review. The Canadian Journal of
Chemical Engineering, 88(6) 1065 – 1084.
Price, K., Storn, R., Lampinen, J. 2005. Differential
evolution. A practical approach to global optimization
Berlin, Springer.
Subudhi, B., Jena, D. 2009. An improved differential
evolution trained neural network scheme for nonlinear
system identification. International Journal of
Automation and Computing, 6, (2) 137-144.
Tao, L., Kong, X., Zhong, W., Qian, F. 2012. Modified
Self-adaptive Immune Genetic Algorithm for
Optimization of Combustion Side Reaction of p-
Xylene Oxidation. Chinese Journal of Chemical
Engineering, 20, (6) 1047-1052.
Wang, Z., Djuric, N., Crammer, K., Vucetic, S. 2011.
Trading representability for scalability: Adaptive
multi-hyperplane machine for nonlinear classification,
ACM SIGKDD Conference on Knowledge Discovery
and Data Mining.
Zhang, K., Lan, L., Wang, Z., Moerchen, F. 2012. Scaling
up kernel SVM on limited resources: A low-rank
linearization approach, International Conference on
Artificial Intelligence and Statistics (AISTATS).
ArtificialIntelligenceModellingMethodologiesAppliedtoaPolymerizationProcess
49