changes of a fixed group of goods/services commonly
consumed by the local community. The Consumer
Price Index (CPI) measures the average change in the
price paid by consumers for consumer goods and
services (Yaziz, Mohd, and Mohamed 2017).
Inflation is defined as a situation in which the prices of goods generally increase continuously. To measure inflation, Statistics Indonesia (BPS) uses the Consumer Price Index (CPI) (Bonar, Ruchjana, and Darmawan 2017). Therefore, predicting the Consumer Price Index is very important. This research is expected to be widely useful, both for local governments and for academics as study material, especially in fields related to economics and public policy.
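For concreteness, an index of this kind is commonly computed as a Laspeyres-type formula over a fixed basket, with the inflation rate derived from the change in the index; the exact weighting scheme used by BPS may differ, so the formula below is only an illustrative form, where p_{i,t} is the price of item i in period t and q_{i,0} its base-period quantity:

\[
\mathrm{CPI}_t = \frac{\sum_i p_{i,t}\, q_{i,0}}{\sum_i p_{i,0}\, q_{i,0}} \times 100,
\qquad
\text{inflation}_t = \frac{\mathrm{CPI}_t - \mathrm{CPI}_{t-1}}{\mathrm{CPI}_{t-1}} \times 100\%.
\]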
In previous research, (Wanto, Zarlis, et al. 2017) predicted the Consumer Price Index (CPI) of the foodstuffs group using a backpropagation artificial neural network and the Fletcher-Reeves conjugate gradient method. That research achieved an accuracy of 75% with the backpropagation method, with the best architecture being 12-15-1, while the Fletcher-Reeves method reached an accuracy level of 67%, also using the 12-15-1 architectural model. The drawback of that research is the decreasing accuracy of the results, which was probably caused by an inappropriate selection of the network architecture.
2 RUDIMENTARY
2.1 Backpropagation Algorithm
Artificial Neural Network (ANN) is a computational
model, which is based on Biological Neural Network.
Artificial Neural Network is often simply called a Neural Network (NN) (Sumijan et al. 2016). The Backpropagation (BP) algorithm was used to develop the ANN model (Antwi et al. 2017). The typical topology of a BPANN (Backpropagation Artificial Neural Network) involves three layers: the input layer, where the data are introduced to the network; the hidden layer, where the data are processed; and the output layer, where the results of the given input are produced (Putra Siregar and Wanto 2017). The backpropagation training method involves the feedforward of the input training pattern, the calculation and backpropagation of the error, and the adjustment of the weights in the synapses (Tarigan et al. 2017).
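As an illustration of this training cycle (feedforward of the input pattern, backpropagation of the error, and weight adjustment), the following is a minimal Python sketch of one training step for a small sigmoid network; the learning rate and random data are illustrative assumptions only, not the configuration used in this study.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, t, W1, W2, lr=0.1):
    # 1) Feedforward: input layer -> hidden layer -> output layer
    h = sigmoid(W1 @ x)                     # hidden-layer activations
    y = sigmoid(W2 @ h)                     # network output
    # 2) Backpropagation of the error (squared-error loss, sigmoid derivative)
    delta_out = (y - t) * y * (1.0 - y)
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)
    # 3) Adjustment of the weights in the synapses
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)
    return 0.5 * np.sum((y - t) ** 2)       # error, for monitoring convergence

# Illustrative 12-15-1 architecture (the best architecture reported in the
# earlier work cited above); the data here are random placeholders.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(15, 12))
W2 = rng.normal(scale=0.1, size=(1, 15))
x, t = rng.random(12), np.array([0.5])
error = train_step(x, t, W1, W2)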
2.2 Fletcher-Reeves Algorithm
The conjugate gradient method (CGM) is a particularly efficient and simple approach, with low storage requirements, good numerical performance and global convergence properties, for solving unconstrained optimization problems (Keshtegar 2016). The conjugate gradient method, as an efficient method, is used to solve unconstrained optimization problems (Li, Zhang, and Dong 2016). The conjugate gradient (CG) method can be considered as an instance of the heavy ball method with adaptive step size (Yao and Ning 2017). In the above methods, the weights are updated at each iteration by a step in the negative gradient direction scaled by the learning rate. In the conjugate gradient algorithms, this step size is adjusted by a search function at every iteration so that the goal is reached within as few iterations as possible. The Fletcher-Reeves update (cgf) is much faster than variable learning rate algorithms and resilient backpropagation, but it requires a little more storage and computation, and suffers from the fact that the results may vary from one problem to another (Madhavan 2017).
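To make the role of the step size concrete, the sketch below shows the Fletcher-Reeves update applied to a generic differentiable objective; the backtracking (Armijo) line search and the quadratic test function are illustrative assumptions standing in for whatever step-size search and error function an actual implementation uses.

import numpy as np

def fletcher_reeves(f, grad, w, iters=100, tol=1e-8):
    g = grad(w)
    d = -g                                   # first direction: steepest descent
    for _ in range(iters):
        if np.dot(g, d) >= 0.0:              # safeguard: restart if d is not a descent direction
            d = -g
        # Backtracking (Armijo) line search for the step size alpha
        alpha, fw, slope = 1.0, f(w), np.dot(g, d)
        while f(w + alpha * d) > fw + 1e-4 * alpha * slope and alpha > 1e-12:
            alpha *= 0.5
        w = w + alpha * d
        g_new = grad(w)
        if np.linalg.norm(g_new) < tol:
            break
        # Fletcher-Reeves coefficient: ratio of squared gradient norms
        beta = np.dot(g_new, g_new) / np.dot(g, g)
        d = -g_new + beta * d                # new conjugate search direction
        g = g_new
    return w

# Illustrative use on a simple quadratic objective
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
w_min = fletcher_reeves(lambda w: 0.5 * w @ A @ w - b @ w,
                        lambda w: A @ w - b,
                        np.zeros(2))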
2.3 Resilient Propagation Algorithm
The concept of resilient propagation was introduced by Riedmiller in 1993 (Riedmiller and Braun 1993) and has been exploited in single- (Igel and Husken 2003) and two-dimensional (Tripathi and Kalra 2011; Kantsila, Lehtokangas, and Saarinen 2004) problems, where it proved its importance. A resilient propagation algorithm (RPROP) has also been proposed for multilayered feed-forward networks in the quaternionic domain and analysed exhaustively on a wide spectrum of benchmark problems containing three- or four-dimensional information and motion interpretation in space.
The propagation in this procedure is based on the sign of the partial derivatives of the error function instead of their value, as used in the backpropagation algorithm. The basic idea of the algorithm is to modify the components of the quaternionic weights by an amount Δ (the update value) so as to decrease the overall error, with the sign of the gradient of the error function indicating the direction of the weight update. Without increasing the complexity of the algorithm, the RPROP algorithm is boosted by an error-dependent weight-backtracking step, which accelerates the training speed appreciably and provides better approximation accuracy.
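The following is a minimal sketch of this sign-based update rule for real-valued weights; the increase/decrease factors (1.2 and 0.5), the step-size bounds, and the toy error function are the values commonly quoted for RPROP and are assumptions here, and the error-dependent weight-backtracking step is omitted for brevity.

import numpy as np

def rprop_update(w, grad, grad_prev, step,
                 eta_plus=1.2, eta_minus=0.5, step_min=1e-6, step_max=50.0):
    # Only the SIGN of the gradient is used, not its magnitude
    sign_change = grad * grad_prev
    # Same sign as the previous step: enlarge the update value (Delta)
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    # Sign flipped: the minimum was overshot, so shrink the update value
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # Move each weight against the sign of its gradient by its own Delta
    w = w - np.sign(grad) * step
    return w, step

# Illustrative use on a toy error function E(w) = sum(w**2)
w = np.array([1.0, -2.0, 0.5])
step = np.full_like(w, 0.1)
grad_prev = np.zeros_like(w)
for _ in range(50):
    grad = 2.0 * w                           # dE/dw of the toy error
    w, step = rprop_update(w, grad, grad_prev, step)
    grad_prev = grad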
The neural network (Arena et al. 1996; Minemoto et al. 2016) and the backpropagation algorithm (BP) (Cui, Takahashi, and Hashimoto 2013) in the quaternionic domain have been widely applied to problems dealing with three- and four-dimensional information; recently its