when the suggested algorithm is a suitable
replacement for monitoring landslide deformation.
Wu et al. (Wu L, 2022) used a time-series approach to
decompose cumulative landslide deformation into
periodic and trend components, with cubic
polynomials used to predict the trend deformation.
By accounting for the periodic variation of rainfall
and reservoir level, the proposed model captured the
characteristics of the data better than GRU and
improved prediction accuracy; the C-GRU model
attained a lower mean squared error and a marked
improvement in landslide forecasting accuracy.
In summary, domestic and international research
trends show that slope deformation studies focus
mainly on displacement, that slope displacement is
influenced by both internal and environmental
factors, and that intelligent algorithms such as
machine learning and deep learning are widely
applied to slope displacement prediction with good
results. However, many factors affect dam
deformation, and dam deformation prediction must
consider a more comprehensive set of factors.
Therefore, in this paper a CNN is constructed and
optimized, traditional indicators such as the
infiltration line and reservoir level are considered,
and weather factors such as wind speed and
temperature are incorporated.
2 CNN MODEL CONSTRUCTION
2.1 Tailings Dam Displacement
Influencing Factors
Tailings dam displacement is driven by the dam's
own geological structure, topography, external
human activities, climate, runoff and other
conditions, so that an originally stable slope can
suddenly undergo strong deformation. The factors
affecting tailings dam deformation can be divided
into three categories: first, internal factors, including
the infiltration line, reservoir water level, dam
settlement and so on; second, environmental factors,
including weathering, rainfall and temperature; third,
human factors, including mining operations.
2.2 CNN Model
The fundamental structure of a CNN consists of an
input layer, convolutional layers, pooling layers,
fully connected layers and an output layer.
Generally, multiple convolutional and pooling layers
are adopted and arranged alternately: each
convolutional layer is followed by a pooling layer,
which is in turn followed by the next convolutional
layer. Each neuron on the output feature surface of a
convolutional layer is locally connected to its input,
and the input value of the neuron is obtained by
weighting the local input with the corresponding
connection weights and adding a bias.
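As a concrete illustration, the alternating convolution/pooling arrangement described above can be sketched in NumPy. The input size, kernel sizes and random values below are hypothetical examples, not the paper's actual network:

```python
import numpy as np

def conv2d(x, kernel, stride=1):
    """Valid 2-D convolution of a single feature surface with one kernel.
    The output size follows (iMapN - CWindow) / CInterval + 1."""
    k = kernel.shape[0]
    out_n = (x.shape[0] - k) // stride + 1
    out = np.zeros((out_n, out_n))
    for i in range(out_n):
        for j in range(out_n):
            patch = x[i*stride:i*stride+k, j*stride:j*stride+k]
            out[i, j] = np.sum(patch * kernel)  # weighted local input
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling."""
    out_n = x.shape[0] // size
    out = np.zeros((out_n, out_n))
    for i in range(out_n):
        for j in range(out_n):
            out[i, j] = x[i*size:(i+1)*size, j*size:(j+1)*size].max()
    return out

# Alternating conv -> pool -> conv -> pool (hypothetical sizes)
x = np.random.rand(28, 28)        # one input feature surface
f1 = max_pool(conv2d(x, np.random.rand(5, 5)))  # (28-5)/1+1 = 24, pooled to 12
f2 = max_pool(conv2d(f1, np.random.rand(3, 3))) # (12-3)/1+1 = 10, pooled to 5
print(f1.shape, f2.shape)         # (12, 12) (5, 5)
```

Biases and nonlinear activations are omitted here for brevity; the sketch shows only how feature surfaces shrink through the alternating layers.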
2.3 Convolutional Layers
The convolutional layers of a CNN extract different
features of the input through convolution operations.
The first convolutional layer extracts low-level
features such as edges, lines and corners, while
deeper convolutional layers extract higher-level
features. Each convolutional layer in a CNN satisfies
the following relationship with respect to the size of
each output feature surface (namely, the number of
neurons):
oMapN = (iMapN − CWindow) / CInterval + 1 (1)
where oMapN is the size of each output feature
surface; iMapN is the size of each input feature
surface; CWindow is the size of the convolution
kernel; and CInterval is the sliding step of the
convolution kernel in the preceding layer. In general,
Equation (1) must evaluate to an integer; otherwise
the CNN structure requires additional processing.
The number of trainable parameters CParams in each
convolutional layer satisfies Equation (2):
CParams = (iMap × CWindow + 1) × oMap (2)
where oMap is the number of output feature surfaces
of each convolutional layer and iMap is the number
of input feature surfaces; the 1 denotes the bias,
which is shared within the same output feature
surface.
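Equations (1) and (2) can be checked with a small Python sketch. The symbol names follow the text; the layer sizes used below are hypothetical examples, not the paper's model:

```python
def output_size(iMapN: int, CWindow: int, CInterval: int) -> int:
    """Equation (1): size of each output feature surface."""
    span = iMapN - CWindow
    if span % CInterval != 0:
        # Equation (1) is not an integer; the network structure
        # would require additional processing (e.g. padding).
        raise ValueError("non-integer output size")
    return span // CInterval + 1

def trainable_params(iMap: int, CWindow: int, oMap: int) -> int:
    """Equation (2): trainable parameters of one convolutional layer.
    Here CWindow is the number of weights per kernel (e.g. 5*5 = 25);
    the +1 is the bias shared within each output feature surface."""
    return (iMap * CWindow + 1) * oMap

print(output_size(28, 5, 1))          # (28 - 5) / 1 + 1 = 24
print(trainable_params(1, 5 * 5, 6))  # (1 * 25 + 1) * 6 = 156
```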
Within a CNN, the greater the depth and the number
of feature surfaces, the larger the feature space the
network can represent and the stronger its learning
ability; at the same time, however, the computation
becomes more complex and the network more prone
to overfitting. Thus, in practical applications, the
depth of the network, the number of feature surfaces,
the size of the convolution kernel and the sliding step
of the convolution should be selected appropriately
in order to obtain a good model while shortening the
training time.
ANIT 2023 - The International Seminar on Artificial Intelligence, Networking and Information Technology
302