blue) demonstrates the model’s precision in making
short-term predictions. The performance over these
30 days is a testament to the model’s utility in a
practical trading context.
5.3 Synthesizing Insights from the
Graphs
When viewed together, Figure 2 and Figure 3 tell a
comprehensive story about the LSTM model’s
performance. Figure 2 confirms the model’s ability to
learn from historical data and make accurate
predictions during the training phase, while Figure 3
demonstrates that the model maintains this accuracy
when applied to the critical short-term prediction
window of the last 30 days.
The consistency across both the training and
testing phases seen in Figure 2, together with the
short-term precision evident in the 30-day window of
Figure 3, provides a compelling case for the model's
efficacy. The detailed evaluation of the model’s
predictions against actual stock prices offers a
convincing argument for its application in the
financial industry, especially within the volatile
Chinese Baijiu market.
In the ensuing sections, we will delve deeper into
the statistical validation of the model’s performance
and explore its potential impacts on investment
strategies within the Chinese Baijiu industry.
5.4 Root Mean Square Error (RMSE)
RMSE measures the square root of the average
squared differences between the predicted and actual
values. This metric is particularly sensitive to large
errors, meaning that higher values of RMSE indicate
larger errors being made by the model. A lower
RMSE value is preferable as it indicates that the
model’s predictions are closer to the actual stock
prices. In the context of our LSTM model, a
comparatively low RMSE would suggest that the
model is capable of making predictions with a high
degree of precision (Willmott & Matsuura, 2005).
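As an illustrative sketch (not the paper's actual evaluation code), the RMSE definition above can be computed directly; the price values here are hypothetical placeholders:

```python
import math

def rmse(actual, predicted):
    """Root Mean Square Error: square root of the mean squared difference.
    Squaring penalizes large individual errors more heavily than small ones."""
    errors = [(a - p) ** 2 for a, p in zip(actual, predicted)]
    return math.sqrt(sum(errors) / len(errors))

# Hypothetical daily closing prices and model predictions (illustrative only).
actual = [100.0, 102.0, 101.0, 105.0]
predicted = [101.0, 101.0, 103.0, 104.0]
print(round(rmse(actual, predicted), 4))  # 1.3229
```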
5.5 Mean Absolute Error (MAE)
MAE, on the other hand, calculates the average of the
absolute differences between the predicted and actual
values. Unlike RMSE, MAE treats all errors equally,
providing a straightforward measure of prediction
accuracy without excessively penalizing larger errors.
A smaller MAE value would indicate that on average,
the model’s predictions deviate less from the actual
values, which is desirable in a stock price prediction
model (Chai & Draxler, 2014).
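A matching sketch for MAE, again with hypothetical values, shows how it weights every error equally rather than quadratically:

```python
def mae(actual, predicted):
    """Mean Absolute Error: average magnitude of the errors,
    with every error weighted equally (no quadratic penalty)."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Same hypothetical prices as before; one error of 2 and three of 1.
actual = [100.0, 102.0, 101.0, 105.0]
predicted = [101.0, 101.0, 103.0, 104.0]
print(mae(actual, predicted))  # 1.25
```

Note that on this toy data MAE (1.25) is below RMSE (about 1.32), reflecting RMSE's extra penalty on the single larger error.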
5.6 Mean Absolute Percentage Error
(MAPE)
MAPE expresses the average absolute error as a
percentage of the actual values. This metric is
particularly useful when one needs to understand
the size of the prediction errors relative to the
actual stock prices. MAPE is beneficial for
comparative analysis and for communicating the
model’s performance in percentage terms, which can
be intuitively understood by a wide range of
stakeholders. A lower MAPE indicates that the
model’s predictions are highly accurate in relative
terms.
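MAPE can be sketched the same way (values again hypothetical); note that the division makes it undefined whenever an actual value is zero, which is not a concern for stock prices:

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error: average absolute error expressed
    as a percentage of the actual value (undefined if any actual is 0)."""
    total = sum(abs((a - p) / a) for a, p in zip(actual, predicted))
    return 100.0 * total / len(actual)

# Same hypothetical prices; errors of ~1-2% of the price level.
actual = [100.0, 102.0, 101.0, 105.0]
predicted = [101.0, 101.0, 103.0, 104.0]
print(round(mape(actual, predicted), 2))  # 1.23
```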
When these statistical measures are considered
together, they offer a comprehensive picture of the
LSTM model’s predictive performance. For instance,
if the model achieves a low RMSE, it suggests that
large individual prediction errors are rare, which is
complemented by a low MAE indicating consistent
accuracy across all predictions. A low MAPE would
further confirm the model’s precision in relative
terms, giving confidence that the predictions are
generally close to the actual stock prices.
In conclusion, the statistical analysis using RMSE,
MAE, and MAPE provides a robust framework for
evaluating the LSTM model’s accuracy. For investors
and analysts in the Chinese Baijiu industry, these
metrics are crucial for determining the reliability and
practical utility of the model’s predictions in real-world
financial decision-making scenarios. If the LSTM
model achieves favorable scores across these metrics,
it underscores its potential as a valuable tool for
forecasting and potentially for guiding profitable
investment strategies.
5.7 How Different Parameters Affect
the Metric Performance
Table 1 presents the model performance metrics
under two different sets of training parameters: one
with a time-step of 30, batch size of 5, and epochs of
60, and another with a time-step of 60, batch size of
1, and a single epoch. The former parameter set yields
lower RMSE, MAE, and MAPE values for both
training and testing datasets, indicating more accurate
predictions. Conversely, the latter set results in higher
error metrics, suggesting suboptimal performance.
This contrast underscores the critical role of
parameter optimization in enhancing the LSTM
model’s predictive accuracy (Hamilton, 2020).
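Although the paper's training pipeline is not reproduced here, the effect of the time-step parameter on data preparation can be sketched as follows: a longer time-step produces fewer, longer input windows, while batch size and epochs govern the subsequent fitting loop. All names and the placeholder series below are illustrative assumptions:

```python
def make_windows(series, time_step):
    """Slice a price series into overlapping (input window, next value)
    pairs, as is typical when preparing sequences for an LSTM."""
    X = [series[i:i + time_step] for i in range(len(series) - time_step)]
    y = [series[i + time_step] for i in range(len(series) - time_step)]
    return X, y

# The two hypothetical configurations contrasted in Table 1.
config_a = {"time_step": 30, "batch_size": 5, "epochs": 60}
config_b = {"time_step": 60, "batch_size": 1, "epochs": 1}

prices = [100.0 + 0.25 * i for i in range(200)]  # placeholder price series
X_a, _ = make_windows(prices, config_a["time_step"])
X_b, _ = make_windows(prices, config_b["time_step"])
print(len(X_a), len(X_b))  # 170 140
```

Doubling the time-step from 30 to 60 here removes 30 training samples from a 200-point series; combined with a single epoch and batch size of 1, the second configuration gives the model far fewer weight updates over the data, consistent with the weaker metrics reported in Table 1.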