Figure 1: Illustration of the Fast Gradient Sign Method (FGSM), showing how the gradient is calculated and used to perturb the input data.

FGSM computes the gradient of the loss function with respect to the input data and then perturbs the input in the direction that maximizes the
loss. This method has been extended to iterative vari-
ants, such as the Basic Iterative Method (BIM) and the
Projected Gradient Descent (PGD) method, which re-
fine the adversarial example over multiple iterations,
making it even more challenging for the model to
maintain its accuracy (Kurakin et al., 2017).
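For concreteness, the standard untargeted formulations of these attacks can be written as follows (here \theta denotes the model parameters, x the input, y the target, L the loss, \epsilon and \alpha the perturbation budget and step size, and \Pi_{\mathcal{B}_\epsilon(x)} the projection onto the \epsilon-ball around x; this notation is generic rather than taken from the cited works):

x_{\mathrm{adv}} = x + \epsilon \cdot \operatorname{sign}\big(\nabla_x L(\theta, x, y)\big) \qquad \text{(FGSM)}

x^{(t+1)} = \Pi_{\mathcal{B}_\epsilon(x)}\Big( x^{(t)} + \alpha \cdot \operatorname{sign}\big(\nabla_x L(\theta, x^{(t)}, y)\big) \Big) \qquad \text{(PGD)}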
In contrast, black-box attacks assume no knowl-
edge of the model’s internal workings. Instead, they
rely on querying the model to infer information about
its behavior and construct adversarial examples based
on the observed outputs. These attacks demonstrate
that even without direct access to the model, adver-
saries can still find ways to generate perturbations that
cause significant forecasting errors (Papernot et al.,
2017).
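As a rough illustration of this query-only setting, the sketch below estimates the input gradient of a forecaster purely from model queries, using symmetric finite differences along random directions. The predict and loss callables, the sampling scheme, and all parameter values are illustrative assumptions; this is not the substitute-model construction of Papernot et al. (2017).

import numpy as np

def estimate_gradient(predict, x, y_true, loss, sigma=1e-2, n_dirs=50, rng=None):
    # Query-only gradient estimate: probe the model along random directions
    # and combine symmetric finite differences of the loss.
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(x, dtype=float)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        diff = loss(predict(x + sigma * u), y_true) - loss(predict(x - sigma * u), y_true)
        grad += diff / (2.0 * sigma) * u
    return grad / n_dirs

def black_box_step(predict, x, y_true, loss, eps=0.05):
    # One FGSM-style step that relies only on the estimated (not exact) gradient.
    g = estimate_gradient(predict, x, y_true, loss)
    return x + eps * np.sign(g)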
While adversarial attacks were initially studied in
the context of image classification, recent research
has shown that time series models are equally sus-
ceptible to these attacks (Fawaz et al., 2019). For
time series forecasting, adversarial attacks can ex-
ploit specific characteristics such as trends, season-
ality, and noise components, which are critical for
accurate prediction. For instance, Harford et al.
(2021) explored how adversarial attacks could distort
key patterns like seasonality and trends, leading to
significant prediction errors in financial time series.
They found that by introducing small, targeted per-
turbations at critical points in the time series—such
as around turning points or during periods of high
volatility—the model’s performance could degrade
dramatically. These findings suggest that even well-
trained models can be vulnerable to sophisticated ad-
versarial attacks, particularly in high-stakes domains
like finance or healthcare, where accurate forecasting
is crucial. Moreover, Zhang et al. (2021) demon-
strated that time series models could be highly sen-
sitive to adversarial noise, especially when the noise
is designed to mimic common real-world perturba-
tions such as sudden market shocks or unexpected
changes in seasonal patterns. This vulnerability high-
lights the importance of testing models against adver-
sarial scenarios that go beyond standard Gaussian or
white noise, which often fails to capture the complexity of real-world disturbances.
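As a small, self-contained illustration of this point (toy data; all values are assumed and not taken from Zhang et al.), the following contrasts unstructured Gaussian noise with a shock of equal energy concentrated at a seasonal turning point:

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
series = np.sin(2 * np.pi * t / 50) + 0.05 * rng.standard_normal(200)  # toy seasonal series

# Unstructured baseline: white Gaussian noise spread over the whole window.
gaussian = 0.1 * rng.standard_normal(200)

# Structured perturbation with the same L2 energy, concentrated as a sudden
# "shock" around a seasonal peak of the cycle (near t = 112).
shock = np.zeros(200)
shock[110:115] = 1.0
shock *= np.linalg.norm(gaussian) / np.linalg.norm(shock)

noisy_gaussian = series + gaussian
noisy_shock = series + shock  # same noise budget, very different effect on forecasts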
Several methods have been proposed to generate
adversarial attacks specifically tailored for time series
data:
• Gradient-Based Methods: These methods adapt
techniques like FGSM and PGD for time series
data by computing the gradient of the model’s
loss function with respect to the input time series.
For example, the Time Series Fast Gradient Sign
Method (TS-FGSM) perturbs data points where
the model is most sensitive, such as around inflec-
tion points or during transitions between different
regimes (Fawaz et al., 2019); a minimal sketch of one such step is given after this list.
• Decision Boundary Attacks: This approach fo-
cuses on finding points along the decision bound-
ary where the model is most likely to misclassify
or make incorrect predictions. Chen et al. (2020)
propose a decision boundary attack that leverages
domain knowledge, such as seasonality and trend
information, to craft perturbations that are more
likely to fool time series models.
• Transfer-Based Attacks: In situations where the
adversary lacks access to the model, they might
employ a transfer-based attack. This involves
training a surrogate model that mimics the behav-
ior of the target model. Adversarial examples gen-
erated for the surrogate model can often transfer
to the target model, leading to errors (Papernot
et al., 2017); a second sketch after this list illustrates this recipe.
• Pattern-Based Attacks: Recent work by Liu et
al. (2023) introduces pattern-based adversarial at-
tacks, where perturbations are designed to disrupt
specific patterns in the time series, such as sea-
sonal cycles or recurrent motifs. These attacks
are particularly effective against models that rely
heavily on recognizing and extrapolating such
patterns.
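To make the gradient-based family above concrete, the following is a minimal sketch of a single FGSM-style step against a differentiable forecaster, written in PyTorch under an assumed (input window to forecast horizon) interface. The optional mask reflects the idea of concentrating the perturbation on sensitive time steps; none of the names or defaults are taken from Fawaz et al. (2019).

import torch

def ts_fgsm_step(model, x, y, loss_fn, eps=0.05, mask=None):
    # model:   differentiable forecaster mapping (batch, window) -> (batch, horizon)
    # x, y:    input window and ground-truth horizon (tensors)
    # eps:     perturbation magnitude (L-infinity budget)
    # mask:    optional 0/1 tensor restricting the attack to selected time steps,
    #          e.g. around turning points or regime transitions
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)      # forecasting loss, e.g. MSE
    loss.backward()                      # gradient of the loss w.r.t. the input
    step = eps * x_adv.grad.sign()       # move in the loss-increasing direction
    if mask is not None:
        step = step * mask
    return (x_adv + step).detach()

Iterating this step with a projection back onto the eps-ball yields the PGD-style variant discussed earlier.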
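For the transfer-based family, the recipe is to fit a surrogate on responses obtained by querying the target and then attack the surrogate with any white-box method. A compressed sketch under the same assumed interfaces follows; the query_target callable and all hyperparameters are placeholders.

import torch

def fit_surrogate(query_target, x_pool, surrogate, loss_fn, epochs=20, lr=1e-3):
    # Step 1: label a pool of inputs by querying the black-box target forecaster,
    # then train the surrogate to imitate those responses.
    with torch.no_grad():
        y_pool = query_target(x_pool)
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(surrogate(x_pool), y_pool)
        loss.backward()
        opt.step()
    return surrogate

# Step 2: craft adversarial examples against the surrogate (e.g. with
# ts_fgsm_step above) and evaluate them on the target, relying on transferability.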
To counter adversarial attacks, several defense mech-
anisms have been proposed:
• Adversarial Training: This involves augment-
ing the training data with adversarial examples,
thereby teaching the model to recognize and re-
sist adversarial noise. This approach has proven
effective in increasing robustness against known
attack strategies, although it may be less effec-
tive against unknown or more sophisticated at-
tacks (Madry et al., 2018).
• Gradient Masking and Regularization: Tech-
niques like gradient masking make it harder for