for control (PINC), we identified three primary limitations: (I) long training times for large and/or complex state-space models, (II) limitations arising from the zero-order-hold assumption for excitation, and (III) the need for hyperparameter-sensitive loss-balancing schemes.
In this study, we introduced the domain-decoupled physics-informed neural network (DD-PINN) as a solution to these limitations. We first formulated the DD-PINN architecture, showing how it enables calculation of the physics-loss gradients in closed form, is compatible with higher-order excitation inputs, and has an initial-condition loss of zero by construction. We then compared the DD-PINN to the PINC in simulation on three benchmark systems. The results demonstrated that the DD-PINN significantly reduces training times while maintaining or surpassing the prediction accuracy of the PINC. At the same time, the self-loop prediction time of the DD-PINN remains comparable to that of the PINC.
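To make the closed-form-gradient and zero-initial-condition-loss properties concrete, the following minimal sketch illustrates the underlying idea: a network maps the initial state and excitation parameters to coefficients of time-basis functions that vanish at t = 0, so the initial condition is satisfied exactly and the time derivative needed for the physics loss is available analytically. The sinusoidal basis, layer sizes, and all names (DDPINNSketch, n_basis, omega) are illustrative assumptions, not the exact DD-PINN formulation, and PyTorch is assumed as the framework.

```python
import torch
import torch.nn as nn

class DDPINNSketch(nn.Module):
    """Illustrative sketch of a domain-decoupled ansatz (assumptions, not the
    authors' exact architecture): the network maps (x0, u) to coefficients c
    of time-basis functions g_k(t) with g_k(0) = 0, so that
        x_hat(t) = x0 + sum_k c_k * g_k(t)
    satisfies the initial condition exactly and dx_hat/dt is available in
    closed form (no automatic differentiation with respect to t needed)."""

    def __init__(self, state_dim, input_dim, n_basis=8, hidden=64):
        super().__init__()
        self.state_dim, self.n_basis = state_dim, n_basis
        self.net = nn.Sequential(
            nn.Linear(state_dim + input_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, state_dim * n_basis),
        )
        # Fixed frequencies for the (assumed) sinusoidal basis g_k(t) = sin(w_k t).
        self.register_buffer("omega", torch.linspace(1.0, float(n_basis), n_basis))

    def forward(self, x0, u, t):
        """Predict x_hat(t) and its closed-form time derivative."""
        c = self.net(torch.cat([x0, u], dim=-1))           # (B, state_dim * K)
        c = c.view(-1, self.state_dim, self.n_basis)       # (B, state_dim, K)
        t = t.view(-1, 1, 1)                               # (B, 1, 1)
        g = torch.sin(self.omega * t)                      # basis, zero at t = 0
        dg = self.omega * torch.cos(self.omega * t)        # exact d g / dt
        x_hat = x0 + (c * g).sum(dim=-1)                   # IC loss is zero by construction
        dxdt = (c * dg).sum(dim=-1)                        # closed-form physics-loss gradient
        return x_hat, dxdt
```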
The DD-PINN allows fast and accurate learning of large and complex dynamical systems that were previously out of reach for the PINC. Its fast prediction capabilities create opportunities for enabling MPC in larger dynamical systems, where traditional methods such as numerical integrators are too slow and training a PINC is impractical. Importantly, the data efficiency of physics-informed machine learning is retained, making it possible to combine sparse datasets with the system-governing physical equations. Future work could explore applying the DD-PINN to higher-dimensional nonlinear systems to realize accurate state estimation or model predictive control.
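As a hedged illustration of how such a predictor could be rolled out inside an MPC cost evaluation, the sketch below performs the self-loop prediction mentioned above: the model is evaluated repeatedly over a control horizon, feeding each predicted state back as the next initial condition. It assumes the hypothetical DDPINNSketch interface from the previous sketch and is not the authors' implementation.

```python
import torch

def self_loop_rollout(model, x0, u_sequence, dt):
    """Roll the one-step predictor forward over a control horizon, feeding each
    prediction back as the next initial condition ('self-loop' prediction).
    `model` is assumed to follow the hypothetical DDPINNSketch interface."""
    x = x0
    trajectory = [x0]
    t_step = torch.full((x0.shape[0],), dt)    # one prediction interval per step
    for u in u_sequence:                       # u: (B, input_dim) per horizon step
        x, _ = model(x, u, t_step)             # predict state after one interval dt
        trajectory.append(x)
    return torch.stack(trajectory, dim=1)      # (B, horizon + 1, state_dim)
```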
ACKNOWLEDGEMENTS
This work was partially funded by the German Research Foundation (DFG, project numbers 405032969 and 433586601) and the Lower Saxonian Ministry of Science and Culture in the program zukunft.niedersachsen.