Authors: Abdullah Khan (1); Alexei Lisitsa (2) and Alexei Vernitski (1)
Affiliations: (1) Department of Mathematical Sciences, University of Essex, Essex, U.K.; (2) Department of Computer Science, University of Liverpool, Liverpool, U.K.
Keyword(s):
Computational Topology, Gauss Diagrams, Realizable Diagrams, Machine Learning.
Abstract:
Recent research in computational topology has produced sets of counterexamples demonstrating that several recent mathematical articles purporting to characterise the concept of realizable Gauss diagrams contain a mistake. In this study we propose several ways of encoding Gauss diagrams as binary matrices, and train several classical ML models to recognise whether a Gauss diagram is realizable or unrealizable. We test their accuracy both in general and, separately, on the counterexamples. Intriguingly, accuracy is good in general but surprisingly poor on the counterexamples. Thus, although human mathematicians and AI perceive Gauss diagrams in completely different ways, they tend to make the same mistake when describing realizable Gauss diagrams.
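To make the abstract's pipeline concrete, the sketch below shows one plausible way to encode a Gauss diagram as a binary matrix: the interlacement matrix, whose (i, j) entry records whether chords i and j interleave around the circle. This is a standard encoding of chord diagrams, but it is an assumption for illustration, not necessarily one of the specific encodings used in the paper; the flattened matrix could then serve as a feature vector for a classical ML classifier.

```python
def interlacement_matrix(gauss_code):
    """Binary interlacement matrix of a Gauss diagram.

    gauss_code: a double occurrence word, i.e. a sequence in which each
    chord label appears exactly twice, read around the circle
    (e.g. [1, 2, 1, 2] is two interleaved chords).
    Entry (i, j) is 1 iff chords i and j interleave, meaning exactly one
    endpoint of chord j lies strictly between the endpoints of chord i.
    """
    labels = sorted(set(gauss_code))
    pos = {c: [k for k, x in enumerate(gauss_code) if x == c] for c in labels}
    n = len(labels)
    M = [[0] * n for _ in range(n)]
    for i, a in enumerate(labels):
        a1, a2 = pos[a]
        for j, b in enumerate(labels):
            if i == j:
                continue
            b1, b2 = pos[b]
            # XOR: exactly one endpoint of b falls inside the arc of a
            M[i][j] = int((a1 < b1 < a2) != (a1 < b2 < a2))
    return M

# Two interleaved chords vs. two parallel (non-crossing) chords:
crossing = interlacement_matrix([1, 2, 1, 2])      # [[0, 1], [1, 0]]
parallel = interlacement_matrix([1, 1, 2, 2])      # [[0, 0], [0, 0]]

# Flattening gives a fixed-length binary feature vector for a classifier:
features = [bit for row in crossing for bit in row]
```

A vector like `features` could be fed to, say, a scikit-learn classifier via its `fit`/`predict` interface, with realizability labels obtained from a known combinatorial criterion; diagrams of different sizes would need padding to a common matrix dimension.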