Figure 1: Overview of the Variational Quantum Circuit Training Process with Weight Constraints.
tions of quantum gates are possible and lead to different training results. At the same time, the rotation operations restrict the trainable parameters to the closed interval [0, 2π]. How to best handle this domain restriction, which is unusual from a classical perspective, is still unclear. The problem is highly relevant: rotations are periodic and therefore not injective, which leads to ambiguous parameter assignments and, in turn, worse training performance.
Drawing inspiration from classical ML, where
data rescaling, or normalization techniques have
shown immense improvements in many cases (Singh
and Singh, 2019), we propose the introduction of re-
mapping training parameters in variational quantum
circuits as described in Figure 1. Concretely, we em-
ploy a set of well known fixed functions to unambigu-
ously map the weights to an interval of length 2π and
test their performance. For evaluation purposes, we
use a classification problem and a circuit architecture
suitable for the employed variational classifier. Our
experimental data shows that the proposed weight re-mapping leads to faster convergence in all tested settings compared to runs with unconstrained weights. In some cases, the overall test accuracy is also improved. In this work, we first describe the basics
of variational quantum circuits and related work. We
then explain the idea behind our approach and how
we set up our experiments. Finally, we present and
discuss our results and end with a summary, limita-
tions and future work. All experiments and a PyTorch
implementation of the used weight re-mapping functions can be found at https://github.com/michaelkoelle/qw-map.
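The re-mapping idea can be illustrated with a short sketch. The concrete functions below are plausible candidates for an unambiguous map from the real line into an interval of length 2π, chosen here for illustration; the exact set of functions used in the paper is in the linked repository.

```python
import math

# Illustrative re-mapping functions (assumptions for this sketch, not
# necessarily the exact set used in the paper): each maps any real
# weight unambiguously into an open interval of length 2*pi.

def remap_arctan(w: float) -> float:
    """Map R -> (-pi, pi) via the arctangent."""
    return 2.0 * math.atan(w)

def remap_tanh(w: float) -> float:
    """Map R -> (-pi, pi) via the hyperbolic tangent."""
    return math.pi * math.tanh(w)

# Both maps are injective, so every rotation angle corresponds to
# exactly one trainable weight -- the ambiguity introduced by the
# periodicity of rotations is removed.
for w in (-100.0, -1.0, 0.0, 1.0, 100.0):
    assert -math.pi < remap_arctan(w) < math.pi
    assert -math.pi <= remap_tanh(w) <= math.pi
```

Because both functions are strictly monotonic, gradients computed with respect to the unconstrained weight keep their sign after re-mapping.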
2 VARIATIONAL QUANTUM CIRCUITS
The most prominent function approximator used in
classical machine learning is the artificial neural net-
work: a combination of parameterized linear transfor-
mations and typically non-linear activation functions,
applied to neurons. The weights and biases used to
parameterize the linear transformations can be up-
dated using gradient based techniques like backprop-
agation, optimizing the approximation quality. Ac-
cording to Cybenko’s universal approximation theo-
rem, this model allows the approximation of arbitrary
continuous functions with arbitrary precision (Cybenko, 1989).
In a quantum circuit, information is stored in the state of a qubit register |ψ_i⟩, i.e., a normalized vector living in a Hilbert space H. In quantum mechanics, a function mapping the initial state |ψ_i⟩ onto the final state |ψ_f⟩ is expressed by a unitary operator U that maps the inputs onto the outputs as in |ψ_f⟩ = U|ψ_i⟩.
In contrast to classical outputs, quantum outputs can only be obtained via so-called measurements, which yield an eigenstate corresponding to an expectation value ⟨ψ_f|O|ψ_f⟩, where O is typically chosen to be the spin Hamiltonian along the z-axis.
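As a minimal numerical sketch of such an expectation value (plain NumPy, single qubit; the observable O is taken here as the Pauli-Z matrix, a standard concrete choice for the z-axis spin observable, and the unitary is a Hadamard gate picked purely as an example):

```python
import numpy as np

# Initial state |psi_i> = |0> of a single qubit.
psi_i = np.array([1.0, 0.0], dtype=complex)

# An example unitary U: the Hadamard gate.
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Final state |psi_f> = U |psi_i>.
psi_f = U @ psi_i

# Observable O: the Pauli-Z matrix (spin along the z-axis).
O = np.array([[1, 0], [0, -1]], dtype=complex)

# Expectation value <psi_f| O |psi_f>.
expval = np.real(np.conj(psi_f) @ O @ psi_f)
print(expval)  # H|0> is an equal superposition, so <Z> = 0
```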
In order to build a quantum function approximator
in form of a VQC, one typically decomposes the arbi-
trary unitary operator U into a set of quantum gates.
Analogously to Cybenko’s theorem, in the quantum
case it can be proved that any unitary operator act-
ing on multiple qubits can always be expressed by
the combination of controlled-not (CNOT) and ro-
tational (ROT) gates, which represent reflections or
rotations of the vector into the Hilbert space respec-
tively (Nielsen and Chuang, 2010). While CNOTs are
parameter-free gates, each rotation is characterized by
the three angles around the axes of the Bloch sphere.
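A general three-angle rotation can be sketched as follows; the Z–Y–Z Euler decomposition used here is one common convention (assumed for this illustration), unique up to a global phase:

```python
import numpy as np

def rz(a):
    """Rotation about the z-axis of the Bloch sphere."""
    return np.array([[np.exp(-1j * a / 2), 0],
                     [0, np.exp(1j * a / 2)]])

def ry(a):
    """Rotation about the y-axis of the Bloch sphere."""
    c, s = np.cos(a / 2), np.sin(a / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rot(phi, theta, omega):
    """General single-qubit rotation from three Euler angles
    (Z-Y-Z convention, assumed here for illustration)."""
    return rz(omega) @ ry(theta) @ rz(phi)

U = rot(0.1, 0.2, 0.3)
# Any such product of rotations is unitary: U^dagger U = I.
assert np.allclose(U.conj().T @ U, np.eye(2))
```

The three angles (phi, theta, omega) are exactly the kind of per-gate parameters that form the weight vector θ of the circuit.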
These rotation parameters are the weights of the quan-
tum variational circuit. We can thus say that the final
state actually depends on the weights θ of the circuit,
and rewrite the final expectation value as

⟨ψ_f(θ)| O |ψ_f(θ)⟩    (1)
Starting from this theoretical basis, a function approx-
imator can be obtained once a suitable circuit struc-
ture, also called ansatz, has been chosen. Once this
is done and an objective function has been chosen,
the rotation weights can be trained in a quantum-
classical pipeline, as shown in Figure 1, completely
analogously to what is done with a neural network.
Similar to a classical neural network, where the
gradient is calculated using backpropagation, we can
differentiate the circuit with respect to the parame-
ters θ in a similar way using the parameter shift rule
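For a single rotation gate, the parameter-shift rule recovers the exact gradient from two shifted circuit evaluations. A minimal single-qubit check (for the circuit RX(θ)|0⟩ the expectation ⟨Z⟩ equals cos θ, so the analytic derivative is −sin θ; the direct state-vector simulation below is an illustrative stand-in for a real quantum backend):

```python
import numpy as np

def expval_z(theta):
    """<Z> of the state RX(theta)|0>, simulated directly.
    For this circuit the expectation value is cos(theta)."""
    rx = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                   [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
    psi = rx @ np.array([1.0, 0.0], dtype=complex)
    z = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.real(np.conj(psi) @ z @ psi)

def parameter_shift_grad(theta):
    """Parameter-shift rule: the exact gradient from two circuit
    evaluations shifted by +/- pi/2 (no finite-difference error)."""
    return 0.5 * (expval_z(theta + np.pi / 2) - expval_z(theta - np.pi / 2))

theta = 0.7
# The derivative of cos(theta) is -sin(theta).
assert np.isclose(parameter_shift_grad(theta), -np.sin(theta))
```

Unlike finite differences, the two shifted evaluations are noise-robust on hardware because the shift is macroscopic (π/2) rather than infinitesimal.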
ICAART 2023 - 15th International Conference on Agents and Artificial Intelligence