1. A(s(m), s(n)) >_lpo A(m, A(s(m), n))      2b-A 2, 3, 4
2. A(s(m), s(n)) >_lpo m                     1b-1 5
3. A(s(m), s(n)) >_lpo A(s(m), n)            2b-A 6, 7, 8
4. (s(m), s(n)) >_lpo^lex (m, A(s(m), n))    lex-1 5
5. s(m) >_lpo m                              1a-1
6. A(s(m), s(n)) >_lpo s(m)                  1a-1
7. A(s(m), s(n)) >_lpo n                     1b-2 9
8. (s(m), s(n)) >_lpo^lex (s(m), n)          lex-2 9
9. s(n) >_lpo n                              1a-1
Figure 6: Shuffled Fitch style proof of the third claim.
new proof obligations under the existing list. The
name of the applied rule (in this case 2b-A), together
with the newly generated line numbers (2, 3, and 4),
can be written down immediately, because they will
not change anymore. This process is then repeated: in
the next step, the second proof obligation is the first
one without a proof justification, so the appropriate
rule is applied, a single new proof obligation is added,
and the proof justification (1b-1 5) is written down.
This continues until all proof obligations have a proof
justification. See Figure 10 for the full construction.
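To make the bookkeeping behind this construction explicit, the following is a
minimal Python sketch, not the implementation used in the paper. The names
build_shuffled_fitch, decompose, and pick are assumptions: decompose stands in
for applying the appropriate LPO rule to an obligation, while the driver only
manages the numbered list, appending fresh sub-obligations under it (or reusing
an existing line) and recording the rule name with the referenced line numbers.

from typing import Callable, List, Optional, Tuple

def build_shuffled_fitch(goal: str,
                         decompose: Callable[[str], Tuple[str, List[str]]],
                         pick: Callable[[List[int]], int] = lambda open_lines: open_lines[0]) -> List[str]:
    # decompose(obligation) -> (rule name, sub-obligations); assumed to encode the LPO rules.
    # pick chooses which open (unjustified) line to handle next; default: the first one.
    lines: List[str] = [goal]                   # proof obligations, printed 1-based below
    justification: List[Optional[str]] = [None]
    while True:
        open_lines = [i for i, j in enumerate(justification) if j is None]
        if not open_lines:
            break
        i = pick(open_lines)
        rule, subs = decompose(lines[i])
        refs: List[int] = []
        for sub in subs:
            if sub in lines:                    # reuse an existing line (e.g. lines 5 and 9)
                refs.append(lines.index(sub) + 1)
            else:                               # append a fresh obligation under the list
                lines.append(sub)
                justification.append(None)
                refs.append(len(lines))
        justification[i] = rule + (" " + ", ".join(map(str, refs)) if refs else "")
    return [f"{n + 1}. {ob}   {just}" for n, (ob, just) in enumerate(zip(lines, justification))]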
Once the total proof is finished, the first thing to
notice is that it is two steps shorter than the previous
proofs. This is because reusing lines 5 and 9 comes
naturally in this method, whereas in the previous
methods it would have been possible as well, but in a
less natural way. For instance, in tree style it is very
uncommon to reuse results from a different branch.
The second thing to notice is that it really looks like
the Fitch proof in Figure 3, except that the order is no
longer enforced by the structure of the proof, but by
the order in which particular proof obligations are
justified. When the method was introduced above, it
was stated that each time the first obligation without
a justification should be handled, but that was an
arbitrary choice, made for ease of explanation. The
last line, or a random line, would also have worked.
This results in a shuffled order of a normal Fitch
proof, and hence the name.
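As a small follow-up to the sketch above, only the (hypothetical) pick argument
needs to change to handle the last open obligation or a random one; the same
proof lines come out, merely justified in a different order. The stub decompose
below is purely illustrative and does not encode real LPO rules.

import random

def decompose(ob):                              # illustrative stub only
    return ("2b-A", ["A", "B"]) if ob == "G" else ("1a-1", [])

print(build_shuffled_fitch("G", decompose))                                          # first open line
print(build_shuffled_fitch("G", decompose, pick=lambda open_lines: open_lines[-1]))  # last open line
print(build_shuffled_fitch("G", decompose, pick=random.choice))                      # a random open line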
4 PROOFS IN DIGITAL EXAMS
In the previous sections, the styles were mainly
introduced with a focus on clarity and usability when
written down on paper. For this it does not really
matter whether the proof is written as part of a
homework assignment or as part of an exam: all
styles are usable in these situations, although the
more formal notations are easier to check for actual
correctness. The question, however, is whether these
styles are also usable in digital exams.
A few years ago, digital exams were introduced
at the author's institute. One of the reasons for this
was the increasing number of students, which made
grading exams ever more work. A system that
provides at least partial automatic grading of student
submissions saves a lot of time.
There are several environments for organizing digital
exams, for instance Inspera Assessment (Nordic As-
sessment Innovators, 2020), RemindoToets (Paragin,
2020), TestVision (Teelen, 2020), Cirrus Assessment
(Cirrus BV, 2020), and WISEflow (UNIwise, 2020).
The system currently in use at the author’s institution
is the Cirrus Assessment software. Therefore, the rest
of this paper is about usability in Cirrus, but presum-
ably the results will also hold for other digital exami-
nation systems.
Cirrus Assessment is cloud-based: it can be used
from campus, but also from home. All exams at the
author's institute used to be taken in on-campus
lecture rooms in order to have controlled
circumstances. However, due to the COVID-19
pandemic, many exams are nowadays taken at home
as well. This paper does not discuss the measures
taken to control the circumstances when students
take an exam at home, but it is important to stress
that due to the pandemic, many courses that were
scheduled to have a regular written exam on campus
had to be converted to a digital exam. Hence there is
a need to deal with mathematical proofs like the ones
in this paper in digital assessments.
The Cirrus software offers several question types
that allow automatic grading, such as multiple-choice
questions, multiple-response questions, select from a
list, fill in the blank, or even matching questions.
However, the system was clearly not created with
mathematical exams in mind. For some mathematical
questions it is quite possible to redesign them a bit so
that they check the same learning objective as before,
but in a way that can be graded automatically; for
many other questions that is simply not possible.
Recently, Cirrus has added a particular 'mathematical
question' that can be automatically graded, based on
the platform Sowiso (Sowiso BV, 2020). This
platform is connected to the computer algebra system
Maxima (Shelter, 1982), which allows for
randomization and complex evaluation of the
student's answers. However, this type of question is
also not very suitable for many mathematical
problems. In particular, questions that require the
creation of diagrams or figures are difficult to
answer. Fortunately, Cirrus imple-