The ufp of a floating-point number corresponds to the binary exponent of its most significant digit. Conversely, the ulp of a floating-point number corresponds to the binary exponent of its least significant digit. Note that several definitions of the ulp have been given (Muller, 2005).
2.2 Related Work
Several approaches have been proposed to determine the best floating-point formats as a function of the expected accuracy of the results. Darulova and Kuncak use a forward static analysis to compute the propagation of errors (Darulova and Kuncak, 2014). If the computed bound on the accuracy satisfies the post-conditions, then the analysis is run again with a smaller format until the best format is found. Note that in this approach, all the values have the same format (contrary to our framework, where each control point has its own format). While Darulova and Kuncak develop their own static analysis, other static techniques (Goubault, 2013; Solovyev et al., 2015) could be used to infer the suitable formats from the forward error propagation. Chiang et al. (Chiang et al., 2017) have proposed a method to allocate a precision to the terms of an arithmetic expression (only). They use a formal analysis via Symbolic Taylor Expansions and error analysis based on interval functions. In contrast to our linear constraints, they solve a quadratically constrained quadratic program to obtain annotations.
Other approaches rely on dynamic analysis. For instance, the Precimonious tool tries to decrease the precision of variables and checks whether the accuracy requirements are still fulfilled (Nguyen et al., 2016; Rubio-Gonzalez et al., 2013). Lam et al. instrument binary code in order to modify its precision without modifying the source code (Lam et al., 2013). They also propose a dynamic search method to identify the pieces of code where the precision should be modified.
Finally, other work focuses on formal methods and numerical analysis. A first related research direction concerns formal proofs and the use of proof assistants to guarantee the accuracy of finite-precision computations (Boldo et al., 2015; Harrison, 2007; Lee et al., 2018). Another related research direction concerns the compile-time optimization of programs in order to improve the accuracy of the floating-point computation for given ranges of the inputs, without modifying the formats of the numbers (Damouche et al., 2017a; P. Panchekha and Tatlock, 2015).
3 THE SALSA TOOL
In this section, we introduce our tool, Salsa, for numerical accuracy optimization by program transformation. Section 3.1 presents the tool in general and, in Section 3.2, we describe the module dedicated to precision tuning.
3.1 Overview of Salsa
Salsa is a tool that improves the numerical accuracy of programs based on floating-point arithmetic (Damouche and Martel, 2017). It partly reduces the round-off errors by automatically transforming C-like programs in a source-to-source manner. We have defined a set of intraprocedural transformation rules (Damouche et al., 2016a) for assignments, conditionals, loops, etc., interprocedural transformation rules (Damouche et al., 2017b) for functions, and other rules dealing with arrays. Salsa relies on static analysis by abstract interpretation to compute variable ranges and round-off error bounds. As a first input, it takes ranges id ∈ [a,b] for the input variables of the program. These ranges are given by the user or come from sensors. As a second input, Salsa takes the program to be transformed. Salsa applies the required transformation rules and returns as output a transformed program with better accuracy.
Salsa is composed of several modules. The first module is the parser, which takes the original program in a C-like language with annotations, puts it in SSA form and then returns its binary syntax tree. The second module consists of a static analyzer, based on abstract interpretation (Cousot and Cousot, 1977), that infers safe ranges for the variables and computes error bounds on them. The third module contains the intraprocedural transformation rules. The fourth module implements the interprocedural transformation rules. The last module is the Sardana tool, which we have integrated into Salsa and which is called on arithmetic expressions in order to improve their numerical accuracy.
When transforming programs, we build larger arithmetic expressions that we parse in different ways in order to find a more accurate one. These large expressions are sliced at a given level of the binary syntax tree and assigned to intermediary variables named TMP. Note that the transformed program is semantically different from the original one, but the two are mathematically equivalent. In (Damouche et al., 2017a), we have introduced a proof by induction that demonstrates the correctness of our transformation. In other words, we have proved that the original and the transformed programs are equivalent.
Mixed Precision Tuning with Salsa