means that, in a single configuration file, the user can define more than one fuzzy model, allowing for the simultaneous definition and generation of multiple, complex models.
3.2.2 Automatic Code Generation
The generation of the C code is done by means of tem-
plates that are filled with the proper parameter values
to obtain a program that can be successfully built us-
ing standard ANSI C compilers.
In this project, the Jinja text-based template engine (http://jinja.pocoo.org/) was used. Thanks to its syntax, it is highly integrated with the Python programming language. Unlike basic template engines, where specific parts of the templates are simply filled with the right text, Jinja offers more flexible and powerful options, including the embedding of pure Python objects and programming statements directly in the template itself, and template inheritance. These features allow a modular implementation of the framework and a clear and concise definition of the templates, which translates into a clear and readable C library.
The code generation follows a three-level hierarchy: the models are stored in separate files, one for each model; the logic needed to process each type of fuzzy model (e.g., F-IND or standard model) is included as a library only when at least one model requires it; finally, common files shared among all models are created and included, such as a main.c file containing the main function. The latter provides the user with an example of how each defined index can be initialized and executed. Descriptions of each model are generated from the TOML configuration and provided as comments both in each model file and in the sample main.c.
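As an illustration, a generated main.c could resemble the following sketch; the engine_health identifiers and the two-input signature are hypothetical placeholders, since the actual names and sizes are derived from the models defined in the TOML configuration.

    /* Sketch of a generated main.c; all engine_health_* identifiers
     * are hypothetical and would be derived from the TOML models. */
    #include <stdio.h>
    #include "engine_health.h"            /* generated, one file per model */

    int main(void)
    {
        engine_health_init();                        /* set up static parameters */

        float inputs[2] = { 78.5f, 0.42f };          /* crisp input values       */
        float index = engine_health_compute(inputs); /* run the fuzzy inference  */

        printf("engine_health index: %f\n", index);
        return 0;
    }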
The automatic code generation handles everything but the index calculation, which depends on the input and therefore must be performed online. In particular, the code generator uses eqs. (3) and (4) to calculate the distance between membership functions as in eq. (7) and evaluates the relative weights (eq. (8)).
Model and variable names defined in the configuration file are used as identifiers in the generated code, which contributes to its readability. Some variable types are also automatically inferred from the model properties.
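For instance, a model named oil_condition with input variables temperature and viscosity could produce declarations along the following lines; the naming scheme and the inferred types shown here are assumptions made for illustration only.

    /* Hypothetical excerpt of a generated model file: identifiers are
     * taken from the TOML configuration, while types such as uint8_t
     * are inferred from model properties (e.g. a small number of MFs). */
    #include <stdint.h>

    #define OIL_CONDITION_N_INPUTS 2u

    static const uint8_t oil_condition_n_mf[OIL_CONDITION_N_INPUTS] = { 5, 3 };
    static float temperature;   /* input variable defined in the TOML */
    static float viscosity;     /* input variable defined in the TOML */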
3.3 The C Library
The goal of the proposed framework is to generate an executable program that runs a fuzzy inference system on resource-constrained embedded hardware. Therefore, the C programming language has been chosen for its resource efficiency. The generated C code is structured in several libraries containing the procedures to compute the supported fuzzy indexes.
The source code of each index is written in a dedicated .c file, along with the necessary parameter values
to implement the index. These values include the ref-
erences to input and output membership functions and
the normalized weights. All these values are statically
allocated in memory to optimize the memory usage
in the constrained environment that characterizes typ-
ical embedded applications. Dynamic allocation (the
family of malloc functions) is not used in the library
since several platforms do not support it. Most of the
required memory is used to store model parameters; the index calculation only makes use of a few local variables. As a side benefit, this approach makes it possible to estimate the required memory beforehand. Moreover,
thanks to eq. (2), we know that only up to two adja-
cent membership functions have non-null value, and
their values are related, since they sum up to 1. There-
fore, once the first “active” membership is identified,
there is no need to calculate or store the next membership value. As a result, exactly one value per input variable is stored, and potentially time-consuming
calculations are eliminated; this is especially relevant
when complex functions need to be evaluated.
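A minimal sketch of this mechanism is given below, assuming triangular membership functions arranged in a strong fuzzy partition (the property of eq. (2)); the array contents and the function name are illustrative only.

    /* Statically allocated partition and detection of the first "active"
     * membership function; peak values are placeholders. */
    #include <stddef.h>

    #define N_MF 5u
    static const float mf_peak[N_MF] = { 0.0f, 25.0f, 50.0f, 75.0f, 100.0f };

    /* Returns the index of the first active MF and writes its degree;
     * the adjacent MF has degree 1 - *degree, so nothing else is stored. */
    static size_t first_active_mf(float x, float *degree)
    {
        if (x <= mf_peak[0])         { *degree = 1.0f; return 0; }
        if (x >= mf_peak[N_MF - 1u]) { *degree = 0.0f; return N_MF - 2u; }

        size_t i = 0;
        while (x > mf_peak[i + 1u])  /* locate the segment [peak_i, peak_i+1] */
            i++;

        *degree = (mf_peak[i + 1u] - x) / (mf_peak[i + 1u] - mf_peak[i]);
        return i;
    }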
Valgrind analysis showed that around 90% of the computation time is spent in detecting and evaluating the membership functions having non-null value and in calculating the automatically generated rules. Therefore, the optimization of the code focused on the evaluation of the rules (see eqs. (9) and (10)). An efficient method to evaluate such equations is to store the indexes of the first active membership function of each variable, together with their values, into one-dimensional arrays, which are then accessed using a properly encoded bit-mask. The advantage is that the bit-mask can be updated efficiently.
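One possible realization of this scheme is sketched below; the names, the product-based firing strength, and the weighted-average aggregation (standing in for eqs. (9) and (10)) are assumptions for illustration. The bit-mask combo selects, for each variable, which of its two active membership functions takes part in the rule.

    /* Sketch of bit-mask driven rule evaluation (hypothetical names).
     * For each input v, mf_idx[v] is the index of the first active MF
     * and mf_deg[v] its degree; the adjacent MF has degree 1 - mf_deg[v]. */
    #include <stddef.h>

    #define N_INPUTS 3u

    static size_t mf_idx[N_INPUTS];      /* first active MF per variable */
    static float  mf_deg[N_INPUTS];      /* its membership degree        */

    /* Hypothetical lookup of the rule consequent for a combination of MFs. */
    extern float rule_output(const size_t mf[N_INPUTS]);

    static float evaluate_rules(void)
    {
        float num = 0.0f, den = 0.0f;

        /* Only the 2^N_INPUTS rules built on active MFs can fire. */
        for (unsigned combo = 0; combo < (1u << N_INPUTS); combo++) {
            float strength = 1.0f;
            size_t mf[N_INPUTS];

            for (size_t v = 0; v < N_INPUTS; v++) {
                unsigned use_next = (combo >> v) & 1u;  /* bit selects the MF */
                mf[v] = mf_idx[v] + use_next;
                strength *= use_next ? (1.0f - mf_deg[v]) : mf_deg[v];
            }

            num += strength * rule_output(mf);
            den += strength;
        }
        return (den > 0.0f) ? num / den : 0.0f;
    }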
Furthermore, when a variable is detected to have full degree, half of the remaining rules, i.e., those involving the adjacent membership function (which has degree equal to 0), can be skipped. An empirical evaluation of this behaviour is reported in section 4.2.
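Continuing the previous sketch, and under the same illustrative assumptions, one way to realize this skip is to mask out in advance the combinations that would select a zero-degree membership function.

    /* Full-degree optimization: if mf_deg[v] == 1 the adjacent MF has
     * degree 0, so any combination selecting it contributes nothing. */
    unsigned zero_mask = 0u;
    for (size_t v = 0; v < N_INPUTS; v++)
        if (mf_deg[v] == 1.0f)
            zero_mask |= 1u << v;

    for (unsigned combo = 0; combo < (1u << N_INPUTS); combo++) {
        if (combo & zero_mask)
            continue;          /* rule involves a zero-degree MF: skip it */
        /* ... evaluate the rule as in the previous sketch ... */
    }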
As a consequence, the worst-case execution time (WCET) corresponds to the evaluation of all the rules.