unknown, which makes the transformation and the establishment of functional equivalence difficult. Since the idea is to transform the model from one setup to another, it is important to ensure that there is no deviation in functionality after migration. To avoid confusion, from here onwards the legacy floating-point model is referred to as the reference model, and the model developed on the latest architecture as the production model. In Model Based Development (MBD), different tools are used across projects for validating and verifying the generated code. Let us examine how the different verification methods fare in the present case. Model comparison tools can be used to compare the two models and identify their differences. However, the production and reference models differ in their nature of development, i.e., fixed point and floating point, respectively. Since a fixed-point scaling is assigned to every variable and parameter in the production model, the properties of each model block differ, making it difficult to confirm from model comparison alone that the code works as expected. As the model grows bigger, the comparison activity becomes more laborious and less efficient. For these reasons, model comparison does not fare well in ensuring functional equivalence.
Code comparison between the production and reference implementations is hampered first by the fact that no code is available among the reference model artifacts. Code can be auto-generated for the reference model using a suitable code generator and compared with the production code, which is generated on the TargetLink platform. When the auto-generated reference code is compared with the implicitly optimized, typecast TargetLink-based production code, the number of differences found is high owing to the added scaling and property differences. Identifying genuine functional deviations among these differences is a laborious task for the engineers; even with sincere effort the chance of oversight is high, which makes the whole approach resource heavy, time intensive, and inefficient.
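To illustrate why such a comparison produces so many textual differences, consider the minimal sketch below: the same gain computation appears as a one-line floating-point expression in reference-style code but as a widened, shifted integer expression in fixed-point production-style code. The fragment is hypothetical, and the scalings of 2^-7 and 2^-10 are assumed purely for illustration, not taken from the actual project.

    #include <stdint.h>
    #include <stdio.h>

    /* Reference-style computation: plain floating point. */
    static float ref_gain_step(float in, float gain)
    {
        return gain * in;
    }

    /* Production-style computation (TargetLink-like, hypothetical):
     * 'in' and the result are scaled with LSB = 2^-7, 'gain' with
     * LSB = 2^-10, so the code widens, multiplies, and shifts back. */
    static int16_t prod_gain_step(int16_t in, int16_t gain)
    {
        return (int16_t)(((int32_t)gain * (int32_t)in) >> 10);
    }

    int main(void)
    {
        float   in_f   = 1.5f, gain_f = 0.75f;
        int16_t in_q   = (int16_t)(1.5f  * 128.0f);   /* 1.5  / 2^-7  */
        int16_t gain_q = (int16_t)(0.75f * 1024.0f);  /* 0.75 / 2^-10 */

        printf("reference : %f\n", ref_gain_step(in_f, gain_f));
        printf("production: %f\n", prod_gain_step(in_q, gain_q) / 128.0f);
        return 0;
    }

Both programs compute the same real-world value (1.125), yet a line-by-line diff of the two code bases flags every such expression as a difference.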
If a test specification were available with the reference model, the same test matrix could be used on both the production and reference artifacts for software-in-the-loop (SiL) unit testing and result matching. With some additional test cases covering the fixed-point data range and resolution, functional equivalence could then be established by comparing the results. Since neither the test matrix nor the reference model code is available, however, SiL testing with result matching is not possible here.
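Had such result matching been possible, the acceptance criterion would essentially be a quantization-aware comparison. A minimal sketch, assuming an int16 production output with a hypothetical LSB weight, could look as follows:

    #include <math.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical SiL result-matching check: accept the production
     * (fixed-point) output if it matches the reference (floating-point)
     * output within one resolution step of its scaling. 'lsb' is the
     * assumed weight of the least significant bit, e.g. 2^-7. */
    bool results_match(float ref_out, int16_t prod_out, float lsb)
    {
        float prod_real = (float)prod_out * lsb;   /* back to real-world value */
        return fabsf(ref_out - prod_real) <= lsb;  /* allow one quantization step */
    }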
Vehicle testing is costly: the test engineer has to drive and test all the functionalities, which is both expensive and inefficient. Moreover, some software-level bugs related to the scaling and resolution of data variables can go unidentified in vehicle testing. Direct vehicle testing also contradicts the whole idea of making the software vehicle-ready with minimum issues and sufficiently pre-validated before vehicle testing.
One interesting idea would be to prepare a test specification from the reference model and validate the production code against that same specification. Since the requirements are embedded in the form of the reference model, writing test cases from it sounds like a feasible solution. Even with this solution, however, how to completely validate production code whose calculations involve fixed-point scaling, with limited resource effort, remains unanswered. The results on the production code have to be analysed to resolve the failed cases, which is time intensive for the engineers. Since there is no reference code, software-in-the-loop simulation on the reference artifacts is not possible, and there are no reference results to compare against the production code test results. As the driver drives the car with the code, not with the model, it is very important for the engineer to validate the production code completely; hence this idea can be ruled out.
The second case considered in this paper takes as reference a non-AUTOSAR, TargetLink fixed-point-scaling environment model together with its code and a test matrix for unit testing. The production model is an AUTOSAR-platform-based model transformed from the reference model and code. Starting with an analysis similar to that of the first case, model comparison will not work, as the model architecture changes. Due to the difference in platform setup, it is also difficult to isolate meaningful differences in the code, since the code gets restructured.
Since the test matrix is available for the reference model, the same matrix can be applied to the production code; by comparing the results, we can confirm that the software behaves equivalently, and hence functional equivalence can be achieved.
However, for unit testing done against coverage metrics such as C1 (branch) coverage, there is a chance that some branches or condition combinations in the code are missed in testing.
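As a hypothetical illustration of such a gap: the two test vectors noted in the comment below achieve full C1 (branch) coverage of the decision, yet the sub-condition on one input is never exercised as true, so a defect reachable only through that input would survive unit testing.

    #include <stdint.h>

    /* Hypothetical fragment: vectors (a=1, b=0) and (a=0, b=0) take both
     * outcomes of the 'if', giving 100% C1 (branch) coverage -- yet 'b'
     * is never true, so a fault reachable only via 'b' stays untested. */
    int16_t select_gain(int a, int b, int16_t g1, int16_t g2)
    {
        if (a || b) {
            return g1;   /* the path entered via 'b' alone is never taken */
        }
        return g2;
    }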
The functional equivalence process should therefore be developed such that any underlying minor issue does not go unreported to the engineer. In both cases, even though the context and settings differ, the target is the same: to achieve functional equivalence of the production code with that of the reference. In search of a solution to this problem, we surveyed the literature and our