e.g. to capture events and to schedule actions by a partial order (“partial”, since the discretization of time may lead to simultaneity, → P3), the use of a pure local time with an arbitrary origin is perfectly sufficient, e.g. time t_0 = 0 may
simply indicate the system start. However, as soon
as time is of global relevance, e.g. if actions have to take place in a synchronized manner on different systems, a common time base is often indispensable. This immediately raises the question of which time or which system serves as the reference. In any case, its provider should be highly available and exhibit high clock stability and precision. Several methods exist for the
actual synchronization: Some are based on (regular)
time checks or on the measurement of the pairwise
drift between the involved systems. Others rely on
dedicated reference systems and allow synchronization based on centrally triggered events, e.g. radio broadcasts (cf. GPS and the DCF77 protocol). Finally, distributed methods are available for multi-hop systems to successively achieve a common time base, e.g. via Desynchronization (Mühlberger, 2013).
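As a minimal illustration of the pairwise drift measurement mentioned above, the following C sketch estimates the relative rate deviation of a peer clock from two (local, remote) timestamp pairs; the function name, the timestamp source and the concrete values are purely hypothetical and only stand in for whatever exchange mechanism the respective protocol provides.

#include <stdint.h>
#include <stdio.h>

/* Estimate the relative drift of a peer clock against the local clock from two
 * (local, remote) timestamp pairs taken some time apart; a result of +50e-6
 * means the peer advances 50 ppm faster than the local clock. */
static double estimate_drift(uint64_t local_t1, uint64_t remote_t1,
                             uint64_t local_t2, uint64_t remote_t2)
{
    double d_local  = (double)(local_t2  - local_t1);
    double d_remote = (double)(remote_t2 - remote_t1);
    return d_remote / d_local - 1.0;        /* relative rate deviation */
}

int main(void)
{
    /* Illustrative values in µs: over 10 s of local time the peer advanced
     * 10.0005 s, i.e. it runs roughly 50 ppm fast. */
    double drift = estimate_drift(0u, 0u, 10000000u, 10000500u);
    printf("estimated drift: %.1f ppm\n", drift * 1e6);
    return 0;
}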
3 AN ADVANCED TIME
DISCRETIZATION APPROACH
Among the aforementioned problems, which originate from the integration of time-awareness into digital systems, P1-P3 directly affect the environmental interaction and can be addressed by each system individually. In contrast, P4 and P5 require some information exchange with other systems. These “peers”, however, are not necessarily available during the entire system runtime. For this reason, P1-P3
are treated locally at the embedded systems level (e.g.
in the operating system kernel), while P4 and P5 must
be addressed more globally (e.g. in the network layer
or at application level).
At the embedded systems level, our approach relies on a hardware timer component to provide a local system timeline with a fixed temporal resolution. The timeline management is integrated directly into the OS kernel and accessible to all software layers through the OS API. This unifies the usage by application tasks and avoids execution-time uncertainties caused by unpredictable code interleaving at runtime. Based on our approach, the kernel automatically captures a timestamp t̃_e for each interrupt e, and compensates the error's asymmetry about λ_C/2 which would result from using the naïve approach with I_1 = [0, λ_C) as explained in Section 2. Therefore,
the kernel as a hardware abstraction layer provides standardized and architecture-dependent interrupt service routines which introduce a constant and carefully dimensioned delay ∆_TS = ∆_IRQ + ∆_ISR before actually capturing the timer's counter value after the IRQ occurrence. According to Eq. (5), we then have to reduce the captured counter value by an adequate correction value ∆_corr: selected properly, this correction finally results in symmetry about 0 for I_1 := [−λ_C/2, λ_C/2).
While the timestamp measurement error E_{t_e} will still be uniformly distributed over I_1, this interval is shifted, and the average timestamp error is reduced from initially λ_C/2 down to 0. At the same time, the propagation and amplification of systematic errors for time-dependent reactions will also be kept small and symmetric about 0, i.e. I_4 = I_1 + I_3 = [−λ_C, +λ_C). Table 1 compares the error intervals of our compensation approach with those of the naïve technique.
How can this symmetry be guaranteed? In order to deal with the related problems P1 and P2, we propose a concept based on two synchronized clocks with interdependent frequencies. Here, we assume the CPU frequency to be higher than the timer frequency and the system time to be derived from the quartz-stabilized CPU clock by an even integer divider. Commonly, these two requirements do not impose an unreasonable restriction on the hardware/software design: in fact, they are already satisfied in many systems, since usually only a single central oscillator serves as the base for all other system clocks. While the CPU is commonly driven directly by this main clock, other components apply power-of-two dividers to derive their individual frequencies. Finally, and for computationally constrained embedded systems in particular, driving a local time with the maximum resolution would cause unnecessary CPU load⁴.
Besides the following formal description of our approach, we also refer to the example in Figure 1 for a comprehensive understanding. Initially, we denote the CPU clock frequency as f and its period as λ; the system time frequency is denoted as f_C and its period as λ_C. In addition, we demand

f_C := f/α and λ_C := α·λ with α ∈ ℕ_{≥2}, α even.   (6)
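For illustration only (the concrete numbers are an assumption and need not match Figure 1), instantiating Eq. (6) with f = 8 MHz, i.e. λ = 125 ns, and α = 8 gives

f_C = f/α = 8 MHz / 8 = 1 MHz and λ_C = α·λ = 8 · 125 ns = 1 µs,

so one system time tick spans exactly eight CPU cycles and the demanded even divider is satisfied.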
If an interrupt e occurs at time t^0_e, the corresponding timer counter c_e will not be copied before some system-inherent delay ∆_TS has passed. For our approach, we require this delay to take exactly ∆_c CPU cycles, as follows:
⁴ The system time must be accumulated in software at every timer overflow. Especially for timers with small word widths, this can quickly lead to a huge performance penalty. For instance, a 16-bit counter counting at f_C = 1 MHz will already overflow every 65.536 ms.
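As a brief sketch of the accumulation described in this footnote (all names are hypothetical, and the actual kernel may organize its timeline differently), the overflow interrupt extends the 16-bit hardware counter to a wider system time in software:

#include <stdint.h>

static volatile uint32_t overflow_count;   /* software-maintained upper bits of the system time */
static volatile uint16_t TIMER_COUNT16;    /* hypothetical 16-bit hardware counter at f_C       */

/* Executed at every counter overflow, i.e. every 65.536 ms at f_C = 1 MHz. */
void timer_overflow_isr(void)
{
    overflow_count++;
}

/* Compose the full system time in timer ticks; the counter is re-read in case
 * an overflow occurs between the two volatile accesses. */
uint64_t system_time_ticks(void)
{
    uint32_t high;
    uint16_t low;
    do {
        high = overflow_count;
        low  = TIMER_COUNT16;
    } while (high != overflow_count);
    return ((uint64_t)high << 16) | low;
}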