ent: the time it is generated and passed to the scheduler to request the CPU, called the arrival time; the time it has to wait before being allocated the CPU, called the waiting time; and the turnaround time, which is the total time from arrival until the process completes its tasks (its waiting time plus the time it actually spends using the CPU). In practice, when users run computer programs, it is impossible to know in advance when a process will pause or stop (close), because that depends entirely on user behavior. Therefore, a rotation-type scheduling method (called Round Robin, RR) can be applied. The idea of this approach is that each process is given a fixed amount of time q, called the quantum, by the operating system. Ideally, q is enough, or more than enough, for the process to finish its task; otherwise the process is forced to release the CPU after the q interval and wait for its next turn, and each subsequent turn is again allocated CPU time of at most q (time units) (Rajput and Gupta, 2012; Shyam and Nandal, 2014).
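As a small illustration of these three quantities, the following Python sketch (the names and sample values are our own, not taken from the paper) computes the turnaround and waiting times of a single process from its arrival, completion, and CPU-burst times.

from dataclasses import dataclass

@dataclass
class ProcessTimes:
    arrival: int      # when the process requests the CPU
    completion: int   # when it finishes its last CPU burst
    burst: int        # total CPU time it actually consumed

    @property
    def turnaround(self) -> int:
        # total time in the system, from arrival to completion
        return self.completion - self.arrival

    @property
    def waiting(self) -> int:
        # time spent waiting in the ready queue
        return self.turnaround - self.burst

p = ProcessTimes(arrival=0, completion=16, burst=6)
print(p.turnaround, p.waiting)   # 16 10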
If there are n processes in the queue waiting for their turn to execute, the RR algorithm does not care about queue order or priority the way the FCFS/FIFO or Priority algorithms above do; it distributes to each process 1/n of the CPU time, and each allocation does not exceed the quota q. No process has to wait more than q(n − 1) (CPU time units) for its next turn. In practice, operating systems often choose the value of q in the range 1–10 ms (Tanenbaum and Bos, 2015).
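The quantum mechanism can be sketched with a minimal Python simulation (our own illustration, assuming all processes arrive at time 0 and that a preempted process rejoins the tail of a FIFO queue, so between two of its turns a process waits at most q(n − 1) time units).

from collections import deque

def round_robin(bursts, q):
    # Simulate quantum-based Round Robin; returns the completion time of
    # each process (a sketch, not taken from the paper).
    remaining = dict(enumerate(bursts))
    ready = deque(remaining)              # FIFO queue of process indices
    t = 0
    completion = {}
    while ready:
        pid = ready.popleft()
        run = min(q, remaining[pid])      # run for at most one quantum q
        t += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            completion[pid] = t           # finished, leaves the system
        else:
            ready.append(pid)             # preempted, back to the tail
    return [completion[i] for i in range(len(bursts))]

print(round_robin([5, 3, 8], q=2))        # [12, 9, 16]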
To see the difference between the above algorithms, let us consider the following example: suppose there are five processes P1, P2, P3, P4, and P5 that are opened and pending (waiting to be granted CPU time), in that order. Assume the time required for the processes to complete their tasks is 10, 6, 2, 4, and 8 ms respectively, and the priority of each process is 3, 5, 2, 1, and 4 respectively (the higher the number, the higher the priority). We will evaluate the turnaround time criterion, that is, the average time from arrival until completion for the five processes above. With the FCFS algorithm, the scheduler runs the processes in the order P1 → P2 → P3 → P4 → P5. P1 completes after 10 ms; P2 completes after 16 ms (because it has to wait for P1 to finish); in the same way, P3 completes after 18 ms, P4 after 22 ms, and P5 after 30 ms. The average turnaround time for all five processes is 19.2 ms. With the Priority algorithm, the running order is P2 → P5 → P1 → P3 → P4. P2 completes after 6 ms; P5 completes after 14 ms (because it has to wait for P2 to finish); in the same way, P1 completes after 24 ms, P3 after 26 ms, and P4 after 30 ms. The average turnaround time with this algorithm is 20 ms. With the RR algorithm, the running order of the processes does not matter: each Pi in turn takes up 1/5 of the CPU time, and so on until the first process terminates (which is P3); then each remaining process is given 1/4 of the CPU in turn, and so on until the end. The total turnaround time of these five processes is (10 + 18 + 24 + 28 + 30) = 110 ms, an average of 22 ms.
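The averages above can be reproduced with a short Python sketch (our own illustration, assuming all five processes arrive at time 0; the RR variant is the idealized equal-share model used in the example).

# Burst times and priorities from the example (higher priority runs first).
bursts = {"P1": 10, "P2": 6, "P3": 2, "P4": 4, "P5": 8}
priority = {"P1": 3, "P2": 5, "P3": 2, "P4": 1, "P5": 4}

def avg_turnaround(order):
    t, total = 0, 0
    for p in order:                 # processes run back to back in 'order'
        t += bursts[p]              # t is now the completion time of p
        total += t
    return total / len(order)

print(avg_turnaround(["P1", "P2", "P3", "P4", "P5"]))              # FCFS: 19.2
print(avg_turnaround(sorted(bursts, key=lambda p: -priority[p])))  # Priority: 20.0

def equal_share_rr(bursts):
    # Idealized RR as in the example: at every instant the CPU is split
    # evenly among the processes that have not yet finished.
    remaining = dict(bursts)
    t, completions = 0, []
    while remaining:
        n = len(remaining)
        nxt = min(remaining, key=remaining.get)   # next process to finish
        r = remaining[nxt]
        t += r * n                  # all n processes advance by r in parallel
        completions.append(t)
        remaining = {p: v - r for p, v in remaining.items() if p != nxt}
    return completions

rr = equal_share_rr(bursts)
print(rr, sum(rr) / len(rr))        # [10, 18, 24, 28, 30] 22.0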
Thus, it can be seen that the RR algorithm has a longer average turnaround time than the FCFS/FIFO and Priority algorithms, but it has the advantage of distributing the CPU usage quota evenly among the processes, which is quite important. This matters because in practice the schedulers in the operating system do not know in advance when the user will open a program, how long the opened program will run, or when the user will stop a running program. In essence, the data structure used by RR is still a FIFO queue, and there is still a process that gets to run first (P1 in the example above), but each process can only run within its CPU allocation limit (1/5 in the example while all five processes are still running, increasing to 1/4, 1/3, 1/2 and finally the whole CPU as one, two, three and four processes terminate) (Noon et al., 2011).
3 ANALYSIS OF IMPROVED SCHEDULING SOLUTIONS BASED ON TREE DATA STRUCTURES
With the analysis of the scheduling algorithms above, we see that these algorithms still depend on a fixed priority value or on dividing resources equally among the processes, so when implementing schedulers in operating systems it is possible to use binary tree data structures and data structures improved from binary trees, such as the heap (Gabriel et al., 2016). Scheduling algorithms based on binary tree data structures have been shown to have a complexity of O(1) or O(N) (Aas, ).
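As an illustration of using a tree-shaped structure in this role, the following Python sketch (our own, not taken from any kernel) uses a binary heap as a priority-ordered ready queue; with heapq, inserting a task and extracting the highest-priority task each cost O(log N).

import heapq

class HeapRunQueue:
    # Illustrative heap-backed ready queue: the runnable process with the
    # smallest key is always at the root of the heap.
    def __init__(self):
        self._heap = []

    def enqueue(self, key, pid):
        heapq.heappush(self._heap, (key, pid))   # O(log N)

    def pick_next(self):
        key, pid = heapq.heappop(self._heap)     # O(log N)
        return pid

rq = HeapRunQueue()
for pid, prio in [("P1", 3), ("P2", 5), ("P3", 2)]:
    rq.enqueue(-prio, pid)     # negate so the highest priority pops first
print(rq.pick_next())          # P2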
Built into the Linux kernel since version 2.4, the O(N) scheduler uses an O(N) algorithm, where the execution time is a function of the number of processes N. More precisely, the running time of the algorithm is a linear function of N, i.e. as N increases, the time increases linearly, so the scheduler's overhead keeps growing if N keeps increasing. This scheduling method is simple but has poor performance on systems running multiple CPUs (multiprocessors) or multiple cores.
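The linear behaviour can be illustrated with a small Python sketch (a simplified stand-in, not the actual Linux 2.4 code): picking the next task requires examining every runnable task, so each scheduling decision costs time proportional to N. The scoring function goodness used here is hypothetical.

def pick_next_o_n(runnable, goodness):
    # One full scan over all runnable tasks -> cost grows linearly with N.
    best, best_score = None, float("-inf")
    for task in runnable:
        score = goodness(task)     # 'goodness' is a stand-in scoring rule
        if score > best_score:
            best, best_score = task, score
    return best

tasks = [{"pid": 1, "prio": 3}, {"pid": 2, "prio": 5}, {"pid": 3, "prio": 1}]
print(pick_next_o_n(tasks, lambda t: t["prio"]))   # {'pid': 2, 'prio': 5}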
The O(1) scheduler, running in constant time as the name suggests, has been integrated into the Linux kernel since version 2.6: no matter how many processes are running in the system, this scheduler is guaranteed to finish its work in a fixed amount of time. This makes O(1)