task depends on the assigned tester. Since the plat-
form needs to select one task from all the unassigned
network testing requests, the selection method could
directly impact the final result. Several examples of
the selection criteria include: (1) first come first serve;
(2) fit as many requests as possible to the testing ca-
pacity; or (3) randomly choose a task.
Another criterion for tester assignment is the packet capacity usage of the different categories. Since we assign a whole request to one tester, we try to use the packet capacities in a balanced way, to avoid situations in which one type of packet capacity is used up while plenty of capacity remains for the other types. In this case, we will assign a request to the tester that will have the least imbalance in its remaining capacities after satisfying the request. Below we provide an example. Assume that we have two testers T1 and T2 who have used their capacities from high to low as follows: T1 (71%, 68%, 72%) and T2 (66%, 69%, 67%). The imbalance is defined as the largest difference between capacity usages in different categories. So the imbalance of T1 is 72% − 68% = 4%, and that of T2 is 69% − 66% = 3%. Now assume that a request contains only high sensitivity packets. Because of the difference in the testers' capacities, it will use 2% of T1's capacity or 3% of T2's capacity. Therefore, if we assign it to T1, the new imbalance value will be (71% + 2%) − 68% = 5%, while for T2 the new value is 69% − 67% = 2%. Therefore, to reduce the imbalance at the testers after assignment, we give the task to T2.
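The imbalance-minimizing selection above can be sketched as follows. This is an illustrative Python fragment, not the platform's implementation; the data layout and the function names (`imbalance`, `pick_tester`) are our own, and the per-tester usage deltas are taken as given rather than derived from capacities.

```python
def imbalance(usages):
    """Largest difference between category usage fractions."""
    return max(usages) - min(usages)

def pick_tester(testers, deltas):
    """Choose the tester whose post-assignment imbalance is smallest.

    testers: {name: [high, middle, low] usage fractions already consumed}
    deltas:  {name: [high, middle, low] usage the request would add there}
    """
    best = None
    for name, usage in testers.items():
        new_usage = [u + d for u, d in zip(usage, deltas[name])]
        score = imbalance(new_usage)
        if best is None or score < best[1]:
            best = (name, score)
    return best[0]

# The example from the text: a request of high sensitivity packets only.
testers = {"T1": [0.71, 0.68, 0.72], "T2": [0.66, 0.69, 0.67]}
deltas = {"T1": [0.02, 0.0, 0.0], "T2": [0.03, 0.0, 0.0]}
print(pick_tester(testers, deltas))  # -> T2
```

As in the worked example, T1's imbalance would grow to 5% while T2's would shrink to 2%, so the rule selects T2.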
Task Assignment Method 2 is slightly different, since we can assign the packets of a request to multiple testers. Therefore, a greedy algorithm will try to assign each individual testing packet to the tester who charges the lowest price. Once that tester's capacity is reached, we move on to the next cheapest tester. Note that this approach tries to maximize the platform's profit from the current request. If the first-come-first-serve method is always adopted, it is possible that a certain type of packet capacity is used up first, thus preventing us from admitting new requests. For example, if all testing capacity for middle level sensitivity packets is used up, we will not be able to admit any request that contains middle sensitivity packets, since we do not allow a request to be partially satisfied.
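The greedy cheapest-first assignment can be sketched as below. This is a minimal illustration under our own assumptions: each tester is a hypothetical (price, capacity) pair for a single packet category, and a request is rejected outright when total capacity cannot cover it, reflecting the no-partial-satisfaction rule.

```python
def greedy_assign(num_packets, testers):
    """Assign packets one by one to the cheapest tester with spare capacity.

    testers: list of (price_per_packet, capacity) tuples.
    Returns {tester_index: packets_assigned}, or None if the request
    cannot be fully satisfied (partial satisfaction is not allowed).
    """
    # Visit testers in ascending price order (cheapest first).
    order = sorted(range(len(testers)), key=lambda i: testers[i][0])
    assignment = {}
    remaining = num_packets
    for i in order:
        if remaining == 0:
            break
        take = min(remaining, testers[i][1])
        if take > 0:
            assignment[i] = take
            remaining -= take
    return assignment if remaining == 0 else None

testers = [(9.2, 100), (9.5, 300), (10.0, 500)]  # (price per packet, capacity)
print(greedy_assign(250, testers))  # -> {0: 100, 1: 150}
```

The cheapest tester is filled first (100 packets), then the next cheapest absorbs the rest, which is exactly the per-request profit-maximizing behavior described above.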
To prevent this scenario from happening, we can manage the remaining capacities of the different packet types and try to keep them balanced. For example, during the request assignment procedure we can set a threshold on the imbalance between the remaining capacities of the different categories, and we will not accept any request that would violate the threshold. Below we provide an example. Assume that we set the imbalance threshold at 5%. Before admitting a request, the capacity usages are 45% (low), 42% (middle), and 47% (high), respectively. Now if a task requests 1% of the middle sensitivity packet capacity and 2% of the high, we will not admit it: the resulting usages would be 43% (middle) and 49% (high), and their difference, 6%, exceeds the 5% imbalance threshold.
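The admission check can be sketched as a one-line predicate. This is an illustrative fragment with hypothetical names (`admit`); the category ordering in the lists mirrors the example's (low, middle, high).

```python
def admit(usages, request, threshold=0.05):
    """Accept a request only if the post-assignment imbalance
    (largest minus smallest category usage) stays within the threshold.

    usages:  [low, middle, high] capacity-usage fractions already consumed.
    request: usage fractions the request would add in each category.
    """
    new = [u + r for u, r in zip(usages, request)]
    return max(new) - min(new) <= threshold

# The example from the text: 45%/42%/47% used; request adds 1% middle, 2% high.
print(admit([0.45, 0.42, 0.47], [0.0, 0.01, 0.02]))  # -> False
```

The request is rejected because the middle/high gap would reach 6%, above the 5% threshold.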
4 QUANTITATIVE RESULTS
In this section, we present some quantitative results for the proposed approaches. The experiments focus on the profit achieved by the different approaches and the practicality of the task assignment models.
4.1 Achievable Profits
Based on the discussion in the previous sections, we can see that the task assignment problem is NP-hard. Therefore, in this section, we compare the maximum profit under some scenarios to the profit achieved by the heuristic approaches. Restricted by the search space size and the required computation power, we experiment with small-scale problem instances.
We assume that the prices that the platform
charges for each high, middle, and low sensitivity
packet are $12, $10, and $8, respectively. For each
tester, the capacities for high, middle, and low sensitivity packets follow uniform distributions over the ranges (900, 1100), (1800, 2200), and (900, 1100), respectively. The sizes of the network test requests also follow uniform distributions around their expected values. They are divided into two groups: the first group has sizes ranging from 20% to 90% of the testers' capacities, while the second group ranges from 10% to 45%. The testers' charging prices are uniformly distributed between 92% and 100% of the platform prices. To calculate the maximum profit of an assignment, we search over all possible combinations.
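The experimental setup above can be sketched as a small instance generator. The distributions follow the text; the structure (`PRICES`, `make_tester`) is our own convenience, not the authors' code.

```python
import random

# Platform prices per packet, as stated in the text.
PRICES = {"high": 12, "middle": 10, "low": 8}

def make_tester():
    """Draw one tester: capacities ~ U(900, 1100) for high/low packets and
    U(1800, 2200) for middle; prices ~ U(92%, 100%) of the platform prices."""
    caps = {"high": random.uniform(900, 1100),
            "middle": random.uniform(1800, 2200),
            "low": random.uniform(900, 1100)}
    prices = {k: v * random.uniform(0.92, 1.00) for k, v in PRICES.items()}
    return caps, prices
```

Request sizes would be drawn analogously, scaled to 20-90% (group one) or 10-45% (group two) of the testers' capacities.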
In this group of experiments, we consider three
task assignment mechanisms: first come first serve
(FCFS), random assignment, and exhaustive search (maximum profit). Here the FCFS mechanism tries to satisfy the tasks based on their arrival order. The random assignment mechanism picks a task at random from the pool of unsatisfied tasks and assigns it to the tester that will generate the highest profit. In Figure 2, we show the
ratio between the profits of the heuristic mechanisms
and the maximum profit.
From the figure, we can see that the size of the
network testing requests has a large impact on the
achievable profit. For example, when the sizes of
Incentivisation of Outsourced Network Testing: View from Platform Perspective