During the high-load phase, about 6,000 users worked simultaneously within the required number of clients. When the first users had completed all loops, they started to log off, and the number of active users decreased during the last phase.
Table 4: Benchmark phases.
Phase                  Active Users   Duration
(1) Increasing Load    0001 - 5926    24 min
(2) High Load          5977 - 6000     9 min
(3) Decreasing Load    5866 - 0001    26 min
During the benchmark run, we monitored the utilization of all CPUs on the application server in order to ensure that the complete utilization range was covered within the benchmark interval. The CPU utilization of both the application server and the database server reflects the three phases listed in Table 4. Since the database server of the SAP system used comprises significantly more powerful hardware components than the application server (see Table 2), its total CPU utilization reached a maximum of only about 15% during the “high load” phase. Therefore, the prediction model trained for this server cannot be used to predict its power consumption at higher utilization rates, because no training data was gathered in that range. However, we utilized the application server to its limit; thus, further physical application servers would have to be added to the application layer of the SAP system in order to achieve higher database utilization. After such a system change, a new prediction model would need to be built. In the following section, we describe the metrics that were monitored during the benchmark run and further processed for training the prediction models.
4.2 Monitoring and Result Processing
As described in the previous section, we performed the SAP SD benchmark in order to generate load. During the three phases of the benchmark (see Table 4), we monitored the metrics listed in Table 5 for each minute.
Table 5: Monitored metrics.
Metric                  Granularity    Data Source
Power Consumption       Server         IRMC Interface
CPU Time                Dialog Step    Workload Monitor
Wait Time               Dialog Step    Workload Monitor
Database Time           Dialog Step    Workload Monitor
Database Requests       Dialog Step    Workload Monitor
Transferred Kilobytes   Dialog Step    Workload Monitor
Memory Used             Dialog Step    Workload Monitor
An increasing number of hardware vendors provide power consumption information for their servers via a standard interface for remote administration, the Intelligent Platform Management Interface (IPMI) (Harrell 2015; Intel 2015; Fujitsu 2015). For both servers used in our experiment, we connected to the Integrated Remote Management Controller (IRMC), a comparable interface developed by Fujitsu (Fujitsu 2015), and exported the mean power consumption in watts for each benchmark minute. The remaining metrics are provided by the workload monitor, which is available in any SAP ERP system through transaction ST03 (Hienger and Luttig 2015). For each dialog step performed by any user, the system creates a record that holds performance information (including the metrics listed in Table 5), a timestamp, and information about the related user, application instance, and client. Thus, these metrics were recorded for all 1,116,000 performed dialog steps and can be exported as a comma-separated values (CSV) file.
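To relate these records to the per-minute power readings, the exported dialog steps have to be aggregated per benchmark minute. The following Python sketch illustrates this processing step; the column names (timestamp, cpu_time, and so on) are assumptions for illustration and do not necessarily match the field names of the actual workload monitor export.

import csv
from collections import defaultdict

def aggregate_per_minute(csv_path):
    """Sum the workload metrics of all dialog steps for each benchmark minute.

    Column names are assumptions; the actual ST03 export may use other names.
    """
    metrics = ("cpu_time", "wait_time", "db_time", "db_requests",
               "kbytes_transferred", "memory_used")
    per_minute = defaultdict(lambda: dict.fromkeys(metrics, 0.0))
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            minute = row["timestamp"][:16]  # truncate "YYYY-MM-DD HH:MM:SS" to the minute
            for name in metrics:
                per_minute[minute][name] += float(row[name])
    return per_minute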
Finally, we imported all metrics into a common database schema, called “Power_Statistics”, which we created inside the database of our ERP system. The tables of the schema’s entity relationship (ER) model are presented in Figure 2. The connectors indicate columns that were joined for subsequent analysis. All metrics exported from the SAP workload monitor, such as CPU time and database requests, were imported into the table “Transactions”. Information about the consumed power was imported into the table “Host”. Furthermore, we added tables for storing the coefficients of the prediction models and energy prices, which can be obtained from any data source, including external web services or the ERP system itself. After the prediction models have been created (see Section 3.4), the data can be queried in various dimensions by means of database views. At http://mrcc.ovgu.de/fileadmin/media/documents/fujitsu_lab/Power_Statistics_Schema.zip, we provide SQL files that can be used to create the “Power_Statistics” schema, including all tables and views, some of which are used in Section 4.
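As an illustration of such a view, the per-minute power readings in “Host” can be joined with the aggregated workload metrics in “Transactions”. The following Python sketch issues a corresponding query over a generic DB-API connection; the column names (host_name, minute, power_watt, cpu_time, db_requests) are placeholders, and the actual definitions are contained in the published SQL files.

# Hypothetical join of power readings and workload totals per host and minute;
# the table names follow Figure 2, the column names are assumptions.
QUERY = """
    SELECT h.host_name,
           h.minute,
           h.power_watt,
           SUM(t.cpu_time)    AS cpu_time,
           SUM(t.db_requests) AS db_requests
    FROM   Host AS h
    JOIN   Transactions AS t
      ON   t.host_name = h.host_name AND t.minute = h.minute
    GROUP BY h.host_name, h.minute, h.power_watt
"""

def fetch_training_data(connection):
    """Return one row per host and benchmark minute: power plus workload totals."""
    cursor = connection.cursor()
    try:
        cursor.execute(QUERY)
        return cursor.fetchall()
    finally:
        cursor.close()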
4.3 Power Prediction Model Evaluation
Using the metrics listed in Table 5, we trained the prediction models described in Section 3.4 for both the application server and the database server.
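Conceptually, training such a model corresponds to an ordinary least-squares fit of the measured per-minute power consumption against the per-minute workload metrics. The following sketch illustrates this idea with numpy; it is a simplified stand-in, not the exact formulation of Equation 1 in Section 3.4, and the variable names are placeholders.

import numpy as np

def fit_power_model(features, power):
    """Least-squares fit: power ≈ intercept + coefficients · workload metrics.

    features: array of shape (n_minutes, n_metrics) with per-minute workload metrics
    power:    array of shape (n_minutes,) with the measured mean power in watts
    """
    X = np.column_stack([np.ones(len(power)), features])  # prepend an intercept column
    coefficients, *_ = np.linalg.lstsq(X, power, rcond=None)
    fitted = X @ coefficients
    return coefficients, fitted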
Figure 3 shows (on the left) the high accuracy of the application server’s model (Equation 1 in Section 3.4) by comparing the fitted values with the actual,