BY-NC-ND 3.0 license Open Access Published by De Gruyter April 24, 2013

Request–Response Distributed Power Management in Cloud Data Centers

  • Jianxiang Li and Youchun Zhang

Abstract

Power provision is becoming the most important constraint on data center development, and efficient management of power consumption according to data center load is urgent. Because the load of every application hosted on every server node (SN) of the data center, and the corresponding Service Level Agreement (SLA) requirements, can differ widely, it is hard to deploy a power strategy at the application granularity. The asynchrony and abruptness of workload fluctuation make power management policymaking based on periodic resource scheduling ineffective. In this article, the design and implementation of the request–response distributed power management scheme is elaborated. Bounded by linear time complexity, the proposed method integrates dynamic voltage/frequency scaling, power-on–power-off, and virtual machine migration mechanisms and dynamically optimizes the power consumption of a cloud data center. The significant advantage of the scheme is that it does not need synchronous scheduling across all SNs. Simulation results show that the scheme can effectively decrease the power consumption of the data center, with only a tiny reduction in performance compared with centralized control methods.

1 Introduction

Due to their advantages in deployment, management, reliability, and cost, cloud computing services have grown rapidly in recent years, resulting in the establishment of large-scale cloud data centers containing thousands of server nodes (SNs) and consuming enormous amounts of electrical energy.

Power provision is increasingly becoming the most important constraint on data center development. In 2006, US data centers consumed an estimated 61 billion kilowatt-hours of energy, at a total electricity cost of about $4.5 billion, enough to power 5.8 million average US households [3].

Meanwhile, most of the time, resource utilization in the data center is very low. Data collected from more than 5000 production servers over a 6-month period show that although servers are usually not idle, their utilization rarely approaches 100%; most of the time, servers operate at 10–50% of their full capacity [2].

Improving resource utilization and optimizing power consumption are the major concerns for cloud data centers today. Many mechanisms have been introduced to optimize the power consumption of computing devices according to the load at any particular moment, such as dynamic voltage/frequency scaling (DVFS) [6], power-on–power-off [9], and application migration and consolidation. However, an effective asynchronous power management method that synthetically makes use of these mechanisms is still lacking.

The asynchrony and abruptness of workload fluctuation in the cloud data center make power management policymaking based on periodic resource scheduling ineffective. To address this problem, this article proposes a request–response distributed power management scheme (RRDPM) for the cloud data center and furnishes its implementation details.

The work presented here has two key contributions: first, RRDPM integrates the DVFS, power-on–power-off, and virtual machine (VM) migration mechanisms to dynamically optimize power consumption; second, the design and implementation of the algorithms pertaining to RRDPM, including the local decision algorithm on the SNs and the global response decision algorithm on the power management server, are presented. Simulation results show that RRDPM can effectively decrease the power consumption of a data center with only a tiny reduction in performance. The rest of this article is organized as follows. Section 2 discusses related work. In Section 3, the RRDPM model is described. The algorithms involved in the system are designed in Section 4. Section 5 evaluates the effectiveness of RRDPM through simulation experiments. The article concludes in Section 6.

2 Related Work

Much effort has been made to address power efficiency both in the computing device and in the data center. Albers [1] has reviewed the mechanisms that minimize energy consumption in a single computing device, which include power-down mechanisms, dynamic speed scaling, and power-efficient scheduling algorithms.

Pinheiro et al. [9] first put forward the idea of using a power-on–power-off mechanism to save power under light load in cluster systems. Because predicting application performance is indispensable, yet difficult, for satisfying SLA requirements while improving power efficiency, they proposed predicting performance by keeping track of the demand for (not the utilization of) resources on all cluster nodes. This method is used here as a reference to predict the resource demands of every VM and to ensure the performance requirements of the applications packaged in VMs.

According to Chen et al. [5], the DVFS implementations for a server cluster can be broadly classified based on whether (i) the control is completely decentralized, where each node independently makes its scaling choice purely on local information, or (ii) there is a coordinated (perhaps centralized) control mechanism that regulates the operation of each node. Decentralized DVFS control is attractive from the implementation viewpoint, but previous research [6] has shown that a coordinated voltage scaling approach can provide substantially higher savings.

For power management in the data center, Chen et al. [5] used a coordinated DVFS strategy in which a controller was assigned to each application and periodically assigned the operating frequency/voltage for the servers running that application. The authors assumed that each server remains entirely devoted to a single application until the time of server reallocation, but this is not true in the cloud computing environment. Because the load of every application hosted on every SN and the corresponding SLA requirements can be quite different, it is hard to deploy this power management strategy at the application granularity in the cloud computing environment.

Beloglazov and Buyya [2] used a centralized algorithm to periodically select the VMs that need to migrate from overloaded and underloaded SNs and to compute the optimal migration destination SNs across the whole data center for these VMs. The quality of their simulation results depends on every VM's resource utilization remaining constant within each period and on all VMs' workload fluctuation periods being fully synchronized. In fact, VM workload fluctuation in a data center is asynchronous.

In addition to decreasing power consumption in a data center, recent works [7, 13] aimed to reduce the electric utility bill based on electricity price differences across regions and time periods.

3 System Model

The cloud data center is composed of a massive number of physical SNs, and every SN has many VMs running on it; the VMs are the basic execution entities and the providers of the services specified in the SLAs.

To improve power efficiency, each SN periodically determines its optimal operating frequency using DVFS according to workload fluctuation. From the viewpoint of the whole data center, however, we want to power the fewest possible SNs, each operating near its optimal resource utilization combination, by means of VM migration and power-on–power-off, so as to reduce the proportion of idle power wastage. This is an NP-hard problem because loading SNs to a desired utilization level for each resource can be modeled as a multidimensional bin-packing problem, where SNs are bins and each resource is one dimension of the bin [12].

Different from other’s solution, we put forward the RRDPM scheme. First, SNs periodically make their local optimal power decision every time TL according to current resource demands. There are five kinds of decision conclusion for an SN:

  • To increase or decrease an SN’s operating frequency owing to workload fluctuation within a narrow range;

  • To emigrate one VM from the SN owing to overload, or to emigrate all VMs from the SN and turn it off owing to underload;

  • To keep the SN’s current state when workload fluctuation is negligible.

When an SN decides to change its power state, it sends a power state transition request (PSTR) to the power management server node (PMN), which makes the globally optimal decision. If an SN requests the outward migration of one of its VMs, the PMN selects an optimal destination SN to accommodate it. The SNs that request to decrease their CPU frequency or to emigrate all their VMs are potential receivers for VM migration; thus, an SN's decision to decrease its operating frequency must be approved by the PMN. On the contrary, when an SN decides to increase its CPU frequency, it can carry this out immediately without asking for the PMN's acknowledgment, which assures a quick response to workload increases. The objective of the PMN's global decision is to minimize the total power consumption of the entire data center. Finally, according to the response information, the SNs execute their state transitions.
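
As an illustration only, the following Python sketch shows one possible shape of the PSTR messages exchanged between an SN and the PMN. The field names, type names, and structure are assumptions made for clarity, not part of the original design; note that frequency increases are applied locally and need no PSTR.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List

class PSTRKind(Enum):
    MIGRATE_ONE_VM = auto()       # SN is overloaded: ask to migrate one VM out
    MIGRATE_ALL_AND_OFF = auto()  # SN is underloaded: migrate all VMs, then shut down
    DECREASE_FREQUENCY = auto()   # needs PMN approval (SN is a potential migration target)

@dataclass
class PSTR:
    sn_id: int                    # requesting server node
    kind: PSTRKind
    vm_ids: List[int] = field(default_factory=list)  # VMs proposed for migration
    target_frequency: float = 1.0  # normalized CPU frequency the SN wants to switch to
```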

An RRDPM has the following characteristics:

  1. Asynchronism: Whenever an SN detects workload fluctuation, it can respond immediately. Unlike the method used by Beloglazov and Buyya [2], RRDPM has no synchronization limitation imposed by periodic migration scheduling. Typically, the interval TL between demand computations is 1 s, which means RRDPM can capture tiny changes in workload.

  2. Low overhead: The PMN obtains an SN’s operating state information only through the SN’s PSTRs, without proactively getting SN information.

  3. Distributed computation: Unlike centralized control algorithms, distributing the computation between the SNs and the PMN alleviates the PMN's computation pressure.

4 Implementation-Related Algorithms

4.1 SN-Side Local Optimal Decision

4.1.1 Predicting the Resource Demands of VMs to Satisfy SLAs

In the cloud computing model, every application is encapsulated in a VM on an SN; the VM monitor (VMM) of the SN supervises the resource utilization of every VM and is responsible for resource distribution.

To satisfy every application’s performance requirement, the VMM has to distribute enough resource share to every VM. Due to the dynamic change in workload, the VMM needs to periodically predict the resource demands of every VM based on their recent resource utilization information and performance measurement. According to Pinheiro etal. [9], the CPU demand is computed by reading the information from the /proc directory, and the network and disk demands are computed based on server internal information. To smooth out short bursts of activity, each of these demands is exponentially amortized over time using

We assume every application’s SLA depends on r kinds of computing resources (such as CPU, memory, etc.), and the set of VMs on SN, SNj, is denoted as Ωj. For VMi ∈ Ωj, the predicted demand for the kth resource is di,k (k = 1, 2,…, r). Obviously, the total demand for the kth resource on SNj is

4.1.2 SN-Side Local Power Optimal Decision

The higher the resource utilization of an SN, the lower its idle power wastage will be; however, higher utilization also degrades application performance, and thus there is a power-optimal resource utilization of the SN at which the energy per transaction is minimal [12]. We assume that the power-optimal resource utilization combination of SNj is

$$optU_j = (optU_{j,1}, optU_{j,2}, \ldots, optU_{j,r}).$$

The objective of SN-side local power management is to operate the SN in the vicinity of its optimal resource utilization combination point by means of the DVFS or VM migration mechanisms. Using DVFS, an SN dynamically manipulates its operating frequency to control the CPU resource provision, whereas using VM migration to change the set Ωj, we can load SNj to operate at a different utilization level.

We use Sj,k and Uj,k to denote the total provision and the utilization, respectively, of the kth resource on SNj, and Fj and fj to denote the set of alternative CPU frequencies and the current CPU operating frequency of SNj, respectively. For convenience, the frequency fj is normalized as a ratio to the maximum frequency, i.e., 0 ≤ fj ≤ 1, and the resource index k = 1 denotes the CPU. Thus, the CPU utilization of SNj is

$$U_{j,1} = \frac{D_{j,1}}{f_j\, S_{j,1}},$$

and the other resource utilizations are

$$U_{j,k} = \frac{D_{j,k}}{S_{j,k}}, \quad k = 2, \ldots, r.$$

We denote the combination of the current resource utilizations of SNj as

$$U_j(f_j) = (U_{j,1}, U_{j,2}, \ldots, U_{j,r}).$$

Periodically, according to the predicted resource utilizations, each SN first checks whether it is overloaded or underloaded.

Assume

$$\delta = \max_{1 \le k \le r} \frac{D_{j,k}}{S_{j,k}}.$$

If δ remains smaller than the minimum utilization threshold for longer than a given time threshold, then SNj is underloaded. To reduce idle power consumption, SNj should ask the PMN to migrate all its VMs to other servers and then wait to be shut down.

SNj is overloaded if there exists a k satisfying Dj,k > Sj,k. So as not to violate the SLAs of the applications hosted on SNj, SNj has to migrate some VMs to other servers. Because the cost of migrating a VM between SNs is high, we assume that an overloaded SN migrates only one VM at a time.

The closer Uj is to optUj, the more efficient the power consumption will be. Thus, the objective in selecting the VM to migrate is to minimize the distance dis(Uj, optUj) between Uj and optUj.

We can model VM selection from an overloaded SN as the following optimization problem:

$$\min_{VM_i \in \Omega_j} \; dis\big(U_j^{(-i)},\, optU_j\big),$$

where Uj(−i) denotes the resource utilization combination of SNj after VMi has been migrated away, and where we define the weighted distance

$$dis(U_j, optU_j) = \sqrt{\sum_{k=1}^{r} \omega_k \big(U_{j,k} - optU_{j,k}\big)^2},$$

with ωk (k = 1, 2, …, r) denoting the weight coefficient of the kth resource utilization's effect on power consumption. The weight coefficients ωk can be derived from experiments as in Ref. [12]. This optimization problem can be solved in O(|Ωj|) time by comparing the objective values obtained when each of the |Ωj| VMs is selected.
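A minimal Python sketch of this O(|Ωj|) selection is shown below. It assumes the weighted distance defined above and hypothetical container types for the per-VM demands; it is an illustration under those assumptions rather than the authors' code.

```python
import math
from typing import List, Sequence

def dis(u: Sequence[float], opt_u: Sequence[float], w: Sequence[float]) -> float:
    """Weighted distance between a utilization combination and the optimum."""
    return math.sqrt(sum(wk * (uk - ok) ** 2 for uk, ok, wk in zip(u, opt_u, w)))

def select_vm_to_migrate(vm_demands: List[List[float]],
                         supply: Sequence[float],
                         freq: float,
                         opt_u: Sequence[float],
                         w: Sequence[float]) -> int:
    """Pick the VM whose removal brings SN_j closest to optU_j (O(|Omega_j|) scan).

    vm_demands[i][k] is d_{i,k}; supply[k] is S_{j,k}; index 0 is the CPU (k = 1 in the text).
    """
    r = len(supply)
    totals = [sum(vm[k] for vm in vm_demands) for k in range(r)]  # D_{j,k}
    best_i, best_metric = -1, float("inf")
    for i, vm in enumerate(vm_demands):
        # Utilization combination after hypothetically migrating VM_i away
        u = [(totals[k] - vm[k]) / (freq * supply[k] if k == 0 else supply[k])
             for k in range(r)]
        metric = dis(u, opt_u, w)
        if metric < best_metric:
            best_i, best_metric = i, metric
    return best_i
```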

If SNj is neither overloaded nor underloaded, it should adjust its frequency to optimize its resource utilization. Accordingly, we model the optimal CPU frequency selection as the following optimization problem:

$$\min_{f \in F_j} \; dis\big(U_j(f),\, optU_j\big).$$

Similarly, the frequency selection optimization problem can be solved in O(|Fj|) time.
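
For completeness, a hedged sketch of the O(|Fj|) frequency search is given below; it reuses the dis() helper from the previous sketch, and everything beyond the formulas above is illustrative.

```python
def select_frequency(totals, supply, freqs, opt_u, w):
    """Pick the normalized CPU frequency f in F_j minimizing dis(U_j(f), optU_j).

    totals[k] is D_{j,k}; supply[k] is S_{j,k}; index 0 is the CPU (k = 1 in the text).
    """
    best_f, best_metric = None, float("inf")
    for f in freqs:  # e.g., the set {0.4, 0.5, ..., 1.0} used in the experiments
        u = [totals[k] / (f * supply[k]) if k == 0 else totals[k] / supply[k]
             for k in range(len(supply))]
        metric = dis(u, opt_u, w)  # dis() as defined in the previous sketch
        if metric < best_metric:
            best_f, best_metric = f, metric
    return best_f
```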

4.2 PMN-Side Global Optimal Decision

When a request from an SN arrives at the PMN, the PSTR-receiving daemon in the PMN appends the request to the end of the queue corresponding to the kind of request. Another daemon is responsible for responding to the requests in the queues.

The PMN maintains three queues for three kinds of requests:

  • The queue QM of VMs to be migrated from overloaded SNs;

  • The queue QU of underloaded SNs;

  • The queue QD of SNs requesting to decrease their CPU frequency.

To respond quickly to overload, the PMN handles QM before the other two queues, and because the SNs in QD are potential destinations for VMs from overloaded and underloaded SNs, the PMN deals with QD last.

The PMN preferentially processes all requests in the higher-priority queue (Algorithm 1). When responding to a request from QM (lines 3 and 4 in Algorithm 1, and Algorithm 2), the PMN first looks for the optimal migration destination SN (Algorithm 3) in QD for the migrated VM vmi. If no appropriate SN exists in QD, the PMN looks for the destination node in QU. If vmi is very large and no active SN is appropriate for it, the PMN powers on a new SN to accommodate vmi (line 9 in Algorithm 2).

When processing a request from an underloaded SN, SNj (lines 7 and 8 in Algorithm 1, and Algorithm 4), for each migratable VM vm, the PMN first seeks the optimal destination SN in QD and then in QU, as in Algorithm 3. If some VM has no proper SN to accommodate it, the PMN abandons the remaining migration processing for SNj and removes this request from QU. There are two reasons for this: first, it is unwise for the PMN to migrate the VMs of an underloaded SN to a newly powered-on idle SN; second, if the SN is still underloaded, it will send the request to the PMN again in the next period.

When dealing with a request to decrease the CPU frequency (lines 10–12 in Algorithm 1), the PMN simply agrees to it; this is optimal because, at this point, no requests of the other two kinds remain.

Assume that there are N SNs in the cloud data center. Obviously, the response procedure of selecting the migration destination SN for each VM to be migrated can be completed in O(N) time.

Algorithm 1.

The response procedure of the PMN.

Input: request queues QM, QD, and QU
Output: the response information to the server nodes
1  while (true) {
2    while (!QM.isEmpty()) {
3      vm = QM.getVMtoMigrate();
4      Call Algorithm 2: processVMMigratingFromOverloadedSN(vm, QD, QU);
5    }
6    while (!QU.isEmpty()) {
7      SN = QU.getSN();
8      Call Algorithm 4: processUnderloadedSN(SN, QD, QU);
9    }
10   while (!QD.isEmpty()) {
11     SN = QD.getSN();
12     sendResponseInformation(SN can decrease its CPU frequency);
13   }
   }
Algorithm 2.

Processing a VM migration request from an overloaded SN.

processVMMigratingFromOverloadedSN(vm, QD, QU)

Input: virtual machine vm, request queues QD and QU
Output: the optimal migration destination server node destSN
1  destSN = optimalDestSelect(vm, QD);
2  if (destSN != null)
3    QD.delete(destSN);
4  else {
5    destSN = optimalDestSelect(vm, QU);
6    if (destSN != null)
7      QU.delete(destSN);
8    else
9      destSN = new server node;
10 }
11 sendResponseInformation(Migrate vm to destSN);
Algorithm 3.

Optimal migration destination server selection.

optimalDestSelect(vmi, Q)

Input: virtual machine vmi, server node set Q
Output: the optimal migration destination server node destSN
1  MinMetric = MaxValue;
2  destSN = null;
3  if (Q.isEmpty()) return null;
4  foreach server node SNj in Q {
5    for k = 1 : r
6      if (Sj,k < Dj,k + di,k) continue with the next SNj (goto line 4);
7    metric = dis(Uj(fj), optUj);
8    if (metric < MinMetric) { MinMetric = metric; destSN = SNj; }
9  }
10 return destSN;
Algorithm 4.

Processing underloaded SN shutdown request.

processUnderloadedSN(SNj, QD, QU)

Input: server node SNj, request queues QD and QU
Output: the response information to SNj
1  foreach migratable virtual machine vm in SNj {
2    destSN = optimalDestSelect(vm, QD);
3    if (destSN != null)
4      QD.delete(destSN);
5    else {
6      destSN = optimalDestSelect(vm, QU);
7      if (destSN != null)
8        QU.delete(destSN);
9      else {
10       QU.delete(SNj);
11       return;
12     }
13   }
14   sendResponseInformation(SNj can migrate vm to destSN);
15 }
16 QU.delete(SNj);
17 sendResponseInformation(SNj wait to shut down);
18 return;

5 Experimental Evaluation

In this section, we demonstrate the effectiveness of the RRDPM through a simulation experiment.

5.1 Experiment Setup

We used the CloudSim toolkit [4] to simulate the cloud computing environment. The CloudSim core provides models of VMs, SNs, and resource scheduling policies, and Beloglazov and Buyya [2] further extended it to enable energy-aware simulations. However, CloudSim is a single-process event simulator that cannot simulate concurrent execution between the SNs and the power management server. For this reason, we extended CloudSim to run two different processes: one is the main cloud computing simulation, and the other is the simulation of the PMN-side global power-optimizing decision.

For the sake of unification, our experiments used the same VM types, SN types, and workload as Beloglazov and Buyya [2]. The VM types include the high-CPU medium instance (2500 MIPS, 0.85 GB), the extra-large instance (2000 MIPS, 3.75 GB), the small instance (1000 MIPS, 1.7 GB), and the micro instance (500 MIPS, 613 MB). The two kinds of SN are the HP ProLiant ML110 G4 (2 cores × 1860 MHz, 4 GB) and the HP ProLiant ML110 G5 (2 cores × 2660 MHz, 4 GB). The relationship between power consumption and CPU utilization for these servers is given in Refs. [10, 11]. In the simulation, we set the possible CPU frequency ratio set F = {0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0} for every type of SN.
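
As context for how such published measurements are typically used in simulation, the following Python sketch builds a power model by linear interpolation between measured utilization points. The interpolation approach is an assumption about the modeling, and the numeric values shown are placeholders rather than the figures from Refs. [10, 11].

```python
from bisect import bisect_right

def make_power_model(points):
    """points: list of (cpu_utilization, watts) pairs sorted by utilization,
    e.g., measured at 0%, 10%, ..., 100% load as in SPECpower reports."""
    utils = [u for u, _ in points]
    watts = [w for _, w in points]

    def power(u: float) -> float:
        if u <= utils[0]:
            return watts[0]
        if u >= utils[-1]:
            return watts[-1]
        i = bisect_right(utils, u)
        # Linear interpolation between the two surrounding measurement points
        frac = (u - utils[i - 1]) / (utils[i] - utils[i - 1])
        return watts[i - 1] + frac * (watts[i] - watts[i - 1])

    return power

# Placeholder measurements (NOT the published SPECpower numbers)
example_power = make_power_model([(0.0, 86.0), (0.5, 101.0), (1.0, 117.0)])
```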

The workload data come from CoMon traces and are provided with the CloudSim package by Beloglazov and colleagues [2, 8]. The traces record the CPU utilization of more than 1000 VMs, gathered synchronously once every 5 min. The characteristics of the data for each day are shown in Table 1 [2]. The synchronism and coarse time granularity of the workload traces impose restrictions on evaluating RRDPM's performance.

Table 1.

Workload Data Characteristics (CPU Utilization) [2].

Date         Number of VMs   Mean (%)   SD (%)   Median (%)
03/03/2011   1052            12.31      17.09    6
06/03/2011   898             11.44      16.83    5
09/03/2011   1061            10.70      15.57    4
22/03/2011   1516            9.26       12.78    5
25/03/2011   1078            10.56      14.14    6
03/04/2011   1463            12.39      16.55    6
09/04/2011   1358            11.12      15.09    6
11/04/2011   1233            11.56      15.07    6
12/04/2011   1054            11.54      15.15    6
20/04/2011   1033            10.43      15.21    4

In the simulation, we select the optimal CPU utilization as 0.7 according to Srikantaiah et al. [12]. It is obvious that the higher the maximum and minimum CPU utilization thresholds are, the smaller the total power consumption of the data center and the lower the application performance become. Here we set the maximum CPU utilization threshold to 0.9 and the minimum CPU utilization threshold to 0.3.

We use the average percentage of SLA violation time relative to active time as the metric to evaluate the level of SLA violations caused by the system.
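
Read this way, and purely as an illustrative formalization (the article does not state the formula explicitly), the metric over N SNs can be written as

$$\mathrm{SLAV} = \frac{1}{N} \sum_{j=1}^{N} \frac{T_{\mathrm{viol},j}}{T_{\mathrm{active},j}} \times 100\%,$$

where Tviol,j is the time during which SNj experienced an SLA violation (for example, the demanded CPU capacity exceeded the capacity actually supplied) and Tactive,j is the total time SNj was active.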

In Ref. [2], the authors provided several methods to detect overloaded SNs and to select the VMs to be migrated from them. Because these methods have approximately the same performance and are all centralized, we select only one, IQR_MC_1.5 (overloaded SNs are detected using the interquartile range method, and VMs to migrate are selected using the maximum correlation policy), as a reference for comparison with RRDPM. Another comparison reference is the DVFS method, which relies on CPU frequency adaptation alone, without VM migration, to save power.

5.2 Simulation Results and Analysis

Using the workload data described in Table 1, we simulated the three methods (RRDPM, IQR_MC_1.5, and DVFS) and show the results in Table 2. Without VM migration, every SN under the DVFS method carries a lighter workload; thus, the SLA violation of the DVFS method is zero for every daily trace (omitted from Table 2) and its energy consumption is maximal, serving as a benchmark for evaluating energy-efficient algorithms. Compared with the DVFS method, RRDPM saves approximately 75% of the energy consumption at the cost of approximately 3% SLA violation time. Relative to IQR_MC_1.5, RRDPM reduces the SLA violation time by about 55% and the number of VM migrations by about 90%, at the cost of about 50% more energy consumption.

There are several reasons why this comparison is conservative. First, the 5-min scheduling period of the workload trace, instead of a scheduling period of several seconds, makes RRDPM spend more time reaching the optimal resource utilizations, whereas IQR_MC_1.5 reaches its optimal state within a few seconds at the beginning of every scheduling period. Second, the RRDPM scheme could also use adaptive utilization thresholds to improve its performance.

The relationship between energy consumption and simulation time further shows the difference between the distributed RRDPM and the centralized IQR_MC_1.5 (Figure 1). In Figure 1, the curve of RRDPM has a steeper slope at the beginning of the simulation and gradually becomes linear, whereas the curve of IQR_MC_1.5 is a straight line throughout the simulation. This is because the workload is very light, with an average CPU utilization of 12.31%, which causes RRDPM to waste more idle power in the period between the start of the simulation and the time the optimal resource utilization is reached.

Table 2.

Simulation Results.

             RRDPM                                          IQR_MC_1.5                                     DVFS
Date         Energy (kW-h)  SLA violation (%)  VM migrations  Energy (kW-h)  SLA violation (%)  VM migrations  Energy (kW-h)
03/03/2011   209.29         2.25               2454           144.76         5.59               25,774         786.58
06/03/2011   149.85         2.93               2392           108.37         5.62               20,379         620.68
09/03/2011   188.92         2.51               3110           124.57         6.07               26,077         697.74
22/03/2011   242.75         2.55               3820           149.36         6.05               33,169         999.38
25/03/2011   197.54         2.92               2624           129.51         5.67               24,963         766.65
03/04/2011   277.78         2.72               3287           199.68         6.31               39,709         1074.28
09/04/2011   243.95         2.16               3266           155.95         5.97               31,485         921.22
11/04/2011   223.99         2.54               2953           151.67         5.93               30,427         880.12
12/04/2011   194.10         2.72               2693           133.54         5.92               26,129         756.05
20/04/2011   176.93         3.47               2589           110.58         6.56               29,122         688.48
Figure 1. Energy Consumption of the Data Center.

To further demonstrate the RRDPM scheme’s effectiveness, Figure 2 depicts the CPU utilization distribution of all active SNs for the March 3, 2011, workload. The figure shows that most of the active SNs’ CPU utilizations are below 20% at the beginning of the simulation; after 25,000 s, the utilization of most active SNs increased to approximately 50%. The average CPU utilization and its standard deviation of active SNs are shown in Figure 3.

Figure 2. CPU Utilization Distribution of All Active SNs.

Figure 3. Statistical Properties of Active SNs' CPU Utilization.

6 Conclusions

Power provision is becoming the most important constraint on cloud computing development. Improving resource utilization and reducing idle power consumption are the fundamental issues in power optimization and management. The proposed RRDPM elucidated in this article synthesizes the DVFS, power-on–power-off, and VM migration mechanisms to dynamically optimize the power consumption of a cloud data center. Excessive resource utilization induces poor service performance, whereas very low resource utilization results in too much idle power consumption; RRDPM therefore uses resource demand prediction to decide when to request a power state transition. Predicting resource demand every second makes it easy to detect abrupt workload changes and to respond promptly to workload fluctuation. Simulation results show that RRDPM can effectively decrease the power consumption of a data center with only a tiny reduction in performance. Compared with other methods, the significant advantage of RRDPM is that it is a distributed, asynchronous method with low computation overhead. When the initial average resource utilization of the SNs is very low, RRDPM requires more time to reach the optimal state; if convergence were shortened by applying a centralized control method in the initial phase of RRDPM, the idle power consumption could be further decreased. This is the research plan set forth for future studies.


Corresponding author: Jianxiang Li, Department of Computer Science and Technology, Tsinghua University, Fit 4-106, Beijing 100084, China, e-mail:

Bibliography

[1] S. Albers, Energy-efficient algorithms, Commun. ACM 53 (2010), 86–96. doi:10.1145/1735223.1735245.

[2] A. Beloglazov and R. Buyya, Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers, Concurrency Comput. Pract. Exp. 24 (2012), 1397–1420. doi:10.1002/cpe.1867.

[3] R. Brown, E. Masanet, B. Nordman, B. Tschudi, A. Shehabi, J. Stanley, et al., Report to Congress on Server and Data Center Energy Efficiency: Public Law 109-431, Lawrence Berkeley National Laboratory, Berkeley, CA, 2008.

[4] R. N. Calheiros, R. Ranjan, A. Beloglazov, C. A. F. De Rose and R. Buyya, CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms, Software Pract. Exp. 41 (2011), 23–50. doi:10.1002/spe.995.

[5] Y. Chen, A. Das and W. Qin, Managing server energy and operational costs in hosting centers, ACM SIGMETRICS Performance Evaluation Review 33 (2005), 303–314. doi:10.1145/1071690.1064253.

[6] M. Elnozahy, M. Kistler and R. Rajamony, Energy-efficient server clusters, in: Proceedings of the Second Workshop on Power Aware Computing Systems, Lecture Notes in Computer Science, vol. 2325, pp. 179–197, Springer, Berlin, Heidelberg, 2003.

[7] Z. Liu, Greening geographical load balancing, in: Proceedings of the ACM SIGMETRICS Joint International Conference on Measurement and Modeling of Computer Systems, pp. 233–244, 2011. doi:10.1145/1993744.1993767.

[8] K. S. Park and V. S. Pai, CoMon: a mostly-scalable monitoring system for PlanetLab, ACM SIGOPS Oper. Syst. Rev. 40 (2006), 65–74. doi:10.1145/1113361.1113374.

[9] E. Pinheiro, R. Bianchini, E. V. Carrera and T. Heath, Load Balancing and Unbalancing for Power and Performance in Cluster-Based Systems, Technical Report DCS-TR-440, 2001.

[10] SPECpower_ssj2008 (2009). http://www.spec.org/power_ssj2008/results/res2011q1/power_ssj2008-20110127-00342.html.

[11] SPECpower_ssj2008 (2009). http://www.spec.org/power_ssj2008/results/res2011q1/power_ssj2008-20110124-00339.html.

[12] S. Srikantaiah, A. Kansal and F. Zhao, Energy aware consolidation for cloud computing, in: Proceedings of the Conference on Power Aware Computing and Systems, vol. 10, USENIX Association, 2008.

[13] R. Urgaonkar, Optimal power cost management using stored energy in data centers, in: Proceedings of the ACM SIGMETRICS Joint International Conference on Measurement and Modeling of Computer Systems, pp. 221–232, 2011. doi:10.1145/1993744.1993766.

Received: 2013-3-21
Published Online: 2013-04-24
Published in Print: 2013-12-01

©2013 by Walter de Gruyter Berlin Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
