Research Papers

Expanded Assessment of a Practical Thermally Aware Energy-Optimized Load Placement Strategy for Open-Aisle, Air-Cooled Data Centers

Author and Article Information
Dustin W. Demetriou

Research Assistant
Department of Mechanical and Aerospace Engineering,
e-mail: dustin.demetriou@gmail.com

H. Ezzat Khalifa

Fellow ASME
Professor
Mechanical and Aerospace Engineering,
e-mail: hekhalif@syr.edu

Syracuse University,
Syracuse, NY 13244

ψT is the ratio of the perforated tile airflow to the required rack airflow. The perforated tile airflow is the cold air supplied through the perforated tiles in the cold aisle; it is not, however, the cold air actually ingested by the IT equipment. The ingested cold air is typically less because of unintentional "spilling" of cold air out of the cold aisle and/or short-circuiting of cold air back to the CRAH units [27].
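For concreteness, the footnote's definition can be written out as below. The symbol names are illustrative assumptions, not notation taken from the paper.

```latex
% Illustrative restatement of the footnote's definition of \psi_T.
% Symbol names are assumptions:
%   \dot V_{tile}     -- cold airflow delivered through the perforated tiles
%   \dot V_{rack}     -- airflow required by the rack (IT equipment fans)
%   \dot V_{ingested} -- cold tile air actually ingested by the IT equipment
\psi_T = \frac{\dot V_{\mathrm{tile}}}{\dot V_{\mathrm{rack}}},
\qquad
\dot V_{\mathrm{ingested}} \le \dot V_{\mathrm{tile}}
\quad \text{(reduced by cold-aisle spillage and short-circuiting to the CRAH units)}
```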

Contributed by the Electronic and Photonic Packaging Division of ASME for publication in the Journal of Electronic Packaging. Manuscript received April 26, 2012; final manuscript received July 6, 2013; published online July 24, 2013. Assoc. Editor: Saurabh Shrivastava.

J. Electron. Packag. 135(3), 030907 (Jul 24, 2013) (7 pages); Paper No: EP-12-1049; doi: 10.1115/1.4024945. History: Received April 26, 2012; Revised July 6, 2013

This paper expands on the work of Demetriou and Khalifa (2013, "Thermally Aware, Energy-Based Load Placement in Open-Aisle, Air-Cooled Data Centers," ASME J. Electron. Packag., 135(3), p. 030906), which investigated practical IT load placement options in open-aisle, air-cooled data centers. That study found that a robust approach is to use real-time temperature measurements at the rack inlets and to remove IT load from the servers with the warmest inlet temperatures. By holistically optimizing the load placement strategy together with the cooling infrastructure over a range of data center IT utilization levels, the present study investigates (i) the effect of ambient temperature on data center operation, (ii) the consolidation of servers by shutting them off completely, (iii) a strategy complementary to those of Demetriou and Khalifa in which IT load is added first to the servers with the coldest inlet temperatures, and (iv) the development of load placement rules via either static allocation (i.e., during data center benchmarking) or dynamic allocation (using real-time data from the current thermal environment). In all of these case studies, a key finding is that holistic optimization of the data center and its cooling infrastructure yields significant savings in cooling power by reducing the CRAH airflow rate. In many cases, these savings exceed those obtained by supplying higher-temperature chilled water from the refrigeration units. Therefore, the path to the industry's goal of higher IT equipment inlet temperatures for improved energy efficiency should combine reduced airflow rates with increased supply air temperatures, rather than relying on higher CRAH supply air temperatures alone.
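As an illustration of the inlet-temperature-based placement heuristic summarized above, the following is a minimal sketch, not the authors' implementation; the data structures, function names, and example values are assumptions introduced for illustration only.

```python
# Minimal sketch of an inlet-temperature-based load placement heuristic:
# shed load from the warmest-inlet servers first, add load to the
# coldest-inlet servers first. Illustrative only; not the paper's code.

from dataclasses import dataclass


@dataclass
class Server:
    name: str
    inlet_temp_c: float   # measured inlet temperature, deg C
    load_fraction: float  # current IT utilization, 0.0 (off/idle) to 1.0


def shed_load(servers, amount):
    """Remove 'amount' of total load, starting with the warmest inlets."""
    for s in sorted(servers, key=lambda s: s.inlet_temp_c, reverse=True):
        if amount <= 0:
            break
        shed = min(s.load_fraction, amount)
        s.load_fraction -= shed
        amount -= shed
    return amount  # any load that could not be shed


def add_load(servers, amount):
    """Add 'amount' of total load, starting with the coldest inlets."""
    for s in sorted(servers, key=lambda s: s.inlet_temp_c):
        if amount <= 0:
            break
        added = min(1.0 - s.load_fraction, amount)
        s.load_fraction += added
        amount -= added
    return amount  # any load that could not be placed


if __name__ == "__main__":
    racks = [Server("A1", 24.5, 0.8), Server("A2", 21.0, 0.8),
             Server("B1", 27.3, 0.8), Server("B2", 19.8, 0.8)]
    shed_load(racks, 0.6)   # consolidate: unload the warmest servers first
    add_load(racks, 0.3)    # grow: load the coldest servers first
    for s in racks:
        print(f"{s.name}: inlet {s.inlet_temp_c:.1f} C, load {s.load_fraction:.2f}")
```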

Copyright © 2013 by ASME

References

United States Environmental Protection Agency, 2007, “Report to Congress on Data Center Energy Efficiency,” Public Law 109-431.
Salim, M., and Tozer, R., 2010, “Data Centers’ Energy Auditing and Benchmarking-Progress Update,” ASHRAE Trans., 116(1), pp. 109–117.
Koomey, J., 2011, “Growth in Data Center Electricity Use 2005 to 2010,” A report by Analytical Press, completed at the request of The New York Times.
ASHRAE, 2011, “Thermal Guidelines for Data Processing Environments—Expanded Data Center Classes and Usage Guidance,” American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc, Atlanta, GA.
Lui, Y., 2010, “Waterside and Airside Economizers Design Considerations for Data Center Facilities,” ASHRAE Trans., 116(1), pp. 98–108.
Schmidt, R., 2004, “Thermal Profile of a High-Density Data Center—Methodology to Thermally Characterize a Data Center,” ASHRAE Trans., 110(2), pp. 635–642.
Schmidt, R., Cruz, E., and Iyengar, M., 2005, “Challenges of Data Center Thermal Management,” IBM J. Res. Dev., 49, pp. 709–723. [CrossRef]
ASHRAE, 2008, “High Density Data Centers: Case Studies and Best Practices,” American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc, Atlanta, GA.
Shrivastava, S., Iyengar, M., Sammakia, B., Schmidt, R., and VanGilder, J., 2006, “Experimental-Numerical Comparison for a High Density Data Center: Hot Spot Heat Fluxes in Excess of 500 W/ft2,” Tenth Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM '06), San Diego, CA, May 30–June 2, pp. 402–411. [CrossRef]
Patel, C., Bash, C., Belady, L., Stahl, L., and Sullivan, D., 2001, “Computational Fluid Dynamics Modeling of High Compute Density Data Centers to Assure System Inlet Air Specifications,” Pacific Rim/ASME International Electronic Packaging Technical Conference and Exhibition (IPACK'01), Kauai, HI, July 8–13, ASME Paper No. IPACK2001-15622.
Patel, C., Sharma, C., Bash, C., and Beitelmal, A., 2002, “Thermal Considerations in Cooling Large Scale High Compute Density Data Centers,” Eighth Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM 2002), San Diego, CA, May 30–June 1, pp. 767–776. [CrossRef]
Abdelmaksoud, W., Dang, T. Q., Khalifa, H. E., Elhadidi, B., Schmidt, R., and Iyengar, M., 2010, “Experimental and Computational Study of Perforated Floor Tile in Data Centers,” 12th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITherm), Las Vegas, NV, June 2–5. [CrossRef]
Abdelmaksoud, W., Dang, T. Q., Khalifa, H. E., Elhadidi, B., Schmidt, R., and Iyengar, M., 2010, “Improved CFD Modeling of a Small Data Center Test Cell,” 12th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITherm), Las Vegas, NV, June 2–5. [CrossRef]
Iyengar, M., and Schmidt, R., 2007, “Analytical Modeling of Thermodynamic Characterization of Data Center Cooling Systems,” ASME J. Electron. Packag., 131(2), p. 021009. [CrossRef]
Hellmer, B., 2010, “Consumption Analysis of Telco and Data Center Cooling and Humidification Options,” ASHRAE Trans., 116(1), pp. 118–133.
Demetriou, D. W., Khalifa, H. E., Iyengar, M., and Schmidt, R., 2011, “Development and Experimental Validation of a Thermo-Hydraulic Model for Data Centers,” HVAC&R Res., 17(4), pp. 540–555. [CrossRef]
Walsh, E., Breen, T., Punch, J., Shah, A., and Bash, C., 2010, “From Chip to Cooling Tower Data Center Modeling: Part I Influence of Server Inlet Temperature and Temperature Rise Across Cabinet,” 12th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITherm), Las Vegas, NV, June 2–5. [CrossRef]
Walsh, E., Breen, T., Punch, J., Shah, A., and Bash, C., 2010, “From Chip to Cooling Tower Data Center Modeling: Part II Influence of Chip Temperature Control Philosophy,” 12th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITherm), Las Vegas, NV, June 2–5. [CrossRef]
Sharma, R., Bash, C., Patel, C., Friedrich, R., and Chase, J., 2005, “Balance of Power: Dynamic Thermal Management for Internet Data Centers,” IEEE Internet Comput., 9(1), pp. 42–49. [CrossRef]
Moore, J., Chase, J., Ranganathan, P., and Sharma, R., 2005, “Making Scheduling ‘Cool’: Temperature-Aware Workload Placement in Data Centers,” Proceedings of USENIX Annual Technical Conference, Anaheim, CA, April 10–15, pp. 61–75.
Tang, Q., Gupta, S. K. S., and Varsamopoulos, G., 2007, “Thermal-Aware Task Scheduling for Data Centers Through Minimizing Heat Recirculation,” IEEE International Conference on Cluster Computing, Austin, TX, September 17–21, pp. 129–138. [CrossRef]
Tang, Q., 2009, “Thermally-Aware Scheduling in Environmentally Coupled Cyber-Physical Distributed Systems,” Ph.D. thesis, Arizona State University, Tempe, AZ.
Moore, J., 2005, “Automated Cost-Aware Data Center Management,” Ph.D. thesis, Duke University, Durham, NC.
Tang, Q., Gupta, S., and Varsamopoulos, G., 2008, “Energy-Efficient Thermal Aware Task Scheduling for Homogenous High-Performance Computing Data Centers: A Cyber-Physical Approach,” IEEE Trans. Parallel Distrib. Syst., 19(11), pp. 1458–1472. [CrossRef]
Mukherjee, T., Banerjee, A., Varsamopoulos, G., and Gupta, S., 2009, “Spatio-Temporal Thermal-Aware Job Scheduling to Minimize Energy Consumption in Virtualized Heterogeneous Data Centers,” Comput. Netw., 53(17), pp. 2888–2904. [CrossRef]
Demetriou, D. W., and Khalifa, H. E., 2013, “Thermally Aware, Energy-Based Load Placement in Open-Aisle, Air-Cooled Data Centers,” ASME J. Electron. Packag., 135(3), p. 030906. [CrossRef]
Khalifa, H. E., and Demetriou, D. W., 2011, “Energy Optimization of Air-Cooled Data Centers,” ASME J. Therm. Sci. Eng. Appl., 2, p. 041005. [CrossRef]

Figures

Fig. 1  Normalized cooling power for thermally aware, energy-based load placement

Fig. 2  Normalized cooling power at different ambient temperatures for thermally aware, energy-based load placement

Fig. 3  Optimum ψT at different ambient temperatures for thermally aware, energy-based load placement

Fig. 4  Idle versus shut-off operation of virtualized IT at (a) 75% useful IT and (b) 50% useful IT

Fig. 5  Strategy for increasing IT load by turning on the coldest chassis at (a) 75% useful IT and (b) 50% useful IT

Fig. 6  Comparing the cooling energy consumption of dynamic versus static IT load placement

Fig. 7  Comparison of load placement arrangements for dynamic versus static IT load placement
