Review Article

A Brief Overview of Recent Developments in Thermal Management in Data Centers

Sami Alkharabsheh

Mechanical Engineering Department,
Binghamton University,
State University of New York,
Binghamton, NY 13902
e-mail: salkhar1@binghamton.edu

John Fernandes, Betsegaw Gebrehiwot, Dereje Agonafer

Mechanical and Aerospace
Engineering Department,
University of Texas at Arlington,
Arlington, TX 76019

Kanad Ghose

Computer Science Department,
Binghamton University,
State University of New York,
Binghamton, NY 13902

Alfonso Ortega

Department of Mechanical Engineering,
Villanova University,
Villanova, PA 19085

Yogendra Joshi

The George W. Woodruff
School of Mechanical Engineering,
Georgia Institute of Technology,
Atlanta, GA 30332

Bahgat Sammakia

Mechanical Engineering Department,
Binghamton University,
State University of New York,
Binghamton, NY 13902

Contributed by the Electronic and Photonic Packaging Division of ASME for publication in the JOURNAL OF ELECTRONIC PACKAGING. Manuscript received December 23, 2014; final manuscript received August 16, 2015; published online September 10, 2015. Editor: Y. C. Lee.

J. Electron. Packag. 137(4), 040801 (Sep 10, 2015) (19 pages); Paper No: EP-14-1117; doi: 10.1115/1.4031326. History: Received December 23, 2014; Revised August 16, 2015

Data centers are mission critical facilities that typically contain thousands of data processing devices, such as servers, switches, and routers. In recent years, there has been a boom in data center usage, causing their energy consumption to grow by about 10% per year. The heat generated in these data centers must be removed to prevent high temperatures from degrading the reliability of the equipment, and this cooling consumes additional energy. Therefore, precise and reliable thermal management of the data center environment is critical. This paper focuses on recent advancements in data center modeling and energy optimization. A number of currently available and developmental thermal management technologies for data centers are broadly reviewed. Computational fluid dynamics (CFD) for raised-floor data centers, experimental measurements, containment systems, economizer cooling, hybrid cooling, and device level cooling are all thoroughly reviewed. The paper concludes with a summary and presents areas of potential future research based on the holistic integration of workload prediction and allocation with thermal management using smart control systems.

Computer systems are complex to design, manage, and maintain, and they require a special environment in which to operate. High-performance data centers consume a large amount of power, and the equipment must be cooled to avoid overheating. The information technology (IT) equipment housed in these facilities is expensive and often requires a high level of security and confidentiality.

The power consumption of data centers has been increasing for more than a decade [1]. This growth is driven by enormous developments in revolutionary technologies and applications (e-commerce, Big Data, and cloud computing) and by other growing online services, such as banking and weather forecasting. For example, cloud computing worldwide is expected to grow at a 36% compound annual growth rate (CAGR) through the year 2016, reaching a market size of $19.5B [2].

Power consumption and future data center trends were first highlighted in the “Report to Congress on Server and Data Center Energy Efficiency,” prepared by the United States Environmental Protection Agency (EPA) in 2007 [1]. A later report by Koomey in 2010 [3] reveals that electricity use in data centers during 2010 accounted for about 1.1–1.5% of all electricity used globally and 1.7–2.2% of that used within the United States. The Datacenter Dynamics global census [4], conducted from 2011 to 2012, reports that power requirements grew by 63% globally, from 24 GW to 38 GW, over the course of the year. Subsequent forecasting studies expect power consumption in data centers to continue increasing in the coming years. According to a 2013 report by GE Capital, the total data center capacity of the U.S. is anticipated to increase at a 3.6% CAGR between 2012 and 2016 [5]. Another forecasting study by Datacenter Dynamics in 2014 [6] shows that North America alone accounts for 10 × 10^6 m² of data centers, making it the world's largest data center market, with up to 3.5% growth per year. The report forecasts that this growth will continue at about the same rate until 2016, with a large part of it fueled by new colocation facility construction.

Despite the extensive focus on data center energy efficiency, data centers still exhibit high power usage effectiveness (PUE) values. A recent survey reveals that more than 55% of the sampled facilities had a PUE greater than 1.8 [7]. The sample contains 1,100 global data center end users, the majority of which are located in the U.S., as shown in Fig. 1. These results may not hold in other regions, where low PUE values are usually a function of climate through the use of free cooling. Notably, internet data centers do not require the same level of reliability as mission critical facilities such as banks, government data centers, etc.
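
As a point of reference, PUE is simply the ratio of total facility power to IT power; a minimal sketch of the calculation (the example numbers are illustrative):

```python
def pue(total_facility_power_kw: float, it_power_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power.

    A PUE of 1.8 means that for every watt delivered to the IT equipment,
    an additional 0.8 W is spent on cooling, power delivery losses, etc.
    """
    return total_facility_power_kw / it_power_kw

# Example: a 1 MW IT load drawing 1.8 MW at the utility meter
print(pue(1800.0, 1000.0))  # -> 1.8
```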

Data centers are considered mission critical facilities. In a mission critical system, any failure in operation, whether due to equipment, software, process, or other causes, results in the failure of business operations [8]. Maintaining a mission critical system requires satisfying several elements: continuous operation of the system when one or more units fail (redundancy), the ability of the system to continue operating even if a single inappropriate operation occurs (fault tolerance), and the ability to provide unlimited capacity if required [9]. Data center power outages are not rare. According to a survey conducted by Emerson Network Power and the Ponemon Institute in 2010 [10], 95% of the respondents experienced unplanned outages. An outage can cost a business an average of $300,000 in just 1 hr ($5000 per minute). Respondents averaged 2.48 complete data center shutdowns over a two-year period, with an average duration of 107 min. The root cause analysis shows that heat-related and computer room air conditioner (CRAC) failures are among the top seven causes of unplanned outages.

Data centers are the medium through which heat is removed from the IT equipment. Within them, heat is generated primarily at the chip level and must be dissipated and transferred to the environment outside the data center. It is typically transferred through multiscale subsystems, from the chip level to the server level, the rack level, and then the data center room level, as shown in Fig. 2. Cooling, however, is initiated at the data center level and is then directed back through the multiscale system to remove heat from the chip level. This thermal management scheme is inefficient in a multitude of ways. A far better approach is to use thermal technologies that address challenges at all levels from chip to system, which can significantly improve overall energy efficiency [11].

Data centers share many common characteristics in their cooling system configuration, as shown for the typical data center in Fig. 3. A raised-floor forms a plenum for the cold air supplied from the cooling units. Server racks are used to house the IT equipment and provide the necessary structure for cooling through the front and rear perforated doors. The racks are arranged in a cold aisle–hot aisle configuration. This configuration separates the supplied cold air stream from the exhaust warm air stream. The cooling is provided by air conditioning units designed specifically for computer rooms. Two types of air conditioning units are commonly used. One type, called computer room air handler (CRAH), is a chilled water unit, which uses chilled water from a central chiller plant to cool the data center's cooling air. The other type, called CRAC, is a direct expansion vapor compression unit, which cools the air using a refrigerant [12]. Typically, CRAC units are used in smaller data centers because they are generally less efficient than CRAH units [12]. CRAC/CRAH units both contain blowers to provide the cooling air with the required momentum to overcome flow obstructions and reach the IT equipment inlet. Internal fans of different sizes are provided for the IT equipment depending on its size and operational power. These fans, which are considered part of the cooling infrastructure, draw the cooling air from the room to help cool the internal components of the IT equipment.

There are many inefficiencies associated with the cooling scheme in data centers, and air mixing is one of the primary contributors. The cold aisle–hot aisle configuration does not completely isolate the cold air streams in the cold aisle from the hot air streams at the rack exhaust. This happens for two reasons: hot air recirculation and cold air bypass. The first, hot air recirculation over the tops and edges of the racks, occurs when hot air enters the cold aisle over the top of the racks and around the end of the cold aisle closest to the cooling units. The recirculating warm air mixes with the cold air and increases the inlet temperature in a manner that is difficult to predict. Air mixing therefore affects the reliability of the IT equipment, and avoiding it would require a complicated control system for the cooling equipment. The second, cold air bypass, occurs when the cold air from the perforated tiles overshoots the racks and returns to the cooling unit at a lower temperature. This can also occur due to floor leakage between the cold aisle and the cooling unit. The resulting low cooling unit extract temperature reduces the efficiency of the cooling system by narrowing the temperature difference between the extract and the supply. Flow recirculation and bypass occur due to the pressure distribution in the plenum and the location of the cooling units in proximity to the cold aisle. As cold air flows from the CRAC/CRAH toward the perforated tiles, most of it enters the cold aisle through the tiles furthest from the cooling unit [13]. Cold air bypass occurs when the excess cold air goes to the end of the cold aisle. Thus, racks at the front end of the cold aisle are starved of cold air and instead draw hot air over their tops and edges. The supply heat index (SHI) and return heat index (RHI) are two dimensionless numbers used to quantify hot air recirculation and cold air bypass, respectively [14]. Hot air recirculation and cold air bypass are shown to cause an approximately 17 °C increase in temperature between the CRAC outlet and certain locations in the cold aisle for a 63 kW single-cold aisle research data center [15].
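
For clarity, SHI and RHI are commonly computed from the rack inlet, rack outlet, and cooling unit supply temperatures, following the definitions in [14]; the sketch below assumes equal mass flow through each rack so that flow weighting cancels, and the temperature values are illustrative:

```python
def supply_heat_index(t_in, t_out, t_supply):
    """SHI = enthalpy rise of the cold air before the racks divided by the
    enthalpy rise at the rack exhaust.

    t_in, t_out: rack inlet and outlet temperatures (deg C), one value per rack.
    t_supply: CRAC/CRAH supply temperature (deg C).
    Assumes equal mass flow through each rack, so flow weighting cancels.
    """
    num = sum(ti - t_supply for ti in t_in)
    den = sum(to - t_supply for to in t_out)
    return num / den

def return_heat_index(t_in, t_out, t_supply):
    """RHI = 1 - SHI; approaches 1 when little cold air bypasses the racks."""
    return 1.0 - supply_heat_index(t_in, t_out, t_supply)

# Illustrative values: 15 C supply, mild recirculation at two of four racks
t_in = [16.0, 17.5, 20.0, 22.0]
t_out = [28.0, 29.5, 32.0, 34.0]
print(supply_heat_index(t_in, t_out, 15.0), return_heat_index(t_in, t_out, 15.0))
```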

This literature review provides an overview of previous work and contributions in data center numerical modeling. The first part of the review introduces different approaches to modeling data centers numerically, with emphasis on CFD modeling of data centers. Experimental studies are reviewed in the second part, followed by a review of recent technologies in the thermal management of data centers. Attempts at modeling data centers dynamically are then reviewed. Section 5 then sheds light on potential future research and challenges.

Most of the challenges inherent in the thermal management of data centers stem from inefficient and unpredictable air flow patterns. Different modeling methods are used to simulate air transport in data centers [16]. Among the most common approaches to numerically modeling data centers are reduced order models (ROM), flow network modeling (FNM), and CFD.

ROM, also known as compact models, are defined as models “that use a number of internal states to produce predefined output data, given a prescribed set of inputs and parameters” [16]. The compact modeling term does not imply lumped modeling, in which the outlet results are predicted from given inlet conditions using a black box model; the compact model in ROM includes the internal states in order to predict the output [16,17]. In general, ROM are used in data center simulations for their low computational cost and acceptable accuracy. Rambo and Joshi [18,19] used proper orthogonal decomposition (POD) to create approximate solutions of steady, multiparameter Reynolds-averaged Navier–Stokes simulations. The model is developed by considering a ten-server rack. The authors predict a reduction of two to three orders of magnitude (10^2–10^3) in model size, with approximately 90% accuracy relative to the true solutions. Somani and Joshi [20] presented an algorithm named ambient intelligence-based load management, which improves the data center heat dissipation capacity. The algorithm is trained using the inlet temperatures of racks and distributes workloads according to the data center's thermal environment. The numerical results show a 50% enhancement in the heat dissipation capacity. Samadiani et al. [21] used a POD-based reduced order thermal model of an open design to create an energy efficient design for future air-cooled data centers. Ghosh and Joshi [22] have recently used POD-based reduced order modeling to predict transient air temperatures in an air-cooled data center. The POD-based model is tested for its ability to predict real-time data and simplify the process followed to implement dynamic control in data centers. The major assumptions in their model are the use of rack level experimental data and a single-parameter (time) model.
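
To illustrate the machinery behind such POD-based reduced order models, the dominant modes of a set of temperature-field snapshots can be extracted with a singular value decomposition; the sketch below uses a synthetic snapshot matrix and omits the parameter interpolation step that a full ROM would require:

```python
import numpy as np

# Synthetic snapshot matrix: each column is a flattened temperature field
# sampled at a different operating condition (e.g., CRAC flow rate).
rng = np.random.default_rng(0)
n_points, n_snapshots = 5000, 40
snapshots = rng.normal(25.0, 2.0, (n_points, n_snapshots))

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)

# Keep enough modes to capture ~99% of the snapshot energy
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99)) + 1
modes = U[:, :k]                      # POD basis

# A reduced order prediction is the mean field plus a linear combination of
# the retained modes; in practice the coefficients are interpolated over the
# operating parameters (not shown here).
coeffs = modes.T @ (snapshots[:, [0]] - mean_field)
reconstruction = mean_field + modes @ coeffs
```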

FNM is a technique for analyzing the flow distribution across a range of components in a system connected through flow paths and represented as a network; it is used to determine overall system characteristics. Compared to CFD, it is a relatively simplified method that can produce accurate predictions within a short computation time. FNM has widespread applications in the electronics industry for the design of cooling solutions and efficient packaging for components at a system level. Introducing FNM into the product development cycle permits the expeditious testing of shortlisted designs to produce a focused list of alternatives, which in turn enables the application of CFD for detailed analysis [23]. FNM has been employed in cooling applications to study the effect of bypass on plate fin heat sink performance [24]; this analysis is accurate, simple, and rapid. At a much higher level, Steinbrecher et al. [25] applied the technique to study flow distribution for the effective design of an air-cooled server. Modifications, such as the introduction of in-flow resistances and additional side venting, are implemented to direct airflow through specific components for reliable operation.

A commercial FNM code [26] is employed in the efficient design of a distributed-flow cold plate by ensuring uniform distribution across parallel passages [27]. In the absence of commercially available tools, analytical methods such as the Hardy Cross method for balancing heads can be employed for relatively simple studies [28]. Fernandes et al. [29] demonstrated an application of this technique for improved flow distribution across parallel sections within a cold plate, achieved by varying channel thicknesses without expensive and time-intensive optimization tools. Similarly, Radmehr and Patankar [30] investigated the viability of FNM for designing a liquid-cooling system that can ensure uniform flow distribution across three microchannel heat sinks. Ellsworth [31] demonstrated this technique at the rack level for manifold sizing and pump operating speeds, with the aim of minimizing pump power consumption in an IBM Power 775 Supercomputer for a variety of configurations.
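
As a concrete illustration of the Hardy Cross balancing mentioned above, the sketch below splits a flow between two parallel branches with quadratic head losses and iterates the loop correction until the losses balance; the resistances and flow rate are made-up values, not data from the cited studies:

```python
def hardy_cross_two_branches(q_total, r1, r2, iters=50):
    """Split q_total between two parallel branches with h = r*Q^2 losses.

    Iteratively applies the Hardy Cross loop correction
        dQ = -sum(r*Q*|Q|) / sum(2*r*|Q|)
    until the head losses in the two branches match.
    """
    q1 = 0.5 * q_total          # initial guess
    for _ in range(iters):
        q2 = q_total - q1
        # Traverse the loop: branch 1 taken positive, branch 2 negative
        num = r1 * q1 * abs(q1) - r2 * q2 * abs(q2)
        den = 2.0 * (r1 * abs(q1) + r2 * abs(q2))
        q1 += -num / den
    return q1, q_total - q1

# Example: two parallel cold plate passages, branch 1 twice as restrictive
print(hardy_cross_two_branches(q_total=2.0e-4, r1=2.0e8, r2=1.0e8))
```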

CFD models provide detailed descriptions of transport in data centers. However, these models are computationally intensive and are often oversimplified, which affects their accuracy. CFD models of data centers were first introduced in 2001. Patel et al. [32] presented a CFD model for a prototype data center, for which they used flovent 2.1 software. The authors consider modeling a data center to be no different than modeling a computer. Results from the study capture the hot spots and provide inlet temperatures for the IT equipment. In the same year, Schmidt et al. [33] compared experimental measurements through a raised-floor data center's perforated tiles with two-dimensional computational models. The objective of this study is to establish a validated CFD model that is useful as a benchmark for further parametric studies of the airflow rate distribution from the perforated tiles. Given the promising results from these two initial studies, CFD models have been used extensively to study data centers in the years that followed.

CFD Modeling of Data Centers.

With regard to classifying CFD modeling efforts in data centers, there is more than one approach that can be taken. For instance, Joshi and Kumar [16] classified data center CFD models into four categories: raised-floor plenum airflow modeling to predict perforated tile flow rates, thermal effects of rack layout and power distribution, alternative supply and return schemes, and thermal performance metrics. In their work, CFD modeling is divided based upon the element in focus: raised-floor plenum and perforated tile models, server rack models, cooling unit models, and entire data center room level models for legacy data centers. In addition, their review discusses the application of CFD to the effective design of indirect liquid-cooling solutions, which can aid in the thermal management of high power modules or devices.

Raised-Floor Plenum and Perforated Tiles Models.

Early models of data centers focus on modeling the raised floor and perforated tiles to accurately predict data center boundary conditions; the airflow distribution from the perforated tiles is also responsible for airflow nonuniformity in the cold aisle (hot air recirculation and cold air bypass). The raised floor and perforated tiles are shown in Fig. 3. Kang et al. [34] developed a method for selecting the perforated tile design that best achieves the desired flow distribution. The authors use simple flow network models in their analysis, while assuming a uniformly pressurized plenum. Karki et al. [35] provided techniques to control the airflow distribution from the perforated tiles. Their analysis uses the customized CFD package tileflow to simulate the modular data center shown in Fig. 4(a). The solution domain is restricted to the plenum. They vary the plenum height and the tile perforation ratio parametrically to understand their impact on the tile airflow rate, as shown in Figs. 4(b) and 4(c). They find that increasing the plenum depth and decreasing the tile open area increase the uniformity of the flow through the tiles. The authors also find that thin partitions (under-floor obstructions) enhance the flexibility of directing the airflow distribution. Patankar and Karki [13] used the same model to study air distribution from the perforated tiles considering additional factors. Their study finds that larger raised-floor heights give a more uniform air distribution than smaller floor heights, and that a smaller open area also makes the airflow distribution more uniform. VanGilder and Schmidt [36] extensively studied the factors needed to obtain a uniform airflow distribution from the perforated tiles by examining 240 case studies. Similar to Karki et al. [35], they use only the plenum as the solution domain and a zero pressure boundary condition to represent the above-floor region. The analysis uses the flovent CFD package. Their conclusions agree with those of Patankar and Karki [13]. Radmehr et al. [37] used the same approach to demonstrate a process for determining the floor leakage as a percentage of the total floor area and of the total cooling flow rate. The authors indicate that flow leakage in a typical data center consumes 5–15% of the available cooling flow rate and encompasses 0.2–0.35% of the floor area.
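
Under the uniformly pressurized plenum assumption used in these early tile-flow studies, the flow through each tile follows from its loss coefficient and the plenum-to-room pressure difference; the sketch below is a rough illustration, and the loss coefficients and 12 Pa pressure difference are only representative values, not data from the cited papers:

```python
import math

RHO_AIR = 1.2  # kg/m^3

def tile_flow_rate(delta_p, loss_coeff, tile_area=0.61 * 0.61):
    """Volumetric flow (m^3/s) through one perforated tile.

    delta_p: plenum-to-room pressure difference (Pa), assumed identical for
             every tile under the uniformly pressurized plenum assumption.
    loss_coeff: tile loss coefficient K referenced to the full tile area
                (larger K corresponds to a smaller open area ratio).
    """
    velocity = math.sqrt(2.0 * delta_p / (RHO_AIR * loss_coeff))
    return velocity * tile_area

# Illustrative comparison: a restrictive tile (high K) vs a very open tile (low K)
for k in (42.0, 6.0):
    q = tile_flow_rate(delta_p=12.0, loss_coeff=k)
    print(f"K={k:5.1f}  Q={q * 2118.88:6.0f} CFM")  # convert m^3/s to CFM
```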

More recent research examines the effect of tile flow on rack inlet temperatures. Predicting the flow pattern from the tiles with numerical modeling is challenging because of the jet effect associated with the perforations and the hot air entrainment at the tile level. Driven by a discrepancy of more than 5 °C between experimental and CFD results for the rack inlet and outlet temperatures, Abdelmaksoud et al. [38] investigated different modeling approaches for perforated tiles to capture air entrainment at the tile level. The authors discuss how to include mass and momentum conservation in tile and rack door models. They recommend the use of a momentum source model or a quadrants model to address this issue. Arghode et al. [39] and Arghode and Joshi [40] further refined the tile model by including the geometrical details of the perforation, such as the pore size, pore shape, and pore distribution. They report improved prediction of the flow from the tiles compared with experimental results and suggest a modification of the body-force model initially introduced by Abdelmaksoud et al. [41].

Rack Models.

Rack level models are investigated in order to more accurately predict transport in room simulations. Early work by Patel et al. [32] proposed a simple model for the rack using cuboids, power sources, and an airflow source. The average error in the inlet temperature varies between 7% and 12% at different elevations. This simple model is used extensively in studies performed during the following years, notably by Shrivastava et al. [42] and Tan et al. [43]. In a study of a 900 ft² data center cell performed by Xuanhang et al. [44], the effect of the level of modeling detail of a rack is studied by comparing CFD simulations and experimental data. The rack is modeled with different levels of complexity: (a) a black box model without airflow simulation inside the rack, (b) a detailed rack model (rack frame, doors, and mounting rails) with four server simulators modeled as black boxes, and (c) full details of the server simulators modeled within a detailed rack model. The authors find that rack level modeling detail does not have a significant influence on the room level results. The same conclusion is reached in a recent study by Zhai et al. [45], in which the authors design a rack level experiment and use velocity and temperature measurements to validate the black box model and another, more detailed model.

Other research focuses on how various elements of the rack structure impact IT equipment cooling. Very few papers focus on the effect of the door mesh perforation ratio, i.e., the ratio of open area to total area, on the cooling of IT equipment. North [46] calculated the loss coefficient for doors with perforation ratios of 40–80% based on flow bench testing. The results show that the difference in impedance between 64% and 80% perforated doors is very small and that both perforation ratios have the same effect on the airflow rate to the IT equipment. Schmidt et al. [47] compared the loss coefficient of a typical perforated rear door of a rack with that of a rear door heat exchanger system. The results show that their rear door heat exchanger design does not increase the flow impedance compared to a typical perforated rear door. The impact of a cable management arm (CMA) and rack doors on the airflow through a one rack unit server is studied by Coxe [48]. The author concludes that the rack doors and the CMA each impose a 3% airflow reduction, with a 6% cumulative airflow reduction that has a small but measurable effect on the temperatures of the system components. The impact of the CMA alone is studied by Rubenstein [49]. The author finds that the CMA contribution to the overall flow impedance of the rack system is less than 2% for a flow rate range of 100–300 CFM. Alkharabsheh et al. [50] studied the impact of all the rack structure elements on IT equipment cooling. They use compact resistance models to represent the doors, the CMA, and the internal resistance of the server, as shown in Fig. 5(a). The authors find that the rack doors and cable management systems have only a small impact on the impedance of a rack. However, they conclude that the rack structure affects the air stream inside a rack, which leads to internal recirculation and an increase in the inlet temperature, as shown in Figs. 5(b) and 5(c). They recommend using internal blockages between the sides of the rack and the servers to prevent recirculation, as shown in Fig. 5(d).
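
The compact resistance idea can be illustrated by summing the pressure drops of the elements in the flow path (front door, server internals, CMA, rear door) at a given flow; the areas and loss coefficients below are purely illustrative, not values from the cited studies:

```python
RHO_AIR = 1.2  # kg/m^3

def series_pressure_drop(flow_m3s, elements):
    """Total pressure drop (Pa) across flow resistances in series.

    elements: list of (reference_area_m2, loss_coefficient) pairs, one per
    component in the flow path. Each component sees the same volumetric flow,
    referenced to its own area.
    """
    dp = 0.0
    for area, k in elements:
        v = flow_m3s / area
        dp += k * 0.5 * RHO_AIR * v * v
    return dp

# Illustrative 1U server path: front door, server internals, CMA, rear door
path = [(0.02, 1.2), (0.02, 25.0), (0.02, 0.5), (0.02, 1.2)]
print(series_pressure_drop(0.02, path), "Pa")
```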

Rack models have also been investigated to identify the IT equipment configuration that most efficiently utilizes air flow from the tiles. This process is challenging given the nonuniformity of the tile flow rate. Radmehr et al. [51] studied the effect of a high velocity jet on the IT equipment airflow at different heights; their findings show that servers at lower elevations are the most affected. Recently, Gosh et al. [52] have studied the effect of changing the server distribution on the cooling flow rate through the rack. Their work finds that the tile cooling airflow is best utilized when the servers are powered uniformly and are clustered inside a rack.

Liquid-cooling systems are also a focus of rack level models. Schmidt et al. [47] studied the implementation of rear door heat exchangers to cool data centers. The authors indicate that the only change in the rack design includes replacing the typical perforated rear door with a heat exchanger that has impedance close to that of the original door. Almoli et al. [53] have also developed a strategy to implement a rear door heat exchanger model in CFD simulations.

CRAH/CRAC Models.

CRAC models are vital to accurately predicting airflow rate and temperature boundary conditions. However, the number of studies that address the CRAC model is limited compared with those done for other data center components. An example of a CRAH unit is shown in Fig. 3. Samadiani et al. [54] and Patankar [55] addressed the importance of modeling the internal resistance of a CRAC unit. They find that the internal resistance affects pressure drop, and consequently, the flow rate. Another study by Ibrahim et al. [56] shows the importance of modeling the CRAC unit's thermal characteristic, which essentially affects the supply temperature based on the CRAC extract temperature. It is found that neglecting the thermal characteristic curve underestimates the inlet temperatures in room level simulations. Recently, Alkharabsheh et al. [57] have studied different models to simulate the CRAC cooling coil in CFD simulations. The authors have found that the cross flow heat exchanger model provides a good representation of the cooling coil typically used in CRAC units. They also have found that the CRAC cooling coil has a significant impact on the rate of change in temperature during transient situations.
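
The cross-flow heat exchanger representation of a cooling coil can be sketched with the standard effectiveness–NTU relations; the snippet below is a sensible-heat-only approximation with illustrative flow rates and UA value, not the calibrated model of the cited study:

```python
import math

def crossflow_effectiveness(ntu, c_ratio):
    """Effectiveness of a cross-flow heat exchanger, both streams unmixed
    (standard epsilon-NTU approximation)."""
    if c_ratio == 0.0:          # e.g., an evaporating refrigerant coil
        return 1.0 - math.exp(-ntu)
    return 1.0 - math.exp(
        (1.0 / c_ratio) * ntu**0.22 * (math.exp(-c_ratio * ntu**0.78) - 1.0)
    )

def crac_supply_temperature(t_return, t_water_in, m_air, m_water,
                            ua, cp_air=1006.0, cp_water=4180.0):
    """Air temperature leaving the cooling coil (sensible-only approximation)."""
    c_air, c_water = m_air * cp_air, m_water * cp_water
    c_min, c_max = min(c_air, c_water), max(c_air, c_water)
    eff = crossflow_effectiveness(ua / c_min, c_min / c_max)
    q = eff * c_min * (t_return - t_water_in)
    return t_return - q / c_air

# Illustrative CRAH: 5 kg/s of 32 C return air, 3 kg/s of 10 C chilled water
print(crac_supply_temperature(32.0, 10.0, m_air=5.0, m_water=3.0, ua=9000.0))
```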

Room Level Models.

Room level models demonstrate the interaction between different data center components. They are commonly used to optimize the data center layouts and cooling system schemes needed to maintain the inlet temperature within the recommended limits and achieve energy efficiency. An example of a data center room is shown in Fig. 3.

Early studies focus on optimizing the layout of raised-floor data centers for energy efficiency. These studies use simple models of data center components to develop a preliminary understanding of the overall thermal system. Schmidt [58] and Schmidt and Cruz [59–62] conducted several numerical studies by varying the server heat load, cooling unit locations, and perforated tile locations, removing racks adjacent to high power racks, and reducing the IT equipment flow rate, and then observed the influence of each factor on the inlet temperatures. Patel et al. [63] simulated different data center layouts to develop a cooling approach that can be applied in any data center. A more realistic physical model is achieved in this study by simulating variable capacity air conditioning resources. Schmidt and Cruz [64] re-examined the simple models in which the tile airflow rate is assumed to be uniform using the newly developed tile airflow rate model. The authors find that a slight variation of less than 8% in tile flow rate can noticeably reduce the inlet temperatures, by up to 10 °C depending on the rack power. This indicates the importance of physics-based models for accurately predicting the airflow rate. In 2005, Schmidt et al. [65] reviewed different data center layouts and showed different cooling system schemes that can be used in data centers. Bhopte et al. [66] developed an optimized design for a data center using a multivariable analysis of the plenum depth, cold aisle location, and height of the room. The inlet temperatures are reduced and the hot spots become less prominent, as shown in Fig. 6. Their extensive numerical investigation of various data center layouts has resulted in generic guidelines for energy efficient data centers [67–70]. Nagarathinam et al. [71] compared a parametric method and a multivariable response-surface method for data center optimization. They use tile open area, raised-floor height, ceiling height, and location of the CRACs as the optimization variables. They conclude that the multivariable method is computationally more efficient and that the resulting optimized design exhibits better thermal performance than that of the parametric optimization method.

Room level simulations are also used to compare a nonraised floor system with an overhead system by Sorell et al. [72] and Iyengar et al. [73]. The authors have found that the underfloor system exhibits less recirculation, while the overhead cooling units are more energy efficient. They recommend performing a CFD analysis in order to decide what cooling scheme to use, as conclusions of the studies may vary by data center.

An accurate prediction of flow rate is important for obtaining physics-based representations of data centers. A fixed flow rate approximation is widely used when modeling data center cooling units and IT equipment. This approximation is based on the assumption that the pressure drop inside these devices is much greater than in the room, which means that the flow rate from these units does not change significantly. Recent studies attempt to model variable flow devices, especially as data centers adopt air flow solutions that are sensitive to pressure drops. Patankar [55] models various factors that affect the airflow below and above the floor, one such factor being the flow through the perforated tiles. The author also considers the pressure drop that the CRAC unit must overcome while providing cold air to the room. This includes the internal static pressure in the 250–500 Pa range, the external static pressure in the 100–200 Pa range, and the pressure drop across the perforated tiles, which is about 12 Pa. Samadiani et al. [54] studied the effect of the computer room air conditioner/handler (CRAC/H) on the tile flow distribution. The results show that a slight change of the operating point on the CRAC/H blower characterization curve leads to significant changes in the flow rate. Demetriou and Khalifa [74] studied the effect of buoyancy on the temperature patterns in data centers. Their analysis shows that buoyancy influences the recirculation of exhaust air to the inlet of the racks. In recent research, Alkharabsheh et al. [75–77] investigated the sources of pressure drops in CFD simulations and modeled the fan curves of the CRAC/H and IT equipment. The authors use manufacturer data and experimental measurements to calibrate the internal pressure drop of the CRAC/H and IT equipment. By observing changes in the CRAC and IT equipment fan speeds and in the floor tile open area ratio, the authors determine that fan curves help to simulate more practical data center scenarios and provide an accurate representation of containment systems. Alkharabsheh et al. [78] validated the room level simulations using airflow measurements from a real data center; the room level model achieved a deviation of 5% from the airflow measurements.
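
When fan curves are included, the operating flow of a cooling unit or server is set by the intersection of its fan curve with the system resistance curve it faces; the sketch below finds that intersection by bisection, using made-up quadratic curves rather than manufacturer data:

```python
def fan_pressure(q):
    """Illustrative fan curve: static pressure (Pa) vs flow (m^3/s)."""
    return 450.0 - 1200.0 * q**2

def system_pressure(q):
    """Illustrative system resistance (plenum + tiles + rack path): dP = R*Q^2."""
    return 900.0 * q**2

def operating_point(q_lo=0.0, q_hi=1.0, tol=1e-6):
    """Bisect for the flow at which fan pressure equals the system pressure drop."""
    while q_hi - q_lo > tol:
        q_mid = 0.5 * (q_lo + q_hi)
        if fan_pressure(q_mid) > system_pressure(q_mid):
            q_lo = q_mid      # fan still delivers excess pressure: move to higher flow
        else:
            q_hi = q_mid
    return 0.5 * (q_lo + q_hi)

q_op = operating_point()
print(q_op, fan_pressure(q_op))   # ~0.46 m^3/s at ~190 Pa for these curves
```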

Dynamic Models.

Dynamic modeling is the focus of more recent research. Dynamic control, a viable energy saving solution, has been a motivating factor for many of the researchers investigating dynamic modeling. Data centers are dynamic environments with continuous variations in computing demand during their operation, and energy savings can be achieved by a dynamic control system that couples the cooling systems and computing systems [79]. Kummert et al. [80] studied the impact of perturbations, such as chiller failure, on the water and air temperatures in a data center system in the UK. They simplified their model by splitting the data center room into slices that are each served by an individual room air conditioning unit. The authors conclude that variations in chiller operation or an increase in thermal inertia, due to the addition of buffer vessels, can maintain air temperatures near normal operating levels. Gondipalli et al. [81] addressed the importance of transient modeling in a case study that varies the server power following a triangular profile. They find that steady-state analysis is not sufficient for the reliable design of data centers due to transient fluctuations in inlet temperatures, which could exceed the allowable limit. Beitelmal and Patel [82] performed transient simulations to study the impact of CRAC failure on temperature variations within a data center. The study highlights the mal-provisioning of CRAC units: when failure occurs, the inlet temperatures reach unacceptable levels within 80 s. The study does not account for the thermal mass of rack units or CRAC units, which Ibrahim et al. [56] showed to be crucial. Sharma et al. [83] focused on dynamically managing data centers and discussed ways of allocating workloads for more uniform temperature distributions within them. Patel et al. [84] proposed the idea of smart cooling in data centers. The authors consider fully controlling a data center environment through attributes such as distributed sensing, variable air conditioning, data aggregation, and a control system that links all of them together. The authors also investigate what they call “smart tiles,” which are variable opening plenum tiles that provide flexibility when controlling the cooling distribution. Their work finds that energy savings can be obtained by directing cooling resources where and when needed. Bash et al. [79] adopted the idea of smart cooling to dynamically control the thermal environment of a data center located at Hewlett-Packard Laboratories in Palo Alto, CA. The data center itself is designed to provide a production IT environment while being used as a test lab for research purposes. A distributed sensor network is placed in it to manipulate the CRAC supply, which is regulated using a proportional–integral–derivative controller. The experimental results show the variation of CRAC fan speed and CRAC supply temperature with time and are used to compare conventional control methods with the proposed ones; a promising 58% reduction in energy consumption is shown in one of their test cases. Khankari [85,86] developed a heat transfer model to estimate the rate of temperature increase in a data center and the time required to reach the maximum allowable temperature in the event of an accidental power shutdown. Using the same model, the author conducted parametric studies based on room geometry and IT equipment in order to recommend ways of reducing the heating rate during power failure events.

A study conducted by Sundaralingam et al. [87] uses a thermodynamic modeling approach. In their work, the authors find a deviation between the thermodynamic model results and the experimental results during a chiller failure, which they attribute to an inaccurate estimation of the data center heat capacity (HC). The authors predict the total data center HC by matching the thermodynamic model with experimental data. The focus of other recent studies [88–90] has been the characterization of server HC. These studies use server level measurements of air temperatures to estimate the server HC without considering the temperatures of internal components. The results are intended to be used for developing server level compact models that can be incorporated in room level models, such that, given the inlet boundary conditions of the IT equipment, the outlet air temperature can be predicted (black box models).

In another study, Alkharabsheh et al. [57,91] presented a dynamic model that simulates the HC of the IT equipment and a CRAH unit. The authors used a lumped mass model to find that the IT equipment and CRAC HC have a significant effect on the transient response of a data center. They used experimental data to calibrate the server HC and then embedded the server model in a full room model, as shown in Figs. 7(a) and 7(b). They concluded that the server HC has a significant impact on the transient response and must be included in transient simulations for an accurate estimation of the thermal time constant.
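
The flavor of such a lumped mass (heat capacity) model can be sketched as a single energy balance on the server air stream, integrated explicitly in time; the power, flow rate, and heat capacity values below are illustrative and not taken from the cited calibration:

```python
def simulate_server_outlet(t_in, power_w, m_dot, heat_cap,
                           t0=25.0, cp_air=1006.0, dt=1.0, duration=600.0):
    """Lumped-capacitance sketch of a server's transient outlet temperature.

    C * dT/dt = P + m_dot*cp*(T_in - T), with the exhaust assumed well mixed
    at the lumped temperature T. All parameter values here are illustrative.
    """
    temps, t = [t0], t0
    steps = int(duration / dt)
    for _ in range(steps):
        dT = (power_w + m_dot * cp_air * (t_in - t)) / heat_cap
        t += dT * dt
        temps.append(t)
    return temps

# Example: 300 W server, 0.02 kg/s of 22 C inlet air, 8 kJ/K thermal mass
history = simulate_server_outlet(t_in=22.0, power_w=300.0,
                                 m_dot=0.02, heat_cap=8000.0)
print(history[-1])  # approaches the steady value 22 + 300/(0.02*1006) ~ 36.9 C
```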

Other research studies examine the impact of cold aisle containments (CACs) during CRAH failure situations. Shrivastava et al. [92] observed an increase in the available time before the rack inlet reaches its critical temperature. This conclusion is also reported by Alkharabsheh et al. [77] using numerical simulations.

In summary, CFD modeling of data centers is more common than other modeling approaches. The comprehensive thermal field it provides is beneficial to data center researchers, as opposed to the limited data that other methods provide. The accuracy and feasibility of detailed modeling in a multiscale data center system are important challenges. The research started with predicting the airflow from the tiles; as researchers became more knowledgeable about the complex thermal field of data centers, the work shifted toward developing semi-empirical models for tiles, racks, and cooling units to produce room level simulations. The room level models are mainly used to run parametric studies for data center optimization. Another milestone in data center CFD modeling is the development of dynamic models. Dynamic models derive their importance from the demanding practical scenarios that occur in real-life data centers, such as cooling system failures. Additionally, they are an important contributor to developing a holistic monitoring and control system, which is seen as key to the energy efficiency of future data centers. A summary of this section is shown in Table 1.

For the purpose of experimentation, building data centers can be very expensive, and it is also challenging to conduct experiments in an operational data center because it is mission critical for most users. Researchers can, however, conduct thermal profiling experiments by taking measurements in data centers without affecting their system parameters. The measured data can be used to draw various conclusions about the data center thermal field. Schmidt [93] used measurements to show the effect of underfloor blockages on reducing the CRAC flow rate, the effect of leakage on reducing the available cooling airflow rate, and the increase in inlet temperature compared to the CRAC supply temperature. Schmidt and Iyengar [94] also used thermal profiling to compare real data centers with different layouts in order to optimize their design. Karlsson and Moshfegh [95] used thermal cameras to study thermal fields in data centers; they observed that the upper portion of the rack exhibits a higher temperature than the lower portion. Boucher et al. [96] studied the effect of varying the CRAC flow rate on the inlet temperature, with the objective of building a control scheme that reduces the energy consumed by the CRAC. Beitelmal et al. [97] also used experimental data to introduce adaptive vent tiles for control purposes. Similarly, Chen et al. [98,99] have collected extensive experimental data to develop predictive statistical models for implementation in control schemes.

Due to the domain size and the multiscale nature of room level data center simulations, it is impossible to model every element in detail. Instead, simplified semi-empirical models are used and verified against experimental data in order to reach a level of modeling that is both accurate and computationally feasible. Data center experimental measurements can be divided based on the size of the experiment, from the device level to the room level; they can also be divided into hydraulic and thermal testing. Most of the data center experiments available in the literature are at the device level. Abdelmaksoud et al. [38,100] and Arghode et al. [39,101] focused on experiments on tiles and racks, Karki et al. [35] focused on plenum and tile level experiments, and Bhopte et al. [102] focused on the effect of blockages in the plenum. Other studies examine thermal server level experiments, as in the work of Ibrahim et al. [88], Erden et al. [89,90], and VanGilder et al. [103]. For example, Abdelmaksoud et al. [38] built an experimental setup to measure the jet velocity at different elevations, as shown in Fig. 8(a). They test symmetric and asymmetric perforated tiles at two different open area ratios, as shown in Fig. 8(b). A single hot-wire anemometer probe is used to measure the velocity with an estimated error of 3%. The results for tile B show that the small jets issuing from the perforated holes have merged into a common jet. The velocity at the center of the tile is found to be 2.2 m/s, significantly higher than the 1.25 m/s expected on the basis of a fully open tile. The results help in developing the body-force model for accurate prediction of the downstream velocity of a tile using CFD simulations. The complexity of the model required for a tile is found to depend on the tile's perforation ratio.

The earliest study comparing room level simulations and experiments was conducted by Shrivastava et al. [42]. Their CFD simulation results show good agreement in low and modest power locations and a discrepancy in high power cold aisles. Very few controlled room level experiments have been conducted in data centers. Iyengar et al. [104] used experimental measurements in a small data center cell to verify their CFD simulations; the maximum discrepancy is found at the rack exhaust. The results of this experiment prompted further modeling to investigate the cause of the temperature increase [44]. Fakhim et al. [105] compared simulation results with experimental measurements and, using the validated model, conducted parametric studies to enhance the thermal conditions by changing the CRAC layout, cold aisle containment, and ceiling return systems. More recently, room level experiments were conducted by Arghode et al. [39], followed by room level simulations for a contained cold aisle data center [106].

While experimental measurements are useful because they help calibrate and verify device to room level models, they also provide the data that generate guidelines for operating energy efficient data centers. Measurements in the field of liquid cooling have been abundant for around half a century. One of the earliest demonstrations of indirect liquid cooling at the module level is found in the IBM 3081 mainframe computer from 1982, which incorporates water-cooled thermal conduction modules (TCMs) [107,108]. A cold plate is reported to cool hundreds of chips mounted on a glass ceramic substrate dissipating 2000 W in total [109,110]. While the aforementioned publications are the first to introduce this novel cooling solution into the industry, they also provide valuable information on design and testing, thermal protection schemes, and performance analysis and variation over its lifetime. In recent years, there have been more innovative designs for indirect cooling with single-phase flow. Lei et al. [111] demonstrated that multilayer copper minichannel heat sinks cooled with distilled water can potentially reduce the thermal resistance and pressure drop for a given overall volumetric flow rate. It is also observed that when the number of copper layers increases beyond five, these performance improvements diminish rapidly. In another study, Dede [112] showed that introducing hierarchical branching channels in a manifolded microchannel cold plate enables power densities on the order of 200 W/cm² to be cooled. The design details of the cold plate are shown in Fig. 9(a), and Fig. 9(b) shows a schematic of the experimental setup for thermal-fluid performance measurements. A 50/50 ethylene–glycol/water coolant is used in this experiment. The presented cold plate design shows good thermal characteristics, but advanced diffusion bonding techniques are needed, which can be challenging for high volume production.

Over time, the focus of research has shifted more toward facility level design, given the need to reduce overall data center power consumption through energy efficient cooling. Of particular interest is the option to reduce or completely eliminate mechanical cooling by conditioning the facility liquid primarily through water-side economization (either adiabatic or dry/sensible). This “chiller-less” data center design is exemplified by a 15 kW hybrid-cooled rack sustained by facility liquid whose temperature is modulated by an external dry cooler arrangement [113–117]. Cooling comprises 3.5% of IT power consumption (or a partial PUE of 1.035) during a 22 hr experimental run. For a data center capacity of 1 MW, annual savings on the order of $90,000–$240,000 (assuming an energy cost of 4–11 cents/kWh) are projected relative to a conventional refrigeration-based system. In the field of emerging technologies, Eiland et al. [118] demonstrated the efficiency of oil immersion cooling for an open compute web server. The authors reported that supplying light mineral oil at low flow rates and high inlet temperatures minimizes the cooling (pumping) power as a fraction of IT power consumption. This occurs because the coolant's viscosity decreases significantly over the operating range, allowing the pumping power to drop by 40% between 30 °C and 50 °C. Similarly, Tuma [119] discussed the benefits of passive cooling, i.e., the immersion of servers in a two-phase dielectric fluid, from an economic and environmental standpoint. This method requires a redesign of the data center layout and its construction, which may not be ideal for most data center owners. To summarize, different emerging technologies have seen rapid advancements since the inception of liquid-based cooling, which has led to an overall redesign of traditional data center architecture. Consequently, these solutions have provided the means for effective thermal management of high power devices, and notably do so while being extremely energy efficient. A summary of the experimental measurements in the data center literature is shown in Table 2.
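
The projected savings can be reproduced with simple arithmetic once a cooling overhead is assumed for the conventional baseline; in the sketch below, the 3.5% chiller-less cooling fraction comes from the cited study, while the roughly 29% conventional cooling fraction is an assumed value chosen only to illustrate how the $90,000–$240,000 range arises:

```python
HOURS_PER_YEAR = 8760.0

def annual_cooling_savings(it_load_kw, conventional_cooling_fraction,
                           chillerless_cooling_fraction, energy_cost_per_kwh):
    """Rough annual savings from replacing chiller-based cooling with a
    warm-water, dry-cooler loop. Cooling fractions are cooling power divided
    by IT power; the chiller-less figure of 0.035 is from the cited study,
    while the conventional fraction passed in below is an assumption.
    """
    delta_kw = it_load_kw * (conventional_cooling_fraction
                             - chillerless_cooling_fraction)
    return delta_kw * HOURS_PER_YEAR * energy_cost_per_kwh

# 1 MW IT load, assumed ~29% cooling overhead for the conventional baseline
for rate in (0.04, 0.11):
    print(rate, round(annual_cooling_savings(1000.0, 0.29, 0.035, rate)))
```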

Containment Systems.

In the last few years, new strategies have been adopted to diminish airflow and inlet temperature nonuniformities in data centers [120–124]. One of the most promising solutions is enclosing the cold or hot aisle in what is known as a CAC or HAC system, respectively, as shown in Fig. 10. Enclosing the cold aisle isolates the cold and hot air streams, which allows the rest of the room to become a large hot air return. Gondipalli et al. [125] conducted CFD simulations revealing that the CRAC units cannot provide the airflow rate required by the IT equipment; they optimize the design by placing an opening in the roof of the containment, which provides the necessary additional air from the room. The fixed flow boundary conditions at the CRAC limit the number of practical scenarios that can be studied with their model. Emerson Network Power [126] proposed a CAC coupled with a control system to maintain the optimum pressure inside the cold aisle and achieve energy efficiency at different operational conditions. Pervila and Kangasharju [127] conducted experiments to understand the effect of a CAC on airflow, electricity consumption, operating temperatures, and cooling requirements; the authors concluded that using a “low-cost” CAC reduces the cooling requirements by 20%. Schmidt et al. [128] used CFD simulations to estimate the savings at the CRAC unit if a containment system is used, concluding that 59% of the energy consumed in the cooling unit can be saved. Similarly, Shrivastava et al. [129] used CFD simulations to quantify the savings from implementing cold aisle containment, hot aisle containment, and a chimney cabinet, using the CRAH set point temperature as the criterion to compare the energy consumption of each system. Compared to a conventional cooling system, the CAC is estimated to save 25% of the annual cooling cost; this value can be increased to approximately 40% by using a vertical exhaust duct or the CAC on all rows of racks. Demetriou and Khalifa [130] used a simplified thermodynamic model to optimize an air-cooled data center in their work. Xu et al. [131] related outdoor temperatures to the energy savings achieved by a containment system; their results show that if the outdoor temperature is higher than the inlet temperature and less than the IT equipment exhaust temperature, a CAC saves more energy. Sundaralingam et al. [132] conducted an experimental study to characterize CACs in a research data center with an area of 52 m² (600 sq ft). The research data center layout is shown in Fig. 11(a). They characterize three containment systems: fully contained aisle, doors only, and ceiling only, as shown in Fig. 11(b). All containment configurations are tested under overprovisioned and underprovisioned operating conditions; overprovisioning the cold aisle means that the CRAH provides a higher airflow rate than the IT equipment requires. The inlet temperatures are measured at the rack inlets, as shown in Fig. 11(c) for the overprovisioned case. The two temperature contours for each containment configuration in Fig. 11(c) represent the inlets of the two rows of racks (racks 1–7 and racks 8–14), as shown by the viewing direction in Fig. 11(a). They conclude with recommended guidelines, such as overprovisioning cold aisle containments when possible and using a ceiling only containment system rather than doors only if full containment is not an available option.
The experimental data are also used to validate numerical simulations in another study by Arghode et al. [106]. In this study, they used a modified body-force model for the tiles to accurately capture the temperature field at the rack inlet. They also studied a partially contained system with only its top panels installed, the results of which show an improvement over a completely open cold aisle. Muralidharan et al. [133] conducted experiments to compare a CAC with an open aisle system. They vary the CRAC set point temperature and fan speed while reporting the rack inlet temperature as the criterion. Their work finds that energy savings can be obtained by operating at a high CRAC set point temperature and low CRAC fan speed while maintaining the inlet temperature within ASHRAE guidelines. Additionally, Shrivastava and Ibrahim [92] conducted an experiment that shows a newly discovered benefit of a CAC in the event of a cooling system failure: the ride-through time increases in a CRAH failure situation. This conclusion agrees with a numerical study by Alkharabsheh et al. [77], which shows that the IT equipment fans can enable a recirculation of the flow through the plenum and the failed CRAH. The same study [77] also used numerical simulations to further demonstrate the benefits of partially contained cold aisle systems (doors only and ceiling only) in failure situations.

The effect of IT equipment leakage is studied by Kennedy [134]. This research reveals that servers leak about 23–135% of their design flow rate. The leakage is noticeable when the server is not operational, as cold air can still escape through the server enclosure due to the pressure difference. Leakage through the containment surfaces is observed by Arghode et al. [39], for which overprovisioning of cold air is suggested as a solution.

Alkharabsheh et al. [78,135] have developed an experimentally validated model of a CAC. The authors use airflow rate measurements to calibrate an uncontained data center for pressure drops, and then use airflow rate and temperature measurements to calibrate a CAC. They find that modeling CACs requires careful calibration of the pressure drops in the cooling units and servers. Additionally, they show that a detailed simulation of the containment surfaces and rack structure is important for an accurate simulation of a CAC and for exploring the impact of leakage on the inlet temperature, as shown in Figs. 12(a) and 12(b). They achieved a 0.99 °C average temperature difference between the simulations and the measurements, as shown in Fig. 12(c); the results in Fig. 12(c) are at an elevation of 2.42 m from the concrete slab. Their results also show that overprovisioning does not prevent leakage: a 10 °C temperature increase is reported at higher rack elevations for 10% overprovisioning, as shown in Fig. 12(d). Alkharabsheh et al. [136] then developed a simplified model of a containment system that accounts for leakage, with the aim of studying its effect on the inlet temperature. They find that the pressure inside the contained cold aisle becomes equal to the pressure in the room beyond a 15% leakage area ratio. They also find that the inlet temperature increases as the leakage area ratio increases until it reaches the temperature of an uncontained cold aisle system.

Economizer Cooling.

Air-side economizers and direct/indirect evaporative cooling are currently used in many data centers. Despite how common these technologies are, papers describing CFD models of their application are very limited. Among the available studies is one by Gebrehiwot et al. [137], in which an IT pod and an indirect/direct evaporative cooling unit are modeled, as shown in Fig. 13. The evaporative cooling units are modeled compactly, accounting for the pressure drop across, and the saturation effectiveness of, both the direct and indirect evaporative cooling stages. The results show good agreement between the CFD model and analytical calculations. In addition, the authors identify a face velocity variation across the air filters, with some regions having air speeds as high as 700 fpm, much higher than the 500 fpm for which the filters are designed. The paper discusses various airflow distribution improvement ideas, which are expected to extend the life and improve the performance of the filters.
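
The compact treatment of the evaporative stages rests on a saturation effectiveness, which relates the outlet dry-bulb temperature to the inlet dry- and wet-bulb temperatures; the sketch below chains an indirect and a direct stage, with the effectiveness values, ambient conditions, and post-indirect wet-bulb temperature all assumed for illustration:

```python
def direct_evap_outlet(t_db_in, t_wb_in, effectiveness):
    """Dry-bulb temperature leaving a direct evaporative stage.

    Saturation effectiveness = (T_db_in - T_db_out) / (T_db_in - T_wb_in).
    """
    return t_db_in - effectiveness * (t_db_in - t_wb_in)

def indirect_evap_outlet(t_db_in, t_wb_secondary, effectiveness):
    """Dry-bulb temperature leaving an indirect stage; the process air is
    cooled sensibly toward the wet-bulb of the secondary (scavenger) stream."""
    return t_db_in - effectiveness * (t_db_in - t_wb_secondary)

# Illustrative hot/dry ambient: 35 C dry-bulb, 20 C wet-bulb; the 18 C
# wet-bulb entering the direct stage is an assumed post-indirect value.
t_after_idec = indirect_evap_outlet(35.0, 20.0, effectiveness=0.65)
t_supply = direct_evap_outlet(t_after_idec, 18.0, effectiveness=0.90)
print(t_after_idec, t_supply)
```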

A detailed numerical model of a direct evaporative cooling unit in the form of a fogging system is studied by Vasani and Agonafer [138]. Various air inlet conditions, nozzle orientations, and nozzle counts are examined using the CFD tool ansys fluent. In their work, the authors vary the fogging system's nozzle operating pressure, inlet velocity, and nozzle orientation to control and improve the evaporation efficiency and the uniformity of the outlet air conditions.

Most data center level CFD models study what happens inside the data center. Only a limited number of published papers model the effect of a data center's surrounding environment and the location of its air inlet and outlet on the cooling and reliability of the IT equipment. CFD modeling of a data center's surroundings becomes especially important when air-side economizer systems are used for cooling. Entrainment of contaminants, which can be carried by the wind from nearby cooling towers, diesel generators, transportation corridors, industrial facilities, etc., is modeled using CFD tools by Seger and Solberg [139].

Hybrid-Cooled Systems.

Advancements in the semiconductor industry and in server computing load allocation have increased the power footprint of racks in modern data centers and made it challenging for air-cooling systems to maintain IT equipment within permissible working temperatures. In light of this, research shows that the thermal properties of liquids make liquid-cooling systems a potential alternative to those that use air [140]. While the idea of cooling electronic systems using liquids is not novel [140,141], potential leaks and evaporation problems in liquid loops have historically made it risky to locate them close to the electronic equipment, which has greatly restricted their application in real data centers.

Hybrid-cooling systems have been introduced to take advantage of the attractive cooling capabilities of liquids while mitigating concerns about liquid loops running in close proximity to electronic equipment. This is achieved by placing air-to-liquid heat exchangers in the vicinity of the server racks, so that only air is in direct contact with the electronic equipment [142]. These systems are designed to assist the conventional air-cooling system using CRACs [143]. There are several types of liquid-cooled racks, such as closed-liquid racks, overhead coolers, in-row coolers, and rear door heat exchangers. The closed-liquid rack is designed to be thermally isolated from the room and to have no impact on the airflow within the room. The overhead, in-row, and rear door coolers all exchange heat and airflow with the room, but in these types heat is removed near the heat load, which reduces the stress on the room level air-cooling systems [120].

More specifically, the rear door cooling solution has received extensive attention in the literature [47,141,144–148]. In terms of setup, a rear door heat exchanger is attached to the back of a rack. The hot air passes through the heat exchanger and is cooled before it recirculates into the cold aisle [47], as shown in Fig. 14. Schmidt et al. [47] described the design of the rear door heat exchanger and performed a room level analysis to investigate its thermal impact on a data center. The results show that the rear door heat exchanger reduces the impact of hot air recirculation by decreasing the rack outlet temperature. Additionally, adopting the rear door heat exchanger leads to a significant improvement in the total cost of ownership, for both the first cost and the annual operating cost of the data center. Looking at this further, Mulay et al. [144] performed extensive parametric studies to demonstrate the effect of the IBM rear door heat exchanger. The authors find that utilizing the rear door heat exchanger reduces the gradient of the rack inlet temperature along the rack height and reduces the cooling demand on the CRAC units. A new rear door heat exchanger design using R410 refrigerant instead of water is introduced by Tsukamoto et al. [146]. In a data center level case study with experimental measurements, in which rear door heat exchangers were installed on four of the 21 racks, the authors observed a 13% reduction in cooling energy relative to a typical air-cooled facility. Schmidt and Iyengar [145] used steady-state room level simulations to study the effect of different cooling system failure modes on the inlet temperature, with the goal of keeping inlet temperatures within ASHRAE requirements. They find that rear door heat exchangers can keep the data center temperature within the allowable limits when the CRACs have failed.
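At a high level, a rear door heat exchanger can be represented with a heat exchanger effectiveness, as in the minimal Python sketch below; this is a generic effectiveness relation rather than the design model of the cited studies, and the operating conditions are assumed for illustration.

def rear_door_outlet(t_air_in, t_coolant_in, effectiveness):
    """Air temperature leaving the rear door, with the heat exchanger
    effectiveness defined against the coolant inlet temperature."""
    return t_air_in - effectiveness * (t_air_in - t_coolant_in)


def heat_removed(m_dot_air, t_air_in, t_air_out, cp_air=1005.0):
    """Heat rejected to the rear door coolant loop [W]."""
    return m_dot_air * cp_air * (t_air_in - t_air_out)


if __name__ == "__main__":
    # Illustrative conditions: 2 kg/s of 40 C rack exhaust, 18 C coolant.
    t_out = rear_door_outlet(t_air_in=40.0, t_coolant_in=18.0,
                             effectiveness=0.7)
    q = heat_removed(m_dot_air=2.0, t_air_in=40.0, t_air_out=t_out)
    print(f"air returned to the room at {t_out:.1f} C, {q / 1000:.1f} kW removed")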

Device Level Liquid-Cooling Solutions.

Liquid cooling has re-emerged as a viable method of thermal management for high-density interconnect devices. At the device level, CFD is an indispensable tool for the effective design and evaluation of solutions such as cold plates and heat exchangers. Unlike air-cooling simulations, where the multiple levels and length scales of the problem often require model simplification, CFD analyses of liquid cooling can be conducted in a fairly detailed fashion while sustaining high accuracy even at the module level. Fernandes et al. [149] performed a multidesign-variable optimization, using commercially available tools, of a cold plate operating at a fixed pumping power, as shown in Fig. 15. Goth et al. [150] showed that CFD also permits a performance evaluation of cold plates when assembled with a given module by predicting chip temperature contours. Brunschwiler et al. [151] presented a novel cross-flow cold plate and deployed a hybrid model that characterizes the solution using a commercial CFD tool. In their work, they determined the flow impedance and an effective heat transfer coefficient for a varying number of mesh layers (the copper sheets that form the cold plate).
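The coupling between pumping power, achievable flow rate, and thermal resistance that underlies such cold plate optimizations can be sketched with a simple lumped model, shown below in Python. The loss coefficients and conductances are invented values for demonstration and do not correspond to the designs in Refs. [149–151].

RHO, CP = 998.0, 4180.0   # water properties [kg/m^3], [J/(kg K)]


def thermal_resistance(pump_power, k_loss, hA):
    """Cold plate thermal resistance [K/W]: a convective term 1/(h*A) plus a
    caloric term 1/(m_dot*cp), with the flow set by the fixed pumping power
    P = dP * Q and an assumed quadratic loss curve dP = K * Q**2."""
    q_flow = (pump_power / k_loss) ** (1.0 / 3.0)   # achievable flow [m^3/s]
    m_dot = RHO * q_flow
    return 1.0 / hA + 1.0 / (m_dot * CP)


if __name__ == "__main__":
    # Two hypothetical designs at 1 W pumping power: coarse channels
    # (low loss, low hA) versus fine channels (high loss, high hA).
    for name, k_loss, hA in (("coarse", 5.0e9, 800.0),
                             ("fine", 5.0e10, 2500.0)):
        r = thermal_resistance(pump_power=1.0, k_loss=k_loss, hA=hA)
        print(f"{name:6s} channels: R_th = {r * 1000:.2f} mK/W")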

In general, the re-emergence of liquid cooling, the advancement of CFD tools, and the continued increase in available processing power have helped promote more detailed computational analyses and novel designs of module-level liquid-cooling solutions. A summary of the recent technologies in data centers is shown in Table 3.

Optimizing performance and energy consumption in data centers requires a holistic integration of workload prediction, allocation, and thermal management using smart control systems. This is best accomplished by developing a single holistic expert system that is capable of sensing vital data within the data center and self-optimizing its performance in real time. To be successful, the expert system must be capable of learning and adjusting to workload variations, environmental changes, or even changes in hardware, such as the IT or critical infrastructure. However, this is a challenging undertaking because of the inherently complex multivariate design issues that arise from coupling workload allocation, thermal management, and control systems. There are also multiple scaling issues in data centers. These issues exist because individual data centers vary in size and complexity, ranging from relatively small rooms that serve a single business to massive multiacre, megawatt-scale facilities that provide IT on demand as a utility. Optimizing energy consumption in a data center is therefore highly dependent on the size and nature of the data center itself.
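As a purely conceptual illustration of the kind of closed loop such an expert system might run, the toy Python sketch below adjusts a cooling supply setpoint so that the hottest sensed rack inlet tracks a target; the surrogate plant, gains, target temperature, and workload trajectory are all invented for demonstration and do not describe any system in the cited literature.

import random


def plant_inlet_temps(supply_setpoint, workload, n_racks=6):
    """Toy surrogate for the room: a hotter supply and higher IT workload
    raise rack inlet temperatures, with some rack-to-rack spread."""
    return [supply_setpoint + 4.0 * workload + random.uniform(0.0, 3.0)
            for _ in range(n_racks)]


def control_step(setpoint, hottest_inlet, target=27.0, gain=0.5):
    """Proportional adjustment of the supply setpoint toward the target
    (27 C is used here only as an example inlet temperature limit)."""
    return setpoint - gain * (hottest_inlet - target)


if __name__ == "__main__":
    random.seed(0)
    supply = 20.0
    for step in range(5):
        workload = 0.5 + 0.1 * step          # assumed rising IT utilization
        hottest = max(plant_inlet_temps(supply, workload))
        supply = control_step(supply, hottest)
        print(f"step {step}: workload {workload:.1f}, hottest inlet "
              f"{hottest:.1f} C, new supply setpoint {supply:.1f} C")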

Scaling Issues.

Among the many issues that can exist in data centers are those of scaling. Both temporal and spatial scales have a direct impact on thermal management design. Spatial issues are particularly important since the time constant of the cooling system depends on the distance between the coolant's point of entry into the room and the location of the equipment that needs to be cooled. That distance can vary from a few meters to tens of meters. For example, in an air-cooled data center, additional cold air directed at a cold aisle takes additional time to reach that aisle if it is located far from the cooling unit. On the other hand, the workload assigned to that cold aisle starts heating up the servers almost instantly. Similar considerations arise with water cooling. It is therefore important to fully understand the different time scales throughout the data center. Other scaling issues have to do with the specific design of the cooling system in a data center. For an air-cooled data center, if the cold aisle is contained, then its behavior and performance will be significantly different from those of a noncontained cold aisle. In a contained cold aisle, the different IT units in that aisle compete for the air delivered through the floor tiles. At any given time, the air supply is limited, which means that if some IT units heat up and demand more air, they can starve adjacent units of the air they need. This is particularly true if the IT units are not identical, or if some of them have larger air moving devices, for example, when a blade server unit is placed adjacent to a regular server or a switch box.
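The mismatch between air transport delay and server heating can be quantified with a rough estimate, as in the Python sketch below; the distances, velocities, temperature rises, and time constants are assumed values chosen only to illustrate the point.

from math import exp


def transport_delay(distance_m, velocity_m_s):
    """Time [s] for supply air to travel from the cooling unit to an aisle."""
    return distance_m / velocity_m_s


def server_temp_rise(t_seconds, delta_t_final, tau_seconds):
    """First-order rise of server outlet temperature after a workload step."""
    return delta_t_final * (1.0 - exp(-t_seconds / tau_seconds))


if __name__ == "__main__":
    delay = transport_delay(distance_m=30.0, velocity_m_s=2.0)
    # How far has the server temperature risen by the time extra air arrives?
    rise = server_temp_rise(t_seconds=delay, delta_t_final=10.0,
                            tau_seconds=20.0)
    print(f"air transport delay: {delay:.0f} s")
    print(f"server outlet rise before relief arrives: {rise:.1f} C of 10.0 C")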

With these considerations in mind, it is imperative to develop models capable of accurately predicting the thermal performance of data centers, and even more so to run those models in real time alongside the workload allocation algorithms. Potentially, the models can then be used to train neural network models that can be run in real time. These neural network models can also be trained using streaming data from sensors placed strategically at critical points throughout the data center. The resulting models and data can continue to improve the accuracy of the modeling approach and, in turn, help operate the data center efficiently.
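A minimal stand-in for such a data-driven surrogate is sketched below in Python, fitting a linear least-squares model to synthetic samples that map operating conditions to the hottest rack inlet temperature; a practical system would instead train a richer model (e.g., a neural network) on CFD results or streamed sensor data, and every number here is synthetic.

import numpy as np

rng = np.random.default_rng(1)

# Synthetic "sensor" samples: [IT load fraction, supply temp C, tile flow m^3/s]
X = rng.uniform([0.2, 16.0, 3.0], [1.0, 24.0, 8.0], size=(200, 3))
true_w = np.array([8.0, 1.0, -1.5])          # assumed sensitivities
y = 10.0 + X @ true_w + rng.normal(0.0, 0.3, size=200)   # hottest inlet [C]

# Fit the surrogate (bias term appended as a column of ones).
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Query the surrogate in "real time" for a candidate operating point.
candidate = np.array([0.8, 20.0, 5.0, 1.0])
print(f"predicted hottest rack inlet: {candidate @ coef:.1f} C")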

The heat generated by electronic equipment in data centers has consistently increased due to developments in the semiconductor industry and miniaturization. On top of this, data centers are continuously growing, compelled by enormous developments in revolutionary technologies (e-commerce, Big Data, and cloud computing) and other growing online services. The sustainable and reliable operation of data centers is addressed through the application of recent cooling technologies. A comprehensive summary of recent research efforts in data center thermal management is presented in this paper. Numerical modeling with an emphasis on CFD, experimental measurements, and recent cooling technologies (containment systems, economizer cooling, hybrid cooling, and device level liquid cooling) are extensively reviewed. In general, the reported research focuses on reducing the rack inlet temperature and the energy consumed by the cooling system. The research is identified based on the time of development and the motivation for each milestone. All of the reported technologies are still in the developmental stages, and many researchers are still making progress. There are many challenges facing thermal management in data centers, such as workload variation, environmental changes, and scaling issues (data centers vary in size, complexity, and business objective). Therefore, for future research to successfully optimize performance and energy consumption in data centers, it must provide a holistic integration of workload prediction, allocation, and thermal management using smart control systems.

References

U.S. Environmental Protection Agency (EPA), 2007, “ Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-43,” U.S. Environmental Protection Agency, Washington, DC.
Columbus, L., 2012, “ Predicting Enterprise Cloud Computing Growth,” Forbes, accessed January 31, 2014, http://www.forbes.com/sites/louiscolumbus/2013/09/04/predicting-enterprise-cloud-computing-growth/
Koomey, J. , 2011, Growth in Data Center Electricity Use 2005 to 2010, Analytics Press, Oakland, CA.
Venkatraman, A. , 2013, “ Global Census Shows Datacenter Power Demand Grew 63% in 2012,” DatacenterDynamics (DCD) Intelligence, London.
Abramovitz, B., 2013, “ Industry Research Monitor Data Centers,” General Electric Capital, Norwalk, CT.
McNevin, A., 2014, “ 15% Growth Forecast for North America Colocation Market 2014,” DatacenterDynamics, London.
Stansberry, M. , and Kudritzki, J. , 2013, “ Uptime Institute 2012 Data Center Industry Survey,” Uptime Institute, New York.
Wikipedia, 2014, “Mission Critical,” accessed January 31, 2014, http://en.wikipedia.org/wiki/Mission_critical
“Mission Critical Facilities Management Principals of Design, Operations, and Maintenance,” 2012, Last accessed January 31, 2014, http://www.construction.org/clientuploads/resource_center/facilities_management/MissionCriticalFacilities.pdf
Ponemon Institute, 2010, “ National Survey on Data Center Outages,” Ponemon Institute, Traverse City, MI.
Zuo, Z. J. , Hoover, L. R. , and Phillips, A. L. , 2002, An Integrated Thermal Architecture for Thermal Management of High Power Electronics, Millpress, Rotterdam, The Netherlands.
Salim, M. , and Tozer, R. , 2010, “ Data Centers' Energy Auditing and Benchmarking: Progress Update,” ASHRAE Trans., 116(1), pp. 109–117.
Patankar, S. , and Karki, K. , 2004, “ Distribution of Cooling Airflow in a Raised-Floor Data Center,” ASHRAE Trans., 110(2), pp. 629–634.
Sharma, R. K. , Bash, C. E. , and Patel, C. D. , 2002, “ Dimensionless Parameters for Evaluation of Thermal Design and Performance of Large-Scale Data Centers,” AIAA Paper No. 2002-3091.
Muralidharan, B. , Shrivastava, S. , Ibrahim, M. , Alkharabsheh, S. A. , and Sammakia, B. G. , 2013, “ Impact of Cold Aisle Containment on Thermal Performance of Data Center,” ASME Paper No. IPACK2013-73201.
Joshi, Y. , and Kumar, P. , 2012, Energy Efficient Thermal Management of Data Centers, Springer, New York.
Rambo, J. , and Joshi, Y. , 2007, “ Modeling of Data Center Airflow and Heat Transfer: State of the Art and Future Trends,” Distrib. Parallel Databases, 21(2–3), pp. 193–225. [CrossRef]
Rambo, J. , and Joshi, Y. , 2006, “ Reduced-Order Modeling of Multiscale Turbulent Convection: Application to Data Center Thermal Management,” Ph.D. dissertation, Georgia Institute of Technology, Atlanta, GA.
Rambo, J. , and Joshi, Y. , 2005, “ Reduced Order Modeling of Steady Turbulent Flows,” ASME Paper No. HT2005-72143.
Somani, A. , and Joshi, Y. , 2009, “ Data Center Cooling Optimization: Ambient Intelligence Based Load Management (AILM),” ASME Paper No. HT2009-88228.
Samadiani, E. , 2009, “ Energy Efficient Thermal Management of Data Centers Via Open Multi-Scale Design,” Ph.D. dissertation, Georgia Institute of Technology, Atlanta, GA.
Ghosh, R. , and Joshi, Y. , 2013, “ Error Estimation in POD-Based Dynamic Reduced-Order Thermal Modeling of Data Centers,” Int. J. Heat Mass Transfer, 57(2), pp. 698–707. [CrossRef]
Belady, C. , Kelkar, K. , and Patankar, S. , 1999, “ Improving Productivity of Electronic Packaging With Flow Network Modeling (FNM),” Electron. Cool., 5(1), pp. 36–40.
Radmehr, A. , Kelkar, K. , Kelly, P. , Patankar, S. , and Kang, S. , 1999, “ Analysis of the Effect of Bypass on Performance of Heat Sinks Using Flow Network Modeling (FNM),” 15th Annual IEEE Semiconductor Thermal Measurement and Management Systems (SEMI-THERM), San Diego, CA, Mar. 9–11, pp. 42–47.
Steinbrecher, R. , Radmehr, A. , Kelkar, K. , and Patankar, S. , “ Use of Flow Network Modeling (FNM) for the Design of Air-Cooled Servers,” Innovative Research Inc., Minneapolis, MN, http://inres.com/assets/files/macroflow/MF08-Air-Cooled-Server.pdf
Innovative Research, 2003, MacroFlow, Innovative Research, Plymouth, MN.
Kelkar, K. , and Patankar, S. , “ Analysis and Design of Liquid-Cooling Systems Using Flow Network Modeling (FNM),” ASME Paper No. IPACK2003-35233.
Cross, H. , 1936, “ Analysis of Flow in Networks of Conduits or Conductors,” University of Illinois Bulletin, University of Illinois at Urbana-Champaign, Urbana, IL, Report No. 286.
Fernandes, J. , Ghalambor, S. , Docca, A. , Aldham, C. , Agonafer, D. , Chenelly, E. , Chan, B. , and Ellsworth, M. , 2013, “ Combining Computational Fluid Dynamics (CFD) and Flow Network Modeling (FNM) for Design of a Multi-Chip Module (MCM) Cold Plate,” ASME Paper No. IPACK2013-73294.
Radmehr, A. , and Patankar, S. , 2004, “ A Flow Network Analysis of a Liquid Cooling System That Incorporates Microchannel Heat Sinks,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM '04), Las Vegas, NV, June 1–4, pp. 714–721.
Ellsworth, M. , 2014, “ Flow Network Analysis of the IBM Power 775 Supercomputer Water Cooling System,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM), Orlando, FL, May 27–30, pp. 715–722.
Patel, C. D. , Bash, C. E. , Belady, C. , Stahl, L. , and Sullivan, D. , 2001, “ Computational Fluid Dynamics Modeling of High Compute Density Data Centers to Assure System Inlet Air Specifications,” Pacific Rim Technical Conference and Exposition of Packaging and Integration of Electronic and Photonic Systems (IPACK), Kauai, HI, July 8–13, ASME Paper No. IPACK2001-15622.
Schmidt, R. R. , Karki, K. C. , Kelkar, K. M. , Radmehr, A. , and Patankar, S. V. , 2001, “ Measurements and Predictions of the Flow Distribution Through Perforated Tiles in Raised Floor Data Centers,” Pacific Rim Technical Conference and Exposition of Packaging and Integration of Electronic and Photonic Systems (IPACK), Kauai, HI, July 8–13, ASME Paper No. IPACK2001-15728.
Kang, S. , Schmidt, R. , Kelkar, K. M. , Radmehr, A. , and Patankar, S. V. , 2001, “ A Methodology for the Design of Perforated Tiles in Raised Floor Data Centers Using Computational Flow Analysis,” IEEE Trans. Compon. Packag. Technol., 24(2), pp. 177–183. [CrossRef]
Karki, K. , Patankar, S. , and Radmehr, A. , 2003, “ Techniques for Controlling Airflow Distribution in Raised-Floor Data Centers,” ASME Paper No. IPACK2003-35282.
VanGilder, J. , and Schmidt, R. , 2005, “ Airflow Uniformity Through Perforated Tiles in a Raised-Floor Data Center,” ASME Paper No. IPACK2005-73375.
Radmehr, A. , Schmidt, R. , Karki, K. , and Patankar, S. , 2005, “ Distributed Leakage Flow in Raised-Floor Data Centers,” ASME Paper No. IPACK2005-73273.
Abdelmaksoud, W. A. , Khalifa, H. E. , Dang, T. Q. , Elhadidi, B. , Schmidt, R. R. , and Iyengar, M. , 2010, “ Experimental and Computational Study of Perforated Floor Tile in Data Centers,” 12th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), Las Vegas, NV, June 2–5.
Arghode, V. K. , Kumar, P. , Joshi, Y. , Weiss, T. , and Meyer, G. , 2013, “ Rack Level Modeling of Air Flow Through Perforated Tile in a Data Center,” ASME J. Electron. Packag., 135(3), p. 030902. [CrossRef]
Arghode, V. , and Joshi, Y. , 2013, “ Modeling Strategies for Air Flow Through Perforated Tiles in a Data Center,” IEEE Trans. Compon. Packag. Technol., 3(5), pp. 800–810. [CrossRef]
Abdelmaksoud, W. , Dang, T. , Khalifa, H. E. , Schmidt, R. , and Iyengar, M. , 2012, “ Perforated Tile Models for Improving Data Center CFD Simulation,” Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), San Diego, CA, May 30–June 1, pp. 60–67.
Shrivastava, S. K. , Iyengar, M. , Sammakia, B. G. , Schmidt, R. , and Vangilder, J. W. , 2006, “ Experimental-Numerical Comparison for a High-Density Data Center: Hot Spot Fluxes in Excess of 500 W/ft2 ,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM '06), San Diego, CA, May 30–June 2, pp. 402–411.
Tan, S. P. , Toh, K. C. , and Wong, Y. W. , 2007, “ Server-Rack Air Flow and Heat Transfer Interactions in Data Centers,” ASME Paper No. IPACK2007-33672.
Zhang, X. S. , VanGilder, J. W. , Iyengar, M. , and Schmidt, R. R. , 2008, “ Effect of Rack Modeling Detail on the Numerical Results of a Data Center Test Cell,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM 2008), Lake Buena Vista, FL, May 28–31, pp. 1183–1190.
Zhai, J. Z. , Hermansen, K. A. , and Al-Saadi, S. , 2012, “ The Development of Simplified Rack Boundary Conditions for Numerical Data Center Models,” ASHRAE Trans., 118(2), pp. 436–449.
North, T. , 2011, “ Understanding How Cabinet Door Perforation Impacts Airflow,” BICSI News, Sept./Oct., pp. 36–42.
Schmidt, R. , Chu, R. , Ellsworth, M. , Iyengar, M. , Porter, D. , Kamath, V. , and Lehman, B. , 2005, “ Maintaining Datacom Rack Inlet Air Temperatures With Water Cooled Heat Exchanger,” ASME Paper No. IPACK2005-73468.
Coxe, K. , 2009, “ Rack Infrastructure Effects on Thermal Performance of a Server,” Dell Enterprise Thermal Engineering, White Paper.
Rubenstein, B. , 2008, “ Cable Management Arm Airflow Impedance Study,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM 2008), Orlando, FL, May 28–31, pp. 577–582.
Alkharabsheh, S. A. , Sammakia, B. G. , and Murray, B. T. , 2014, “ Experimental Characterization of Pressure Drop in a Server Rack,” IEEE Inter Society Conference on Thermal Phenomena (ITHERM), Orlando, FL, May 27–30, pp. 547–556.
Radmehr, A. , Karki, K. C. , and Patankar, S. V. , 2007, “ Analysis of Airflow Distribution Across a Front-to-Rear Server Rack,” ASME Paper No. IPACK2007-33574.
Ghosh, R. , Sundaralingam, V. , and Joshi, Y. , 2012, “ Effect of Rack Server Population on Temperatures in Data Centers,” Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), San Diego, CA, May 30–June 1, pp. 30–37.
Almoli, A. , Thompson, A. , Kapur, N. , Summers, J. , Thompson, H. , and Hannah, G. , 2012, “ Computational Fluid Dynamic Investigation of Liquid Rack Cooling in Data Centres,” Appl. Energy, 89(1), pp. 150–155. [CrossRef]
Samadiani, E. , Rambo, J. , and Joshi, Y. , 2010, “ Numerical Modeling of Perforated Tile Flow Distribution in a Raised-Floor Data Center,” ASME J. Electron. Packag., 132(2), p. 021002. [CrossRef]
Patankar, S. V. , 2010, “ Airflow and Cooling in a Data Center,” ASME J. Heat Transfer, 132(7), p. 073001. [CrossRef]
Ibrahim, M. , Bhopte, S. , Sammakia, S. , Murray, B. , Iyengar, M. , and Schmidt, R. , 2010 “ Effect of Thermal Characteristics of Electronic Enclosures on Dynamic Data Center Performance,” ASME Paper No. IMECE2010-40914.
Alkharabsheh, S. , Sammakia, B. , Shrivastava, S. , and Schmidt, R. , 2014, “ Dynamic Models for Server Rack and CRAH in a Room Level CFD Model of a Data Center,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), Orlando, FL, May 27–30, pp. 1338–1345.
Schmidt, R. , 2001, “ Effect of Data Center Characteristics on Data Processing Equipment Inlet Temperatures,” ASME Paper No. IPACK2001-15870.
Schmidt, R. , and Cruz, E. , 2002, “ Raised Floor Computer Data Center: Effect on Rack Inlet Temperatures of Chilled Air Exiting Both the Hot and Cold Aisles,” IEEE Inter Society Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM 2002), San Diego, CA, June 1, pp. 580–594.
Schmidt, R. , and Cruz, E. , 2002, “ Raised Floor Computer Data Center: Effect on Rack Inlet Temperatures When High Powered Racks are Situated Amongst Lower Powered Racks,” ASME Paper No. IMECE2002-39652.
Schmidt, R. , and Cruz, E. , 2003, “ Raised Floor Computer Data Center: Effect on Rack Inlet Temperatures When Adjacent Racks are Removed,” ASME Paper No. IPACK2003-35240.
Schmidt, R. , and Cruz, E. , 2003, “ Raised Floor Computer Data Center: Effect of Rack Inlet Temperatures When Rack Flowrates are Reduced,” ASME Paper No. IPACK2003-35241.
Patel, C. D. , Sharma, R. , Bash, C. E. , and Beitelmal, A. , 2002, “ Thermal Considerations in Cooling Large Scale High Compute Density Data Centers,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM 2002), San Diego, CA, June 1, pp. 767–776.
Schmidt, R. , and Cruz, E. , 2004, “ Cluster of High-Powered Racks Within a Raised-Floor Computer Data Center: Effect of Perforated Tile Flow Distribution on Rack Inlet Air Temperatures,” ASME J. Electron. Packag., 126(4), pp. 510–519. [CrossRef]
Schmidt, R. , Cruz, E. , and Iyengar, M. , 2005, “ Challenges of Data Center Thermal Management,” IBM J. Res. Dev., 49(4.5), pp. 709–723. [CrossRef]
Bhopte, S. , Agonafer, D. , Schmidt, R. , and Sammakia, B. , 2006, “ Optimization of Data Center Room Layout to Minimize Rack Inlet Air Temperature,” ASME J. Electron. Packag. 128(4), pp. 380–387. [CrossRef]
Bhopte, S. , Sammakia, B. , Schmidt, R. , Iyenger, M. , and Agonafer, D. , 2006, “ Effect of Under Floor Blockages on Data Center Performance,” Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM '06), San Diego, CA, May 30–June 2, pp. 426–433.
Hannaford, P. , 2006, “ Ten Cooling Solutions to Support High-Density Server Deployment,” White Paper, American Power Conversion, West Kingston, RI, Report No. WP-42 v5.
Greenberg, S. , Mills, E. , Tschudi, B. , Rumsey, P. , and Myatt, B. , 2006, “ Best Practices for Data Centers: Lessons Learned From Benchmarking 22 Data Centers,” ACEEE Summer Study on Energy Efficiency in Buildings, Pacific Grove, CA, Aug. 13–18, pp. 76–87.
Schmidt, R. , and Iyengar, M. , 2007, “ Best Practices for Data Center Thermal and Energy Management: Review of Literature,” ASHRAE Trans., 113(1), pp. 206–218.
Nagarathinam, S. , Fakhim, B. , Behnia, M. , and Armfield, S. , 2013, “ A Comparison of Parametric and Multivariable Optimization Techniques in a Raised-Floor Data Center,” ASME J. Electron. Packag., 135(3), p. 030905. [CrossRef]
Sorell, V. , Escalante, S. , and Yang, J. , 2005, “ Comparison of Overhead and Underfloor Air Delivery Systems in a Data Center Environment Using CFD Modeling,” ASHRAE Trans., 111(2), pp. 756–764.
Iyengar, M. , Schmidt, R. , Sharma, A. , McVicker, G. , Shrivastava, S. , Sri-Jayantha, S. , Amemiya, Y. , Dang, H. , Chainer, T. , and Sammakia, B. , 2005, “ Thermal Characterization of Non-Raised Floor Air Cooled Data Centers Using Numerical Modeling,” ASME Paper No. IPACK2005-73387.
Demetriou, D. W. , and Khalifa, H. E. , 2011, “ Evaluation of a Data Center Recirculation Non-Uniformity Metric Using Computational Fluid Dynamics,” ASME Paper No. IPACK2011-52005.
Alkharabsheh, S. , Sammakia, B. , Shrivastava, S. , and Schmidt, R. , 2013, “ Utilizing Practical Fan Curves in CFD Modeling of a Data Center,” IEEE Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM), San Jose, CA, Mar. 17–21, pp. 211–215.
Alkharabsheh, S. , Sammakia, B. , Shrivastava, S. , Ellsworth, M. , David, M. , and Schmidt, R. , 2013, “ Numerical Steady State and Dynamic Study Using Calibrated Fan Curves for CRAC Units and Servers,” ASME Paper No. IPACK2013-73217.
Alkharabsheh, S. , Sammakia, B. , Shrivastava, S. , and Schmidt, R. , 2013, “ A Numerical Study for Contained Cold Aisle Data Center Using CRAC and Server Calibrated Fan Curves,” ASME Paper No. IMECE2013-65145.
Alkharabsheh, S. , Sammakia, B. , and Shrivastava, S. , 2015, “ Experimentally Validated CFD Model for a Data Center With Cold Aisle Containment,” ASME J. Electron. Packag., 137(2), p. 021010. [CrossRef]
Bash, C. E. , Patel, C. D. , and Sharma, R. K. , 2006, “ Dynamic Thermal Management of Air Cooled Data Centers,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM '06), San Diego, CA, May 30–June 2, pp. 445–452.
Kummert, M. , Dempster, W. M. , and McLean, K. , 2009, “ Transient Thermal Analysis of a Data Centre Cooling System Under Fault Conditions,” 11th International Building Performance Simulation Association Conference and Exhibition, Glasgow, UK, July 27–30.
Gondipalli, S. , Ibrahim, M. , Bhopte, S. , Sammakia, B. , Murray, B. , Ghose, K. , Iyengar, M. , and Schmidt, R. , 2010, “ Numerical Modeling of Data Center With Transient Boundary Conditions,” 12th IEEE Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM), Las Vegas, NV, June 2–5.
Beitelmal, A. H. , and Patel, C. D. , 2007, “ Thermo-Fluids Provisioning of a High Performance High Density Data Center,” Distrib. Parallel Databases, 21(2–3), pp. 227–238. [CrossRef]
Sharma, R. K. , Bash, C. E. , Patel, C. D. , Friedrich, R. J. , and Chase, J. S. , 2005, “ Balance of Power: Dynamic Thermal Management for Internet Data Centers,” Internet Comput., 9(1), pp. 42–49. [CrossRef]
Patel, C. , Bash, C. , Sharma, R. , Beitelmal, M. , and Friedrich, R. , 2003, “ Smart Cooling of Data Centers,” ASME Paper No. IPACK2003-35059.
Khankari, K. , 2010, “ Thermal Mass Availability for Cooling Data Centers During Power Shutdown,” ASHRAE Trans., 116(Pt. 2), pp. 205–217.
Khankari, K. , 2011, “ Rate of Heating Analysis of Data Centers During Power Shutdown,” ASHRAE Trans., 117(Pt. 1), pp. 212–221.
Sundaralingam, V. , Isaacs, S. , Kumar, P. , and Joshi, Y. , 2011, “ Modeling Thermal Mass of a Data Center Validated With Actual Data Due to Chiller Failure,” ASME Paper No. IMECE2011-65573.
Ibrahim, M. , Shrivastava, S. , Sammakia, B. , and Ghose, K. , 2012, “ Thermal Mass Characterization for a Server at Different Fan Speeds,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), San Diego, CA, May 30–June 1, pp. 457–465.
Erden, H. S. , Khalifa, H. E. , and Schmidt, R. R. , 2013, “ Transient Thermal Response of Servers Through Air Temperature Measurements,” ASME Paper No. IPACK2013-73281.
Erden, H. S. , Khalifa, H. E. , and Schmidt, R. R. , 2014, “ Determination of the Lumped-Capacitance Parameters of Air-Cooled Servers Through Air Temperature Measurements,” ASME J. Electron. Packag., 136(3), p. 031005. [CrossRef]
Alkharabsheh, S. , Sammakia, B. , Shrivastava, S. , and Schmidt, R. , 2014 “ Implementing Rack Thermal Capacity in a Room Level CFD Model of a Data Center,” IEEE Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM), San Jose, CA, Mar. 9–13, pp. 188–192.
Shrivastava, S. , and Ibrahim, M. , 2013, “ Benefit of Cold Aisle Containment During Cooling Failure,” ASME Paper No. IPACK2013-73219.
Schmidt, R. , 2004, “ Thermal Profile of a High-Density Data Center: Methodology to Thermally Characterize a Data Center,” ASHRAE Trans., 110(2), pp. 635–642.
Schmidt, R. , and Iyengar, M. , 2005, “ Effect of Data Center Layout on Rack Inlet Air Temperatures,” ASME Paper No. IPACK2005-73385.
Karlsson, J. F. , and Moshfegh, B. , 2005, “ Investigation of Indoor Climate and Power Usage in a Data Center,” Energy Build., 37(10), pp. 1075–1083. [CrossRef]
Boucher, T. D. , Auslander, D. M. , Bash, C. E. , Federspiel, C. C. , and Patel, C. D. , 2005, “ Viability of Dynamic Cooling Control in a Data Center Environment,” ASME J. Electron. Packag., 128(2), pp. 137–144. [CrossRef]
Beitelmal, M. H. , Wang, Z. , Felix, C. , Bash, C. , Hoover, C. , and McReynolds, A. , 2009, “ Local Cooling Control of Data Centers With Adaptive Vent Tiles,” ASME Paper No. InterPACK2009-89035.
Chen, K. , Auslander, D. M. , Bash, C. E. , and Patel, C. D. , 2006, “ Local Temperature Control in Data Center Cooling: Part I, Correlation Matrix,” HP Enterprise Software and Systems Laboratory, Report No. HPL-2006-42.
Chen, K. , Bash, C. E. , Auslander, D. M. , and Patel, C. D. , 2006, “ Local Temperature Control in Data Center Cooling: Part II, Statistical Analysis,” HP Enterprise Software and Systems Laboratory, Report No. HPL-2006-43.
Abdelmaksoud, W. A. , Dang, T. Q. , Khalifa, H. E. , and Schmidt, R. R. , 2013, “ Improved Computational Fluid Dynamics Model for Open-Aisle Air-Cooled Data Center Simulations,” ASME J. Electron. Packag., 135(3), p. 030901. [CrossRef]
Arghode, V. K. , and Joshi, Y. , 2015, “ Experimental Investigation of Air Flow Through a Perforated Tile in a Raised Floor Data Center,” ASME J. Electron. Packag., 137(1), p. 011011. [CrossRef]
Bhopte, S. , Sammakia, B. , Iyengar, M. , and Schmidt, R. , 2007, “ Experimental Investigation of the Impact of Under Floor Blockages on Flow Distribution in a Data Center Cell,” ASME Paper No. IPACK2007-33540.
Vangilder, J. W. , Pardey, Z. M. , Zhang, X. , and Healey, C. , 2013, “ Experimental Measurement of Server Thermal Effectiveness for Compact Transient Data Center Model,” ASME Paper No. IPACK2013-73155.
Iyengar, M. , Schmidt, R. , Hamann, H. , and Vangilder, J. , 2007, “ Comparison Between Numerical and Experimental Temperature Distributions in a Small Data Center Test Cell,” ASME Paper No. IPACK2007-33508.
Fakhim, B. , Behnia, M. , Armfield, S. W. , and Srinarayana, N. , 2011, “ Cooling Solutions in an Operational Data Centre: A Case Study,” Appl. Therm. Eng., 31(14–15), pp. 2279–2291. [CrossRef]
Arghode, V. K. , and Joshi, Y. , 2014, “ Room Level Modeling of Air Flow in a Contained Data Center Aisle,” ASME J. Electron. Packag., 136(1), p. 011011. [CrossRef]
Simons, R. , Moran, K. , Antonetti, V. , and Chu, R. , 1982, “ Thermal Design of the IBM 3081 Computer,” National Electronic Packaging and Production Conference, Anaheim, CA, Feb. 23–25, pp. 124–141.
Chu, R. , Hwang, U. , and Simons, R. , 1982, “ Conduction Cooling for an LSI Package: A One Dimensional Approach,” IBM J. Res. Dev., 26(1), pp. 45–54. [CrossRef]
Hwang, U. , and Moran, K. , 1990, “ Cold Plate for IBM Thermal Conduction Module Electronic Modules,” Heat Transfer in Electronic and Microelectronic Equipment, Vol. 29, A. E. Bergles, ed., Hemisphere, New York, pp. 495–508.
Delia, D. , Gilgert, T. , Graham, N. , Hwang, U. , Ing, P. , Kan, J. , Kemink, R. , Maling, G. , Martin, R. , Moran, K. , Reyes, J. , Schmidt, R. , and Steinbrecher, R. , 1992, “ System Cooling Design for the Water-Cooled IBM Enterprise System/9000 Processors,” IBM J. Res. Dev., 36(4), pp. 791–803. [CrossRef]
Lei, N. , Skandakumaran, P. , and Ortega, A. , 2006, “ Experiments and Modeling of Multilayer Copper Minichannel Heat Sinks in Single-Phase Flow,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM '06), San Diego, CA, May 30–June 2, pp. 9–18.
Dede, E. , 2014, “ Single-Phase Microchannel Cold Plate for Hybrid Vehicle Electronics,” IEEE Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM), San Jose, CA, Mar. 9–13, pp. 118–124.
Iyengar, M. , David, M. , Parida, P. , Kamath, V. , Kochuparambil, B. , Graybill, D. , Schultz, M. , Gaynes, M. , Simons, R. , Schmidt, R. , and Chainer, T. , 2012, “ Server Liquid Cooling With Chiller-Less Data Center Design to Enable Significant Energy Savings,” IEEE Semiconductor Thermal Measurement and Management Sysposium (SEMI-THERM), San Jose, CA, Mar. 18–22, pp. 212–223.
David, M. , Iyengar, M. , Parida, P. , Simons, R. , Schultz, M. , Gaynes, M. , Schmidt, R. , and Chainer, T. , 2012, “ Experimental Characterization of an Energy Efficient Chiller-Less Data Center Test Facility With Warm Water Cooled Servers,” 28th Annual IEEE Semiconductor Thermal and Measurement and Management Symposium (SEMI-THERM), San Jose, CA, Mar. 18–22, pp. 232–237.
Parida, P. , David, M. , Iyengar, M. , Schultz, M. , Gaynes, M. , Kamath, V. , Kochuparambil, B. , and Chainer, T. , 2012, “ Experimental Investigation of Water Cooled Server Microprocessors and Memory Devices in an Energy Efficient Chiller-Less Data Center,” 28th Annual IEEE Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM), San Jose, CA, Mar. 18–22, pp. 224–231.
Iyengar, M. , David, M. , Parida, P. , Kamath, V. , Kochuparambil, B. , Graybill, D. , Schultz, M. , Gaynes, M. , Simons, R. , Schmidt, R. , and Chainer, T. , 2012, “ Extreme Energy Efficiency Using Water Cooled Server Inside a Chiller-Less Data Center,” 13th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), San Diego, CA, May 30–June 1, pp. 137–149.
David, M. , Iyengar, M. , Parida, P. , Simons, R. , Schultz, M. , Gaynes, M. , Schmidt, R. , and Chainer, T. , 2012, “ Impact of Operating Conditions on a Chiller-Less Data Center Test Facility With Liquid Cooled Servers,” 13th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), San Diego, CA, May 30–June 1, pp. 562–573.
Eiland, R. , Fernandes, J. , Vallejo, M. , Agonafer, D. , and Mulay, V. , 2014, “ Flow Rate and Inlet Temperature Considerations for Direct Immersion of a Single Server in Mineral Oil,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), Orlando, FL, May 27–30, pp. 706–714.
Tuma, P. , 2010, “ The Merits of Open Bath Immersion Cooling of Datacom Equipment,” IEEE Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM), Santa Clara, CA, Feb. 21–25, pp. 123–131.
Patterson, M. K. , and Fenwick, D. , 2008, “ The State of Data Center Cooling: A Review of Current Air and Liquid Cooling Solutions,” Intel, Digital Enterprise Group, White Paper.
Blough, B. , ed., 2011, “ Qualitative Analysis of Cooling Architectures for Data Centers,” The Green Grid, Beaverton, OR, Report No. 30.
Niemann, J. , 2008, “ Hot Aisle vs. Cold Aisle Containment,” American Power Conversion, West Kingston, RI, White Paper No. 135.
Niemann, J. , Brown, K. , and Avelar, V. , 2010, “ Hot-Aisle vs. Cold-Aisle Containment for Data Centers,” American Power Conversion, West Kingston, RI, White Paper No. 135, rev. 1.
Tozer, R. , and Salim, M. , 2010, “ Data Center Air Management Metrics-Practical Approach,” 12th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM), Las Vegas, NV, June 2–5, pp. 1–8.
Gondipalli, S. , Sammakia, B. , Bhopte, S. , Schmidt, R. , Iyengar, M. , and Murray, B. , 2009, “ Optimization of Cold Aisle Isolation Designs for a Data Center With Roofs and Doors Using Slits,” ASME Paper No. InterPACK2009-89203.
Emerson Network Power, 2010, “ Combining Cold Aisle Containment With Intelligent Control to Optimize Data Center Cooling Efficiency,” Emerson Network Power, Columbus, OH, White Paper.
Pervila, M. , and Kangasharju, J. , 2011, “ Cold Air Containment,” 2nd ACM SIGCOMM Workshop on Green Networking (GreenNets '11), Toronto, ON, Canada, Aug. 15–19, pp. 7–12.
Schmidt, R. , Vallury, A. , and Iyengar, M. , 2011, “ Energy Savings Through Hot and Cold Aisle Containment Configurations for Air Cooled Servers in Data Centers,” ASME Paper No. IPACK2011-52206.
Shrivastava, S. K. , Calder, A. R. , and Ibrahim, M. , 2012, “ Quantitative Comparison of Air Containment Systems,” 13th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), San Diego, CA, May 30–June 1, pp. 68–77.
Demetriou, D. W. , and Khalifa, H. E. , 2011, “ Energy Modeling Of Air-Cooled Data Centers—Part l: The Optimization of Enclosed Aisle Configurations,” ASME Paper No. IPACK2011-52003.
Xu, Y. , Gao, Z. , and Deng, Y. , 2013, “ Analyzing the Cooling Behavior of Hot and Cold Aisle Containment in Data Centers,” 4th IEEE International Conference on Emerging Intelligent Data and Web Technologies (EIDWT), Xi'an, China, Sept. 9–11, pp. 685–689.
Sundaralingam, V. , Arghode, V. K. , Joshi, Y. , and Phelps, W. , 2015, “ Experimental Characterization of Various Cold Aisle Containment Configurations for Data Centers,” ASME J. Electron. Packag., 137(1), p. 011007. [CrossRef]
Muralidharan, B. , Ibrahim, M. , Shrivastava, S. , Alkharabsheh, S. , and Sammakia, B. , 2013, “ Advantages of Cold Aisle Containment (CAC) System and Its Leakage Quantification,” ASME Paper No. IPACK2013-73201.
Kennedy, J. , 2012, “ Ramification of Server Airflow Leakage in Data Centers With Aisle Containment,” Tate Access Floors, Jessup, MD, White Paper.
Alkharabsheh, S. A. , Muralidharan, B. , Ibrahim, M. , Shrivastava, S. , and Sammakia, B. , 2013, “ Open and Contained Cold Aisle Experimentally Validated CFD Model Implementing CRAC and Servers Fan Curves on a Testing Data Center,” ASME Paper No. IPACK2013-73214.
Alkharabsheh, S. A. , Shrivastava, S. K. , and Sammakia, B. G. , 2013, “ Effect of Containment System Perforation on Data Center Flow Rates and Temperatures,” ASME Paper No. IPACK2013-73216.
Gebrehiwot, B. , Dhiman, N. , Rajagopalan, K. , Agonafer, D. , Kannan, N. , Hoverson, J. , and Kaler, M. , 2013, “ CFD Modeling of Indirect/Direct Evaporative Cooling Unit for Modular Data Center Applications,” ASME Paper No. IPACK2013-73302.
Vasani, R. , and Agonafer, D. , 2014, “ Numerical Simulation of Fogging in a Square Duct—A Data Center Perspective,” 30th Annual IEEE Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM), San Jose, CA, Mar. 9–13, pp. 45–52.
Seger, D. , and Solberg, A. , “ Economizer Performance: Applying CFD Modeling to the Data Center's Exterior,” SearchDataCenter.com, accessed October 26, 2014, http://searchdatacenter.techtarget.com/tip/Economizer-performance-Applying-CFD-modeling-to-the-data-centers-exterior
Beaty, D. L. , 2004, “ Liquid Cooling: Friend or Foe,” ASHRAE Trans., 110(2), pp. 643–652.
Ellsworth, M. J. , Campbell, L. A. , Simons, R. E. , Iyengar, M. , and Schmidt, R. R. , 2008, “ The Evolution of Water Cooling for IBM Large Server Systems: Back to the Future,” 11th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM 2008), Lake Buena Vista, FL, May 28–31, pp. 266–274.
ASHRAE TC 9.9, 2011, “ Thermal Guidelines for Liquid Cooled Data Processing Environments,” American Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE), White Paper.
Heydari, A. , and Sabounchi, P. , 2004, “ Refrigeration Assisted Spot Cooling of a High Heat Density Data Center,” Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM '04), Las Vegas, NV, June 1–4, pp. 601–606.
Mulay, V. , Agonafer, D. , and Schmidt, R. , 2008, “ Liquid Cooling for Thermal Management of Data Centers,” ASME Paper No. IMECE2008-68743.
Schmidt, R. , and Iyengar, M. , 2009, “ Server Rack Rear Door Heat Exchanger and the New ASHRAE Recommended Environmental Guidelines,” ASME Paper No. InterPACK2009-89212.
Tsukamoto, T. , Takayoshi, J. , Schmidt, R. , and Iyengar, M. , 2009, “ Refrigeration Heat Exchanger Systems for Server Rack Cooling in Data Centers,” ASME Paper No. InterPACK2009-89258.
Iyengar, M. , Schmidt, R. , Kamath, V. , and Kochuparambil, B. , 2011, “ Experimental Characterization of Server Rack Energy Use at Elevated Ambient Temperatures,” ASME Paper No. IPACK2011-52207.
Iyengar, M. , Schmidt, R. , and Caricari, J. , 2010, “ Reducing Energy Usage in Data Centers Through Control of Room Air Conditioning Units,” 12th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), Las Vegas, NV, June 2–5, pp. 1–11.
Fernandes, J. , Ghalambor, S. , Agonafer, D. , Kamath, V. , and Schmidt, R. , 2012, “ Multi-Design Variable Optimization for a Fixed Pumping Power of a Water-Cooled Cold Plate for High Power Electronics Applications,” 13th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), San Diego, CA, May 30–June 1, pp. 684–692.
Goth, G. , Arvelo, A. , Eagle, J. , Ellsworth, M. , Marston, K. , Sinha, A. , and Zitz, J. , 2012, “ Thermal and Mechanical Analysis and Design of the IBM Power 775 Water Cooled Supercomputing Central Electronics Complex,” 13th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), San Diego, CA, May 30–June 1, pp. 700–709.
Brunschwiler, T. , Rothuizen, H. , Paredes, S. , Michel, B. , Colgan, E. , and Bezama, P. , 2009, “ Hotspot-Adapted Cold Plates to Maximize System Efficiency,” 15th IEEE International Workshop on Thermal Investigations of ICs and Systems (THERMINIC 2009), Leuven, Belgium, Oct. 7–9, pp. 150–156.
Copyright © 2015 by ASME
View article in PDF format.

References

U.S. Environmental Protection Agency (EPA), 2007, “ Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-43,” U.S. Environmental Protection Agency, Washington, DC.
Columbus, L., 2012, “ Predicting Enterprise Cloud Computing Growth,” Forbes, accessed January 31, 2014, http://www.forbes.com/sites/louiscolumbus/2013/09/04/predicting-enterprise-cloud-computing-growth/
Koomey, J. , 2011, Growth in Data Center Electricity Use 2005 to 2010, Analytics Press, Oakland, CA.
Venkatraman, A. , 2013, “ Global Census Shows Datacenter Power Demand Grew 63% in 2012,” DatacenterDynamics (DCD) Intelligence, London.
Abramovitz, B., 2013, “ Industry Research Monitor Data Centers,” General Electric Capital, Norwalk, CT.
McNevin, A., 2014, “ 15% Growth Forecast for North America Colocation Market 2014,” DatacenterDynamics, London.
Stansberry, M. , and Kudritzki, J. , 2013, “ Uptime Institute 2012 Data Center Industry Survey,” Uptime Institute, New York.
Wikipedia, 2014, “Mission Critical,” accessed January 31, 2014, http://en.wikipedia.org/wiki/Mission_critical
“Mission Critical Facilities Management Principals of Design, Operations, and Maintenance,” 2012, Last accessed January 31, 2014, http://www.construction.org/clientuploads/resource_center/facilities_management/MissionCriticalFacilities.pdf
Ponemon Institute, 2010, “ National Survey on Data Center Outages,” Ponemon Institute, Traverse City, MI.
Zuo, Z. J. , Hoover, L. R. , and Phillips, A. L. , 2002, An Integrated Thermal Architecture for Thermal Management of High Power Electronics, Millpress, Rotterdam, The Netherlands.
Salim, M. , and Tozer, R. , 2010, “ Data Centers' Energy Auditing and Benchmarking: Progress Update,” ASHRAE Trans., 116(1), pp. 109–117.
Patankar, S. , and Karki, K. , 2004, “ Distribution of Cooling Airflow in a Raised-Floor Data Center,” ASHRAE Trans., 110(2), pp. 629–634.
Sharma, R. K. , Bash, C. E. , and Patel, C. D. , 2002, “ Dimensionless Parameters for Evaluation of Thermal Design and Performance of Large-Scale Data Centers,” AIAA Paper No. 2002-3091.
Muralidharan, B. , Shrivastava, S. , Ibrahim, M. , Alkharabsheh, S. A. , and Sammakia, B. G. , 2013, “ Impact of Cold Aisle Containment on Thermal Performance of Data Center,” ASME Paper No. IPACK2013-73201.
Joshi, Y. , and Kumar, P. , 2012, Energy Efficient Thermal Management of Data Centers, Springer, New York.
Rambo, J. , and Joshi, Y. , 2007, “ Modeling of Data Center Airflow and Heat Transfer: Stat of the Art and Future Trends ,” Distrib. Parallel Databases, 21(2–3), pp. 193–225. [CrossRef]
Rambo, J. , and Joshi, Y. , 2006, “ Reduced-Order Modeling of Multiscale Turbulent Convection: Application to Data Center Thermal Management,” Ph.D. disseration, Georgia Institute of Technology, Atlanta, GA.
Rambo, J. , and Joshi, Y. , 2005, “ Reduced Order Modeling of Steady Turbulent Flows,” ASME Paper No. HT2005-72143.
Somani, A. , and Joshi, Y. , 2009, “ Data Center Cooling Optimization: Ambient Intelligence Based Load Management (AILM),” ASME Paper No. HT2009-88228.
Samadiani, E. , 2009, “ Energy Efficient Thermal Management of Data Centers Via Open Multi-Scale Design,” Ph.D. disseration, Georgia Institute of Technology, Atlanta, GA.
Ghosh, R. , and Joshi, Y. , 2013, “ Error Estimation in POD-Based Dynamic Reduced-Order Thermal Modeling of Data Centers,” Int. J. Heat Mass Transfer, 57(2), pp. 698–707. [CrossRef]
Belady, C. , Kelkar, K. , and Patankar, S. , 1999, “ Improving Productivity of Electronic Packaging With Flow Network Modeling (FNM),” Electron. Cool., 5(1), pp. 36–40.
Radmehr, A. , Kelkar, K. , Kelly, P. , Patankar, S. , and Kang, S. , 1999, “ Analysis of the Effect of Bypass on Performance of Heat Sinks Using Flow Network Modeling (FNM),” 15th Annual IEEE Semiconductor Thermal Measurement and Management Systems (SEMI-THERM), San Diego, CA, Mar. 9–11, pp. 42–47.
Steinbrecher, R. , Radmehr, A. , Kelkar, K. , and Patankar, S. , “ Use of Flow Network Modeling (FNM) for the Design of Air-Cooled Servers,” Innovative Research Inc., Minneapolis, MN, http://inres.com/assets/files/macroflow/MF08-Air-Cooled-Server.pdf
Innovative Research, 2003, MacroFlow, Innovative Research, Plymouth, MN.
Kelkar, K. , and Patankar, S. , “ Analysis and Design of Liquid-Cooling Systems Using Flow Network Modeling (FNM),” ASME Paper No. IPACK2003-35233.
Cross, H. , 1936, “ Analysis of Flow in Networks of Conduits or Conductors,” University of Illinois Bulletin, University of Illinois at Urbana-Champaign, Urbana, IL, Report No. 286.
Fernandes, J. , Ghalambor, S. , Docca, A. , Aldham, C. , Agonafer, D. , Chenelly, E. , Chan, B. , and Ellsworth, M. , 2013, “ Combining Computational Fluid Dynamics (CFD) and Flow Network Modeling (FNM) for Design of a Multi-Chip Module (MCM) Cold Plate,” ASME Paper No. IPACK2013-73294.
Radmehr, A. , and Patankar, S. , 2004, “ A Flow Network Analysis of a Liquid Cooling System That Incorporates Microchannel Heat Sinks,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM '04), Las Vegas, NV, June 1–4, pp. 714–721.
Ellsworth, M. , 2014, “ Flow Network Analysis of the IBM Power 775 Supercomputer Water Cooling System,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM), Orlando, FL, May 27–30, pp. 715–722.
Patel, C. D. , Bash, C. E. , Belady, C. , Stahl, L. , and Sullivan, D. , 2001, “ Computational Fluid Dynamics Modeling of High Compute Density Data Centers to Assure System Inlet Air Specifications,” Pacific Rim Technical Conference and Exposition of Packaging and Integration of Electronic and Photonic Systems (IPACK), Kauai, HI, July 8–13, ASME Paper No. IPACK2001-15622.
Schmidt, R. R. , Karki, K. C. , Kelkar, K. M. , Radmehr, A. , and Patnkar, S. V. , 2001, “ Measurements and Predictions of the Flow Distribution Through Perforated Tiles in Raised Floor Data Centers,” Pacific Rim Technical Conference and Exposition of Packaging and Integration of Electronic and Photonic Systems (IPACK), Kauai, HI, July 8–13, ASME Paper No. IPACK2001-15728.
Kang, S. , Schmidt, R. , Kelkar, K. M. , Radmehr, A. , and Patankar, S. V. , 2001, “ A Methodology for the Design of Perforated Tiles in Raised Floor Data Centers Using Computational Flow Analysis,” IEEE Trans. Compon. Packag. Technol., 24(2), pp. 177–183. [CrossRef]
Karki, K. , Patankar, S. , and Radmehr, A. , 2003, “ Techniques for Controlling Airflow Distribution in Raised-Floor Data Centers,” ASME Paper No. IPACK2003-35282.
VanGilder, J. , and Schmidt, R. , 2005, “ Airflow Uniformity Through Perforated Tiles in a Raised-Floor Data Center,” ASME Paper No. IPACK2005-73375.
Radmehr, A. , Schmidt, R. , Karki, K. , and Patankar, S. , 2005, “ Distributed Leakage Flow in Raised-Floor Data Centers,” ASME Paper No. IPACK2005-73273.
Abdelmaksoud, W. A. , Khalifa, H. E. , Dang, T. Q. , Elhadidi, B. , Schmidt, R. R. , and Iyengar, M. , 2010, “ Experimental and Computational Study of Perforated Floor Tile in Data Centers,” 12th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), Las Vegas, NV, June 2–5.
Arghode, V. K. , Kumar, P. , Joshi, Y. , Weiss, T. , and Meyer, G. , 2013, “ Rack Level Modeling of Air Flow Through Perforated Tile in a Data Center,” ASME J. Electron. Packag., 135(3), p. 030902. [CrossRef]
Arghode, V. , and Joshi, Y. , 2013, “ Modeling Strategies for Air Flow Through Perforated Tiles in a Data Center,” IEEE Trans. Compon. Packag. Technol., 3(5), pp. 800–810. [CrossRef]
Abdelmaksoud, W. , Dang, T. , Khalifa, H. E. , Schmidt, R. , and Iyengar, M. , 2012, “ Perforated Tile Models for Improving Data Center CFD Simulation,” Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), San Diego, CA, May 30–June 1, pp. 60–67.
Shrivastava, S. K. , Iyengar, M. , Sammakia, B. G. , Schmidt, R. , and Vangilder, J. W. , 2006, “ Experimental-Numerical Comparison for a High-Density Data Center: Hot Spot Fluxes in Excess of 500 W/ft2 ,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM '06), San Diego, CA, May 30–June 2, pp. 402–411.
Tan, S. P. , Toh, K. C. , and Wong, Y. W. , 2007, “ Server-Rack Air Flow and Heat Transfer Interactions in Data Centers,” ASME Paper No. IPACK2007-33672.
Zhang, X. S. , VanGilder, J. W. , Iyengar, M. , and Schmidt, R. R. , 2008, “ Effect of Rack Modeling Detail on the Numerical Results of a Data Center Test Cell,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM 2008), Lake Buena Vista, FL, May 28–31, pp. 1183–1190.
Zhai, J. Z. , Hermansen, K. A. , and Al-Saadi, S. , 2012, “ The Development of Simplified Rack Boundary Conditions for Numerical Data Center Models,” ASHRAE Trans., 118(2), pp. 436–449.
North, T. , 2011, “ Understanding How Cabinet Door Perforation Impacts Airflow,” BICSI News, Sept./Oct., pp. 36–42.
Schmidt, R. , Chu, R. , Ellsworth, M. , Iyengar, M. , Porter, D. , Kamath, V. , and Lehman, B. , 2005, “ Maintaining Datacom Rack Inlet Air Temperatures With Water Cooled Heat Exchanger,” ASME Paper No. IPACK2005-73468.
Coxe, K. , 2009, “ Rack Infrastructure Effects on Thermal Performance of a Server,” Dell Enterprise Thermal Engineering, White Paper.
Rubenstein, B. , 2008, “ Cable Management Arm Airflow Impedance Study,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM 2008), Orlando, FL, May 28–31, pp. 577–582.
Alkharabsheh, S. A. , Sammakia, B. G. , and Murray, B. T. , 2014, “ Experimental Characterization of Pressure Drop in a Server Rack,” IEEE Inter Society Conference on Thermal Phenomena (ITHERM), Orlando, FL, May 27–30, pp. 547–556.
Radmehr, A. , Karki, K. C. , and Patankar, S. V. , 2007, “ Analysis of Airflow Distribution Across a Front-to-Rear Server Rack,” ASME Paper No. IPACK2007-33574.
Ghosh, R. , Sundaralingam, V. , and Joshi, Y. , 2012, “ Effect of Rack Server Population on Temperatures in Data Centers,” Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), San Diego, CA, May 30–June 1, pp. 30–37.
Almoli, A. , Thompson, A. , Kapur, N. , Summers, J. , Thompson, H. , and Hannah, G. , 2012, “ Computational Fluid Dynamic Investigation of Liquid Rack Cooling in Data Centres,” Appl. Energy, 89(1), pp. 150–155. [CrossRef]
Samadiani, E. , Rambo, J. , and Joshi, Y. , 2010, “ Numerical Modeling of Perforated Tile Flow Distribution in a Raised-Floor Data Center,” ASME J. Electron. Packag., 132(2), p. 021002. [CrossRef]
Patankar, S. V. , 2010, “ Airflow and Cooling in a Data Center,” ASME J. Heat Transfer, 132(7), p. 073001. [CrossRef]
Ibrahim, M. , Bhopte, S. , Sammakia, S. , Murray, B. , Iyengar, M. , and Schmidt, R. , 2010 “ Effect of Thermal Characteristics of Electronic Enclosures on Dynamic Data Center Performance,” ASME Paper No. IMECE2010-40914.
Alkharabsheh, S. , Sammakia, B. , Shrivastava, S. , and Schmidt, R. , 2014, “ Dynamic Models for Server Rack and CRAH in a Room Level CFD Model of a Data Center,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), Orlando, FL, May 27–30, pp. 1338–1345.
Schmidt, R. , 2001, “ Effect of Data Center Characteristics on Data Processing Equipment Inlet Temperatures,” ASME Paper No. IPACK2001-15870.
Schmidt, R. , and Cruz, E. , 2002, “ Raised Floor Computer Data Center: Effect on Rack Inlet Temperatures of Chilled Air Exiting Both the Hot and Cold Aisles,” IEEE Inter Society Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM 2002), San Diego, CA, June 1, pp. 580–594.
Schmidt, R. , and Cruz, E. , 2002, “ Raised Floor Computer Data Center: Effect on Rack Inlet Temperatures When High Powered Racks are Situated Amongst Lower Powered Racks,” ASME Paper No. IMECE2002-39652.
Schmidt, R. , and Cruz, E. , 2003, “ Raised Floor Computer Data Center: Effect on Rack Inlet Temperatures When Adjacent Racks are Removed,” ASME Paper No. IPACK2003-35240.
Schmidt, R. , and Cruz, E. , 2003, “ Raised Floor Computer Data Center: Effect of Rack Inlet Temperatures When Rack Flowrates are Reduced,” ASME Paper No. IPACK2003-35241.
Patel, C. D. , Sharma, R. , Bash, C. E. , and Beitelmal, A. , 2002, “ Thermal Considerations in Cooling Large Scale High Compute Density Data Centers,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM 2002), San Diego, CA, San Diego, CA, June 1, pp. 767–776.
Schmidt, R. , and Cruz, E. , 2004, “ Cluster of High-Powered Racks Within a Raised-Floor Computer Data Center: Effect of Perforated Tile Flow Distribution on Rack Inlet Air Temperatures,” ASME J. Electron. Packag., 126(4), pp. 510–519. [CrossRef]
Schmidt, R. , Cruz, E. , and Iyengar, M. , 2005, “ Challenges of Data Center Thermal Management,” IBM J. Res. Dev., 49(4.5), pp. 709–723. [CrossRef]
Bhopte, S. , Agonafer, D. , Schmidt, R. , and Sammakia, B. , 2006, “ Optimization of Data Center Room Layout to Minimize Rack Inlet Air Temperature,” ASME J. Electron. Packag. 128(4), pp. 380–387. [CrossRef]
Bhopte, S. , Sammakia, B. , Schmidt, R. , Iyenger, M. , and Agonafer, D. , 2006, “ Effect of Under Floor Blockages on Data Center Performance,” Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM '06), San Diego, CA, May 30–June 2, pp. 426–433.
Hannaford, P. , 2006, “ Ten Cooling Solutions to Support High-Density Server Deployment,” White Paper, American Power Conversion, West Kingston, RI, Report No. WP-42 v5.
Greenberg, S. , Mills, E. , Tschudi, B. , Rumsey, P. , and Myatt, B. , 2006, “ Best Practices for Data Centers: Lessons Learned From Benchmarking 22 Data Centers,” ACEEE Summer Study on Energy Efficiency in Buildings, Pacific Grove, CA, Aug. 13–18, pp. 76–87.
Schmidt, R. , and Iyengar, M. , 2007, “ Best Practices for Data Center Thermal and Energy Management: Review of Literature,” ASHRAE Trans., 113(1), pp. 206–218.
Nagarathinam, S. , Fakhim, B. , Behnia, M. , and Armfield, S. , 2013, “ A Comparison of Parametric and Multivariable Optimization Techniques in a Raised-Floor Data Center,” ASME J. Electron. Packag., 135(3), p. 030905. [CrossRef]
Sorell, V. , Escalante, S. , and Yang, J. , 2005, “ Comparison of Overhead and Underfloor Air Delivery Systems in a Data Center Environment Using CFD Modeling,” ASHRAE Trans., 111(2), pp. 756–764.
Iyengar, M. , Schmidt, R. , Sharma, A. , McVicker, G. , Shrivastava, S. , Sri-Jayantha, S. , Amemiya, Y. , Dang, H. , Chainer, T. , and Sammakia, B. , 2005, “ Thermal Characterization of Non-Raised Floor Air Cooled Data Centers Using Numerical Modeling,” ASME Paper No. IPACK2005-73387.
Demetriou, D. W. , and Khalifa, H. E. , 2011, “ Evaluation of a Data Center Recirculation Non-Uniformity Metric Using Computational Fluid Dynamics,” ASME Paper No. IPACK2011-52005.
Alkharabsheh, S. , Sammakia, B. , Shrivastava, S. , and Schmidt, R. , 2013, “ Utilizing Practical Fan Curves in CFD Modeling of a Data Center,” IEEE Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM), San Jose, CA, Mar. 17–21, pp. 211–215.
Alkharabsheh, S. , Sammakia, B. , Shrivastava, S. , Ellsworth, M. , David, M. , and Schmidt, R. , 2013, “ Numerical Steady State and Dynamic Study Using Calibrated Fan Curves for CRAC Units and Servers,” ASME Paper No. IPACK2013-73217.
Alkharabsheh, S. , Sammakia, B. , Shrivastava, S. , and Schmidt, R. , 2013, “ A Numerical Study for Contained Cold Aisle Data Center Using CRAC and Server Calibrated Fan Curves,” ASME Paper No. IMECE2013-65145.
Alkharabsheh, S. , Sammakia, B. , and Shrivastava, S. , 2015, “ Experimentally Validated CFD Model for a Data Center With Cold Aisle Containment,” ASME J. Electron. Packag., 137(2), p. 021010. [CrossRef]
Bash, C. E. , Patel, C. D. , and Sharma, R. K. , 2006, “ Dynamic Thermal Management of Air Cooled Data Centers,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM '06), San Diego, CA, May 30–June 2, pp. 445–452.
Kummert, M. , Dempster, W. M. , and McLean, K. , 2009, “ Transient Thermal Analysis of a Data Centre Cooling System Under Fault Conditions,” 11th International Building Performance Simulation Association Conference and Exhibition, Glasgow, UK, July 27–30.
Gondipalli, S. , Ibrahim, M. , Bhopte, S. , Sammakia, B. , Murray, B. , Ghose, K. , Iyengar, M. , and Schmidt, R. , 2010, “ Numerical Modeling of Data Center With Transient Boundary Conditions,” 12th IEEE Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM), Las Vegas, NV, June 2–5.
Beitelmal, A. H. , and Patel, C. D. , 2007, “ Thermo-Fluids Provisioning of a High Performance High Density Data Center,” Distrib. Parallel Databases, 21(2–3), pp. 227–238. [CrossRef]
Sharma, R. K. , Bash, C. E. , Patel, C. D. , Friedrich, R. J. , and Chase, J. S. , 2005, “ Balance of Power: Dynamic Thermal Management for Internet Data Centers,” Internet Comput., 9(1), pp. 42–49. [CrossRef]
Patel, C. , Bash, C. , Sharma, R. , Beitelmal, M. , and Friedrich, R. , 2003, “ Smart Cooling of Data Centers,” ASME Paper No. IPACK2003-35059.
Khankari, K. , 2010, “ Thermal Mass Availability for Cooling Data Centers During Power Shutdown,” ASHRAE Trans., 116(Pt. 2), pp. 205–217.
Khankari, K. , 2011, “ Rate of Heating Analysis of Data Centers During Power Shutdown,” ASHRAE Trans., 117(Pt. 1), pp. 212–221.
Sundaralingam, V. , Isaacs, S. , Kumar, P. , and Joshi, Y. , 2011, “ Modeling Thermal Mass of a Data Center Validated With Actual Data Due to Chiller Failure,” ASME Paper No. IMECE2011-65573.
Ibrahim, M. , Shrivastava, S. , Sammakia, B. , and Ghose, K. , 2012, “ Thermal Mass Characterization for a Server at Different Fan Speeds,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), San Diego, CA, May 30–June 1, pp. 457–465.
Erden, H. S. , Khalifa, H. E. , and Schmidt, R. R. , 2013, “ Transient Thermal Response of Servers Through Air Temperature Measurements,” ASME Paper No. IPACK2013-73281.
Erden, H. S. , Khalifa, H. E. , and Schmidt, R. R. , 2014, “ Determination of the Lumped-Capacitance Parameters of Air-Cooled Servers Through Air Temperature Measurements,” ASME J. Electron. Packag., 136(3), p. 031005. [CrossRef]
Alkharabsheh, S. , Sammakia, B. , Shrivastava, S. , and Schmidt, R. , 2014, " Implementing Rack Thermal Capacity in a Room Level CFD Model of a Data Center," IEEE Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM), San Jose, CA, Mar. 9–13, pp. 188–192.
Shrivastava, S. , and Ibrahim, M. , 2013, “ Benefit of Cold Aisle Containment During Cooling Failure,” ASME Paper No. IPACK2013-73219.
Schmidt, R. , 2004, “ Thermal Profile of a High-Density Data Center: Methodology to Thermally Characterize a Data Center,” ASHRAE Trans., 110(2), pp. 635–642.
Schmidt, R. , and Iyengar, M. , 2005, “ Effect of Data Center Layout on Rack Inlet Air Temperatures,” ASME Paper No. IPACK2005-73385.
Karlsson, J. F. , and Moshfegh, B. , 2005, “ Investigation of Indoor Climate and Power Usage in a Data Center,” Energy Build., 37(10), pp. 1075–1083. [CrossRef]
Boucher, T. D. , Auslander, D. M. , Bash, C. E. , Federspiel, C. C. , and Patel, C. D. , 2005, “ Viability of Dynamic Cooling Control in a Data Center Environment,” ASME J. Electron. Packag., 128(2), pp. 137–144. [CrossRef]
Beitelmal, M. H. , Wang, Z. , Felix, C. , Bash, C. , Hoover, C. , and McReynolds, A. , 2009, “ Local Cooling Control of Data Centers With Adaptive Vent Tiles,” ASME Paper No. InterPACK2009-89035.
Chen, K. , Auslander, D. M. , Bash, C. E. , and Patel, C. D. , 2006, “ Local Temperature Control in Data Center Cooling: Part I, Correlation Matrix,” HP Enterprise Software and Systems Laboratory, Report No. HPL-2006-42.
Chen, K. , Bash, C. E. , Auslander, D. M. , and Patel, C. D. , 2006, “ Local Temperature Control in Data Center Cooling: Part II, Statistical Analysis,” HP Enterprise Software and Systems Laboratory, Report No. HPL-2006-43.
Abdelmaksoud, W. A. , Dang, T. Q. , Khalifa, H. E. , and Schmidt, R. R. , 2013, “ Improved Computational Fluid Dynamics Model for Open-Aisle Air-Cooled Data Center Simulations,” ASME J. Electron. Packag., 135(3), p. 030901. [CrossRef]
Arghode, V. K. , and Joshi, Y. , 2015, “ Experimental Investigation of Air Flow Through a Perforated Tile in a Raised Floor Data Center,” ASME J. Electron. Packag., 137(1), p. 011011. [CrossRef]
Bhopte, S. , Sammakia, B. , Iyengar, M. , and Schmidt, R. , 2007, “ Experimental Investigation of the Impact of Under Floor Blockages on Flow Distribution in a Data Center Cell,” ASME Paper No. IPACK2007-33540.
VanGilder, J. W. , Pardey, Z. M. , Zhang, X. , and Healey, C. , 2013, " Experimental Measurement of Server Thermal Effectiveness for Compact Transient Data Center Model," ASME Paper No. IPACK2013-73155.
Iyengar, M. , Schmidt, R. , Hamann, H. , and VanGilder, J. , 2007, " Comparison Between Numerical and Experimental Temperature Distributions in a Small Data Center Test Cell," ASME Paper No. IPACK2007-33508.
Fakhim, B. , Behnia, M. , Armfield, S. W. , and Srinarayana, N. , 2011, “ Cooling Solutions in an Operational Data Centre: A Case Study,” Appl. Therm. Eng., 31(14–15), pp. 2279–2291. [CrossRef]
Arghode, V. K. , and Joshi, Y. , 2014, “ Room Level Modeling of Air Flow in a Contained Data Center Aisle,” ASME J. Electron. Packag., 136(1), p. 011011. [CrossRef]
Simons, R. , Moran, K. , Antonetti, V. , and Chu, R. , 1982, “ Thermal Design of the IBM 3081 Computer,” National Electronic Packaging and Production Conference, Anaheim, CA, Feb. 23–25, pp. 124–141.
Chu, R. , Hwang, U. , and Simons, R. , 1982, “ Conduction Cooling for an LSI Package: A One Dimensional Approach,” IBM J. Res. Dev., 26(1), pp. 45–54. [CrossRef]
Hwang, U. , and Moran, K. , 1990, “ Cold Plate for IBM Thermal Conduction Module Electronic Modules,” Heat Transfer in Electronic and Microelectronic Equipment, Vol. 29, A. E. Bergles, ed., Hemisphere, New York, pp. 495–508.
Delia, D. , Gilgert, T. , Graham, N. , Hwang, U. , Ing, P. , Kan, J. , Kemink, R. , Maling, G. , Martin, R. , Moran, K. , Reyes, J. , Schmidt, R. , and Steinbrecher, R. , 1992, “ System Cooling Design for the Water-Cooled IBM Enterprise System/9000 Processors,” IBM J. Res. Dev., 36(4), pp. 791–803. [CrossRef]
Lei, N. , Skandakumaran, P. , and Ortega, A. , 2006, “ Experiments and Modeling of Multilayer Copper Minichannel Heat Sinks in Single-Phase Flow,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM '06), San Diego, CA, May 30–June 2, pp. 9–18.
Dede, E. , 2014, “ Single-Phase Microchannel Cold Plate for Hybrid Vehicle Electronics,” IEEE Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM), San Jose, CA, Mar. 9–13, pp. 118–124.
Iyengar, M. , David, M. , Parida, P. , Kamath, V. , Kochuparambil, B. , Graybill, D. , Schultz, M. , Gaynes, M. , Simons, R. , Schmidt, R. , and Chainer, T. , 2012, " Server Liquid Cooling With Chiller-Less Data Center Design to Enable Significant Energy Savings," IEEE Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM), San Jose, CA, Mar. 18–22, pp. 212–223.
David, M. , Iyengar, M. , Parida, P. , Simons, R. , Schultz, M. , Gaynes, M. , Schmidt, R. , and Chainer, T. , 2012, “ Experimental Characterization of an Energy Efficient Chiller-Less Data Center Test Facility With Warm Water Cooled Servers,” 28th Annual IEEE Semiconductor Thermal and Measurement and Management Symposium (SEMI-THERM), San Jose, CA, Mar. 18–22, pp. 232–237.
Parida, P. , David, M. , Iyengar, M. , Schultz, M. , Gaynes, M. , Kamath, V. , Kochuparambil, B. , and Chainer, T. , 2012, “ Experimental Investigation of Water Cooled Server Microprocessors and Memory Devices in an Energy Efficient Chiller-Less Data Center,” 28th Annual IEEE Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM), San Jose, CA, Mar. 18–22, pp. 224–231.
Iyengar, M. , David, M. , Parida, P. , Kamath, V. , Kochuparambil, B. , Graybill, D. , Schultz, M. , Gaynes, M. , Simons, R. , Schmidt, R. , and Chainer, T. , 2012, “ Extreme Energy Efficiency Using Water Cooled Server Inside a Chiller-Less Data Center,” 13th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), San Diego, CA, May 30–June 1, pp. 137–149.
David, M. , Iyengar, M. , Parida, P. , Simons, R. , Schultz, M. , Gaynes, M. , Schmidt, R. , and Chainer, T. , 2012, “ Impact of Operating Conditions on a Chiller-Less Data Center Test Facility With Liquid Cooled Servers,” 13th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), San Diego, CA, May 30–June 1, pp. 562–573.
Eiland, R. , Fernandes, J. , Vallejo, M. , Agonafer, D. , and Mulay, V. , 2014, “ Flow Rate and Inlet Temperature Considerations for Direct Immersion of a Single Server in Mineral Oil,” IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), Orlando, FL, May 27–30, pp. 706–714.
Tuma, P. , 2010, “ The Merits of Open Bath Immersion Cooling of Datacom Equipment,” IEEE Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM), Santa Clara, CA, Feb. 21–25, pp. 123–131.
Patterson, M. K. , and Fenwick, D. , 2008, “ The State of Data Center Cooling: A Review of Current Air and Liquid Cooling Solutions,” Intel, Digital Enterprise Group, White Paper.
Blough, B. , ed., 2011, “ Qualitative Analysis of Cooling Architectures for Data Centers,” The Green Grid, Beaverton, OR, Report No. 30.
Niemann, J. , 2008, “ Hot Aisle vs. Cold Aisle Containment,” American Power Conversion, West Kingston, RI, White Paper No. 135.
Niemann, J. , Brown, K. , and Avelar, V. , 2010, “ Hot-Aisle vs. Cold-Aisle Containment for Data Centers,” American Power Conversion, West Kingston, RI, White Paper No. 135, rev. 1.
Tozer, R. , and Salim, M. , 2010, “ Data Center Air Management Metrics-Practical Approach,” 12th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM), Las Vegas, NV, June 2–5, pp. 1–8.
Gondipalli, S. , Sammakia, B. , Bhopte, S. , Schmidt, R. , Iyengar, M. , and Murray, B. , 2009, “ Optimization of Cold Aisle Isolation Designs for a Data Center With Roofs and Doors Using Slits,” ASME Paper No. InterPACK2009-89203.
Emerson Network Power, 2010, “ Combining Cold Aisle Containment With Intelligent Control to Optimize Data Center Cooling Efficiency,” Emerson Network Power, Columbus, OH, White Paper.
Pervila, M. , and Kangasharju, J. , 2011, “ Cold Air Containment,” 2nd ACM SIGCOMM Workshop on Green Networking (GreenNets '11), Toronto, ON, Canada, Aug. 15–19, pp. 7–12.
Schmidt, R. , Vallury, A. , and Iyengar, M. , 2011, “ Energy Savings Through Hot and Cold Aisle Containment Configurations for Air Cooled Servers in Data Centers,” ASME Paper No. IPACK2011-52206.
Shrivastava, S. K. , Calder, A. R. , and Ibrahim, M. , 2012, “ Quantitative Comparison of Air Containment Systems,” 13th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), San Diego, CA, May 30–June 1, pp. 68–77.
Demetriou, D. W. , and Khalifa, H. E. , 2011, " Energy Modeling of Air-Cooled Data Centers—Part I: The Optimization of Enclosed Aisle Configurations," ASME Paper No. IPACK2011-52003.
Xu, Y. , Gao, Z. , and Deng, Y. , 2013, “ Analyzing the Cooling Behavior of Hot and Cold Aisle Containment in Data Centers,” 4th IEEE International Conference on Emerging Intelligent Data and Web Technologies (EIDWT), Xi'an, China, Sept. 9–11, pp. 685–689.
Sundaralingam, V. , Arghode, V. K. , Joshi, Y. , and Phelps, W. , 2015, “ Experimental Characterization of Various Cold Aisle Containment Configurations for Data Centers,” ASME J. Electron. Packag., 137(1), p. 011007. [CrossRef]
Muralidharan, B. , Ibrahim, M. , Shrivastava, S. , Alkharabsheh, S. , and Sammakia, B. , 2013, " Advantages of Cold Aisle Containment (CAC) System and Its Leakage Quantification," ASME Paper No. IPACK2013-73201.
Kennedy, J. , 2012, “ Ramification of Server Airflow Leakage in Data Centers With Aisle Containment,” Tate Access Floors, Jessup, MD, White Paper.
Alkharabsheh, S. A. , Muralidharan, B. , Ibrahim, M. , Shrivastava, S. , and Sammakia, B. , 2013, “ Open and Contained Cold Aisle Experimentally Validated CFD Model Implementing CRAC and Servers Fan Curves on a Testing Data Center,” ASME Paper No. IPACK2013-73214.
Alkharabsheh, S. A. , Shrivastava, S. K. , and Sammakia, B. G. , 2013, “ Effect of Containment System Perforation on Data Center Flow Rates and Temperatures,” ASME Paper No. IPACK2013-73216.
Gebrehiwot, B. , Dhiman, N. , Rajagopalan, K. , Agonafer, D. , Kannan, N. , Hoverson, J. , and Kaler, M. , 2013, “ CFD Modeling of Indirect/Direct Evaporative Cooling Unit for Modular Data Center Applications,” ASME Paper No. IPACK2013-73302.
Vasani, R. , and Agonafer, D. , 2014, “ Numerical Simulation of Fogging in a Square Duct—A Data Center Perspective,” 30th Annual IEEE Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM), San Jose, CA, Mar. 9–13, pp. 45–52.
Seger, D. , and Solberg, A. , “ Economizer Performance: Applying CFD Modeling to the Data Center's Exterior,” SearchDataCenter.com, accessed October 26, 2014, http://searchdatacenter.techtarget.com/tip/Economizer-performance-Applying-CFD-modeling-to-the-data-centers-exterior
Beaty, D. L. , 2004, “ Liquid Cooling: Friend or Foe,” ASHRAE Trans., 110(2), pp. 643–652.
Ellsworth, M. J. , Campbell, L. A. , Simons, R. E. , Iyengar, M. , and Schmidt, R. R. , 2008, “ The Evolution of Water Cooling for IBM Large Server Systems: Back to the Future,” 11th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM 2008), Lake Buena Vista, FL, May 28–31, pp. 266–274.
ASHRAE TC 9.9, 2011, “ Thermal Guidelines for Liquid Cooled Data Processing Environments,” American Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE), White Paper.
Heydari, A. , and Sabounchi, P. , 2004, “ Refrigeration Assisted Spot Cooling of a High Heat Density Data Center,” Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM '04), Las Vegas, NV, June 1–4, pp. 601–606.
Mulay, V. , Agonafer, D. , and Schmidt, R. , 2008, “ Liquid Cooling for Thermal Management of Data Centers,” ASME Paper No. IMECE2008-68743.
Schmidt, R. , and Iyengar, M. , 2009, “ Server Rack Rear Door Heat Exchanger and the New ASHRAE Recommended Environmental Guidelines,” ASME Paper No. InterPACK2009-89212.
Tsukamoto, T. , Takayoshi, J. , Schmidt, R. , and Iyengar, M. , 2009, “ Refrigeration Heat Exchanger Systems for Server Rack Cooling in Data Centers,” ASME Paper No. InterPACK2009-89258.
Iyengar, M. , Schmidt, R. , Kamath, V. , and Kochuparambil, B. , 2011, “ Experimental Characterization of Server Rack Energy Use at Elevated Ambient Temperatures,” ASME Paper No. IPACK2011-52207.
Iyengar, M. , Schmidt, R. , and Caricari, J. , 2010, “ Reducing Energy Usage in Data Centers Through Control of Room Air Conditioning Units,” 12th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), Las Vegas, NV, June 2–5, pp. 1–11.
Fernandes, J. , Ghalambor, S. , Agonafer, D. , Kamath, V. , and Schmidt, R. , 2012, “ Multi-Design Variable Optimization for a Fixed Pumping Power of a Water-Cooled Cold Plate for High Power Electronics Applications,” 13th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), San Diego, CA, May 30–June 1, pp. 684–692.
Goth, G. , Arvelo, A. , Eagle, J. , Ellsworth, M. , Marston, K. , Sinha, A. , and Zitz, J. , 2012, “ Thermal and Mechanical Analysis and Design of the IBM Power 775 Water Cooled Supercomputing Central Electronics Complex,” 13th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronics Systems (ITHERM), San Diego, CA, May 30–June 1, pp. 700–709.
Brunschwiler, T. , Rothuizen, H. , Paredes, S. , Michel, B. , Colgan, E. , and Bezama, P. , 2009, “ Hotspot-Adapted Cold Plates to Maximize System Efficiency,” 15th IEEE International Workshop on Thermal Investigations of ICs and Systems (THERMINIC 2009), Leuven, Belgium, Oct. 7–9, pp. 150–156.

Figures

Fig. 1  Power usage effectiveness (PUE) survey. More than 55% of the surveyed sample exhibits a PUE greater than 1.8. Data adapted from the Uptime Institute Data Center Industry Survey [7].
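For readers unfamiliar with the metric in Fig. 1, PUE is the standard ratio of total facility power to the power delivered to the IT equipment (the notation below is generic and illustrative, not specific to the survey of Ref. [7]):

\mathrm{PUE} = \frac{P_{\text{total facility}}}{P_{\text{IT}}} = \frac{P_{\text{IT}} + P_{\text{cooling}} + P_{\text{power delivery}} + P_{\text{lighting}}}{P_{\text{IT}}} \;\ge\; 1 .

For example, at a PUE of 1.8, a facility with a 1 MW IT load draws about 1.8 MW in total, i.e., roughly 0.8 MW is consumed by cooling and other supporting infrastructure rather than by computation.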

Fig. 2  Multiscale thermal systems. Heat is generated at the chip level inside the IT equipment and is transferred through subsystems at multiple scales: chip level, server level, rack level, and data center room level [16].

Fig. 3  Most common data center cooling scheme. The raised floor forms a plenum for the cold air supplied by the cooling units (CRAC/CRAH). Cold air enters the space above the raised floor through perforated tiles. Server racks house the IT equipment and provide the structure needed for cooling through front and rear perforated doors (Photo courtesy of 42U Data Center).

Fig. 4  (a) Modular data center layout, (b) effect of plenum height on the airflow distribution, and (c) effect of tile open area ratio on the airflow distribution. Increasing the plenum depth and decreasing the tile open area enhance the uniformity of the flow through the tiles [35].
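The trend in Fig. 4 can be rationalized with a simplified flow-network argument (a sketch under common assumptions, not the specific model of Ref. [35]): each perforated tile is treated as a flow resistance with a quadratic pressure drop, fed by a plenum whose static pressure varies spatially,

\Delta p_{\text{tile}} \approx K(\sigma)\,\tfrac{1}{2}\rho V_{\text{tile}}^{2}, \qquad \frac{\delta V_{\text{tile}}}{V_{\text{tile}}} \approx \frac{1}{2}\,\frac{\delta p_{\text{plenum}}}{\Delta p_{\text{tile}}},

where \sigma is the tile open area ratio and the loss coefficient K(\sigma) grows rapidly as \sigma decreases. A deeper plenum reduces the plenum pressure variation \delta p_{\text{plenum}}, while a smaller open area increases \Delta p_{\text{tile}}; both shrink the relative tile-to-tile flow variation, consistent with the uniformity trends shown in the figure.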

Fig. 5  (a) Rack and server CFD model, (b) temperature contours for 100 W dissipated power, (c) velocity vectors showing the air recirculation inside the rack, and (d) rack with internal blockages to prevent recirculation [50].

Fig. 6  (a) Temperature contours for the baseline model and (b) temperature contours for the optimized model. The inlet temperatures are reduced and the hot spots become less prominent by changing the plenum depth, cold aisle location, and room height [66].

Fig. 7  (a) Modular data center used for room level analysis and (b) inlet temperature response of the room level model showing the effect of server heat capacity (HC). The server HC has a significant impact on the transient response and must be included in transient simulations for accurate estimation of the thermal time constant [57].
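The role of server heat capacity in Fig. 7 can be illustrated with a minimal lumped-capacitance sketch (an illustrative first-order form that assumes the exhaust air leaves near the lumped server temperature; it is not the specific compact model of Ref. [57]):

(Mc)_{\text{server}}\,\frac{dT_{s}}{dt} = Q_{\text{IT}} - \dot{m}\,c_{p}\,\bigl(T_{s} - T_{\text{in}}\bigr), \qquad \tau \approx \frac{(Mc)_{\text{server}}}{\dot{m}\,c_{p}},

where (Mc)_{\text{server}} is the effective server heat capacity, \dot{m} the air mass flow rate through the server, and T_{s} the lumped server temperature. A larger heat capacity or a lower flow rate lengthens the time constant \tau, which is why neglecting HC overpredicts how quickly temperatures respond during transient events such as cooling failures.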

Fig. 8  (a) Experimental setup showing the hot-wire anemometer probe used to measure air velocity, (b) tested tiles, and (c) measurements and CFD simulations for tile C with symmetric 25% perforation. The experimental results were used to develop a numerical model that accurately predicts the velocity downstream of a tile in CFD simulations [38].

Fig. 9  (a) Sectioned view of a multipass branching microchannel cold plate and (b) schematic of the experimental setup. This cold plate design shows good thermal characteristics, but it requires advanced diffusion bonding techniques, which can be challenging for high-volume production [112].

Fig. 10  (a) Cold aisle containment (CAC) and (b) hot aisle containment (HAC). Containment systems reduce hot air recirculation and enhance inlet temperature uniformity, which leads to energy savings (Photo courtesy of 42U Data Center).

Fig. 11  (a) Research data center layout, (b) cold aisle configurations, and (c) experimental results at the rack inlet for the overprovisioned cold aisle case. It is recommended to overprovision CACs and, when full containment is not an option, to prefer a ceiling-only containment system over a doors-only system [132].

Fig. 12  (a) Schematic of the detailed CAC model, (b) schematic of the detailed rack model, (c) validation results for the CFD model of the CAC, and (d) the impact of leakage at high rack elevations. Detailed modeling of the CAC panels and calibration of the pressure drops in the cooling units and servers are important for accurate CAC simulations. Small overprovisioning does not prevent leakage [78].

Fig. 13  CFD model of an indirect/direct evaporative cooling unit. High face velocity affects the life and performance of the air filters; improvements to the airflow distribution should address this challenge [137].

Fig. 14  Rear door heat exchanger for cooling rack exhaust air: (a) schematic side view, (b) rack-mounted example, and (c) data center application. The hot exhaust air passes through the heat exchanger and is cooled before it recirculates into the cold aisle [141].

Fig. 15  (a) Cold plate geometry chosen for optimization, with the design variables used as inputs, namely, the serpentine channel width (not highlighted) and height (indicated as middle thickness), and the influence of these parameters on (b) weight and (c) thermal performance of the cooling solution [149].

Tables

Table 1  Summary of the CFD modeling efforts in data centers (CACs: cold aisle containment systems)
Table 2  Summary of experimental measurements in data centers (TCMs: thermal conduction modules)
Table 3  Summary of recent thermal management technologies in data centers
