Thermal conductivity and interfacial thermal conductance play crucial roles in the design of engineering systems where temperature and thermal stress are of concern. To date, a variety of measurement techniques are available for both bulk and thin film solid-state materials over a broad temperature range. For thermal characterization of bulk materials, the steady-state method, the transient hot-wire method, the laser flash diffusivity method, and the transient plane source (TPS) method are the most commonly used. For thin film measurements, the 3*ω* method and the transient thermoreflectance technique, including both time-domain and frequency-domain analysis, are widely employed. This work reviews several of the most commonly used measurement techniques. In general, it is a very challenging task to determine thermal conductivity and interfacial thermal conductance with less than 5% error. Selecting a specific measurement technique to characterize thermal properties should be based on: (1) knowledge of the sample whose thermophysical properties are to be determined, including the sample geometry and size and the material preparation method; (2) understanding of the fundamentals and procedures of the testing technique, since some techniques are limited to samples with specific geometries and some to a specific range of thermophysical properties; and (3) understanding of the potential error sources that might affect the final results, for example, convection and radiation heat losses.

# Measurement Techniques for Thermal Conductivity and Interfacial Thermal Conductance of Bulk and Thin Film Materials

**Dongliang Zhao**, **Xin Qian**, **Xiaokun Gu**, **Saad Ayub Jajja**, **Ronggui Yang**

^{1}Corresponding author.

Contributed by the Electronic and Photonic Packaging Division of ASME for publication in the JOURNAL OF ELECTRONIC PACKAGING. Manuscript received May 26, 2016; final manuscript received August 30, 2016; published online October 6, 2016. Assoc. Editor: Mehdi Asheghi.

*J. Electron. Packag.* **138**(4), 040802 (Oct 06, 2016) (19 pages) Paper No: EP-16-1067; doi: 10.1115/1.4034605 History: Received May 26, 2016; Revised August 30, 2016

## Abstract

Thermal conductivity (denoted as $k$, $\kappa$, or $\lambda$) measures the heat conducting capability of a material. As shown in Fig. 1(a), it can be defined as the thermal energy (heat) $Q$ transmitted through a length or thickness $L$ in the direction normal to a surface area $A$, under a steady-state temperature difference $T_h - T_c$. Thermal conductivity of a solid-phase material can span several orders of magnitude at room temperature, from ∼0.015 $\mathrm{W/m\,K}$ for aerogels at the low end to ∼2000 $\mathrm{W/m\,K}$ for diamond and ∼3000 $\mathrm{W/m\,K}$ for single-layer graphene at the high end. Thermal conductivity of a material is also temperature-dependent and can be direction-dependent (anisotropic). Interfacial thermal conductance (denoted as $K$ or $G$) is defined as the ratio of heat flux to the temperature drop across the interface of two components. For bulk materials, the temperature drop across an interface is primarily due to the roughness of the surfaces, because it is generally impossible to have "atomically smooth contact" at the interface, as shown in Fig. 1(b). Interfacial thermal conductance of bulk materials is affected by several factors, such as surface roughness, surface hardness, impurities and cleanliness, the thermal conductivity of the mating solids, and the contact pressure [1]. For thin films, the temperature drop across an interface can be attributed to the bonding strength and the material difference. Note that thermal contact resistance and thermal boundary resistance (or Kapitza resistance [2]) are usually used to describe the heat conduction capability of an interface in bulk materials and thin films, respectively. Interfacial thermal conductance is simply the inverse of the thermal contact/boundary resistance. Knowledge of thermal conductivity and interfacial thermal conductance, and their variation with temperature, is critical for the design of thermal systems.
In this paper, we review measurement techniques for characterizing thermal conductivity and interfacial thermal conductance of solid-state materials in both bulk and thin film forms.

Extensive efforts have been made since the 1950s for the characterization of thermal conductivity and thermal contact resistance in bulk materials [3–8]. Table 1 summarizes some of the most commonly used measurement techniques, which in general can be divided into two categories: steady-state methods and transient methods. The steady-state methods measure thermal properties by establishing a temperature difference that does not change with time, while transient techniques usually measure the time-dependent energy dissipation process of a sample. Each technique has its own advantages and limitations and is suitable for only a limited range of materials, depending on the thermal properties, sample configuration, and measurement temperature. Section 2 is devoted to comparing some of these measurement techniques as applied to bulk materials.

Thin films of many solid materials, with thicknesses ranging from several nanometers to hundreds of microns, have been extensively used in engineering systems to improve mechanical, optical, electrical, and thermal functionality, including microelectronics [9], photonics [10], optical coatings [11], solar cells, and thermoelectrics [12]. Thin film materials can be bonded to a substrate (Fig. 1(c)), free-standing, or part of a multilayer stack. When the thickness of a thin film is smaller than the mean free path of its heat carriers (electrons or phonons, depending on whether the material is electrically conducting), the thermal conductivity of the film is reduced compared to its bulk counterpart because of the geometric constraints. Thermal conductivity of thin films is usually thickness-dependent and anisotropic: the heat conducting capability in the direction perpendicular to the film plane (cross-plane) can be very different from that parallel to the film plane (in-plane), as shown in Fig. 1(c). The thermal conductivity of thin films also depends strongly on the material preparation (processing) method and on the substrate on which the film sits. The conventional thermal conductivity measurement techniques for bulk materials usually involve heaters and sensors that are too large to resolve the temperature drop and the heat flux across a length scale ranging from a few nanometers to tens of microns. For example, the smallest beads of commercial thermocouples have a diameter of around 25 $\mu m$, which could be much larger than the thicknesses of most electronic thin films.

Significant progress has also been made in characterizing the thermal conductivity and thermal boundary resistance of thin films over the past 30 years, owing to the vibrant research in micro- and nanoscale heat transfer [13–19]. Section 3 reviews a few measurement techniques for thin films, including the steady-state methods, the 3*ω* method, and the transient thermoreflectance technique in both the time domain (TDTR) and the frequency domain (FDTR), as summarized in Table 1. We note that some techniques (e.g., 3*ω*, TDTR, and FDTR) are actually very versatile and can be applied to the thermal characterization of both bulk and thin film materials, although for convenience the techniques reviewed here have been divided into bulk and thin film categories.

In steady-state measurements, the thermal conductivity and interfacial thermal conductance are determined by measuring the temperature difference $\Delta T$ over a known separation (distance) under a steady-state heat flow $Q$ through the sample. Figure 2 shows schematics of four commonly adopted steady-state methods: the absolute technique, the comparative cut bar technique, the radial heat flow method, and the parallel thermal conductance technique.

The absolute technique is usually used for samples that have a rectangular or cylindrical shape. When conducting this measurement, the test sample is placed between a heat source and a heat sink as shown in Fig. 2(a). The sample is heated by the heat source with a known steady-state power input, and the resulting temperature drop $\Delta T$ across a given length (separation) of the sample is measured by temperature sensors after a steady-state temperature distribution is established. The temperature sensors employed can be thermocouples or thermistors. Thermocouples are the most widely used sensors due to their wide range of applicability and accuracy. The resulting measurement error in $\Delta T$ due to the temperature sensors should be less than 1% [20]. Thermal conductivity $k$ of the sample can be calculated using Fourier's law of heat conduction

$$k = \frac{Q L}{A\,\Delta T} \qquad (1)$$

$$Q = P - Q_{loss} \qquad (2)$$

where $Q$ is the amount of heat flowing through the sample, $A$ is the cross-sectional area of the sample, $L$ and $\Delta T$ are the distance and temperature difference between the temperature sensors, $P$ is the applied heating power at the heat source side, and $Q_{loss}$ represents the parasitic heat losses due to radiation, conduction, and convection to the ambient.
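As a numerical illustration of this heat-flow bookkeeping, the sketch below computes $k$ from a hypothetical set of readings (all values are illustrative, not from the paper):

```python
# Hypothetical absolute-technique readings (illustrative values only)
P_heater = 5.0   # applied heater power, W
Q_loss = 0.08    # estimated parasitic losses, W (kept under 2% of P_heater)
A = 1.0e-4       # sample cross-sectional area, m^2 (10 mm x 10 mm)
L = 0.02         # separation between temperature sensors, m
dT = 4.0         # measured steady-state temperature difference, K

Q = P_heater - Q_loss   # heat actually conducted through the sample, W
k = Q * L / (A * dT)    # Fourier's law
print(f"k = {k:.1f} W/m K")
```

Note that the 2% bound on $Q_{loss}$ enters directly as a bound on the error in $k$, since $Q$ appears linearly in the result.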

The major challenge of the absolute technique is to determine the heat flow rate $Q$ through the sample in the presence of the parasitic heat losses $Q_{loss}$ and to measure the temperature difference $\Delta T$ accurately. Parasitic heat losses include convection and radiation to the surroundings and conduction through the thermocouple wires. In general, parasitic heat losses should be kept below 2% of the total heat flow through the sample. To minimize convection and radiation heat losses, most measurements are conducted under vacuum with radiation shields [21]. Beyond convection and radiation, another concern is heat conduction through the thermocouple wires. It is therefore preferable to use thermocouples with a small wire diameter (e.g., 0.001 in. [3]) and low thermal conductivity wires (e.g., chromel–constantan). Also, to minimize the conduction heat loss, a differential thermocouple with only two wires coming off the sample can be applied to obtain the temperature difference $\Delta T$ directly [3]. A typical test apparatus for the absolute technique is the guarded-hot-plate apparatus. ASTM C177 [20], European Standard EN 12667 [22], and International Standard ISO 8302 [23] provide more details about the apparatus and testing procedure. The major drawbacks of the absolute technique are: (1) the test sample should be relatively large (centimeter scale or larger) and machinable into a standard circular or rectangular shape when thermocouples are used for measuring temperatures, and (2) the test usually requires a long waiting time, up to a few hours, to reach steady-state. Resistance temperature detectors (RTDs) [24,25] and infrared (IR) thermography [26] are often employed for temperature sensing when testing small samples (micron scale or smaller) with the absolute technique.

The absolute technique has also been applied to measure the thermal contact resistance between two components, as shown in Fig. 3(a). The test samples are pressed together under controllable contact pressures. Several thermocouples (usually four or six) are placed inside the two mating samples at uniform intervals (separations) to measure the local temperature in response to an applied heating power. Once steady-state heat flow is achieved, temperatures are recorded and plotted (denoted by solid dots) in the temperature versus distance curves as depicted in Fig. 3(b). Temperatures at both sides of the interface (i.e., $T_h$ and $T_c$, denoted by hollow circles) can be deduced by assuming that the temperature distribution on each side is linear. The thermal contact resistance is then calculated as the temperature drop (i.e., $T_h - T_c$) divided by the total heat flow across the interface. In order to obtain an accurate thermal contact resistance, the temperature drop across the interface should be kept relatively large (e.g., >2 °C [27]) through control of the applied heating power.
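The linear-extrapolation step can be sketched as follows; the sensor positions, temperatures, heat flow, and area are hypothetical, and the contact resistance here is reported per unit area:

```python
def line_fit(xs, ys):
    """Least-squares slope and intercept (pure Python, no libraries needed)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical sensor positions (m) and temperatures (deg C), three per bar
x_hot, T_hot = [0.005, 0.015, 0.025], [80.0, 76.0, 72.0]
x_cold, T_cold = [0.035, 0.045, 0.055], [60.0, 56.0, 52.0]
x_int, Q, A = 0.030, 10.0, 1.0e-3  # interface plane (m), heat flow (W), area (m^2)

# Extrapolate each linear temperature profile to the interface plane
s_h, b_h = line_fit(x_hot, T_hot)
s_c, b_c = line_fit(x_cold, T_cold)
T_h = s_h * x_int + b_h   # hot-side interface temperature
T_c = s_c * x_int + b_c   # cold-side interface temperature

R_c = (T_h - T_c) * A / Q  # thermal contact resistance, m^2 K/W
G = 1.0 / R_c              # interfacial thermal conductance, W/m^2 K
```

With these numbers the extrapolated interface drop is 8 °C, comfortably above the >2 °C guideline quoted in the text.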

In addition to bare contacts, thermal interface materials (TIMs) are usually employed to reduce thermal contact resistance. Commonly used TIMs include pressurized gases (e.g., helium and hydrogen), thermal greases (e.g., silicone oil and glycerin), thermal adhesives, thermally conductive pads, polymer matrices composited with highly conductive particulates (such as silver and ceramic fillers), phase-change materials, and solders [28]. The one-dimensional steady-state testing method is widely used for characterizing the thermal conductivity of TIMs, as well as the thermal contact resistance [29]. The testing instrument is similar to Fig. 3(a), except that the two test samples are replaced by two metal blocks (usually copper) with known thermal conductivity, and the TIM to be investigated is inserted in between as shown in Fig. 3(c).

The biggest challenge in the absolute technique is to accurately determine the heat flow through the sample. However, if a standard material with known thermal conductivity is available, the comparative cut bar technique can be applied and a direct measurement of heat flow becomes unnecessary. Figure 2(b) shows the measurement configuration of the comparative cut bar technique, which is similar to that of the absolute method. At least two temperature sensors should be employed on each bar, and extra sensors can be used for confirming the linearity of temperature versus distance along the column. The selection of temperature sensors depends on the system size, temperature range, meter bars, specimen, and gas within the system [30], while thermocouples are the most widely employed temperature sensors. Since the heat flow through the standard material equals the heat flow through the sample, the thermal conductivity of the sample is given by

$$k_s = k_{ref}\,\frac{\Delta T_{ref}}{\Delta T_s}\,\frac{L_s}{L_{ref}}\,\frac{A_{ref}}{A_s} \qquad (3)$$

where the subscripts $s$ and $ref$ denote the sample and the standard (reference) material, respectively, and $L$ is the separation over which the temperature difference $\Delta T$ is measured on each bar.

With a standard material of known thermal conductivity, the thermal conductivity of the sample can thus be extracted without any heat flow measurement, as shown in Eq. (3), and the associated error due to heat flow measurement is eliminated. However, effort is still needed to ensure equal heat flow through the standard material and the test specimen. This technique achieves the best accuracy when the thermal conductivity of the sample is comparable to that of the standard material [3]. It is the most widely used method for axial heat flow thermal conductivity testing. ASTM E1225 [30] gives the experimental requirements and procedure for the comparative cut bar technique.
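Under the simplifying assumption of equal cross-sections on the two bars, the Eq. (3)-style extraction reduces to a ratio of temperature gradients; the numbers below are hypothetical:

```python
# Hypothetical comparative cut bar readings (equal cross-sections assumed)
k_ref = 400.0             # standard bar (copper) thermal conductivity, W/m K
dT_ref, L_ref = 2.0, 0.02  # temperature drop (K) over sensor span (m), standard
dT_s, L_s = 8.0, 0.02      # temperature drop (K) over sensor span (m), specimen

# Equal heat flow: k_ref * dT_ref / L_ref = k_s * dT_s / L_s
k_s = k_ref * (dT_ref / L_ref) * (L_s / dT_s)
```

The larger temperature drop across the specimen for the same heat flow signals its lower conductivity, which is why accuracy is best when the two conductivities are comparable.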

Another type of comparative technique is the heat flow meter method, in which a heat flux transducer essentially replaces the standard material of the comparative cut bar method. With appropriate calibration of the heat flux transducer against a specimen of known thermal conductivity, the thermal conductivity of the measurement sample can then be determined from Fourier's law of heat conduction using the measured heat flux. This method is usually used to characterize low thermal conductivity materials, such as building insulation materials. ASTM C518 [31], ASTM E1530 [32], and European Standard EN 12667 [22] specify the test apparatus and calibration method for the heat flow meter method.

The two steady-state methods described above use a longitudinal arrangement of samples to measure thermal conductivity, which is satisfactory at low temperatures. However, for measurements at very high temperatures (e.g., >1000 K), radiation heat loss from the heater and sample surfaces is not negligible and can cause large uncertainties in quantifying the heat flow through the sample. To overcome this, samples with cylindrical geometry are used in the radial heat flow method. Flynn described the evolution of the apparatus used for measuring thermal conductivity with this technique [33]. ASTM C335 [34] and ISO 8497 [35] cover the relevant measurement requirements and testing procedure for this method. The sample is heated internally at the axis, and the heat flows radially outward as depicted in Fig. 2(c), establishing a steady-state temperature gradient in the radial direction. Thermocouples are used predominantly for temperature sensing in the radial heat flow method, with an accuracy within ±0.1 °C [34]. The thermal conductivity is derived from Fourier's law of heat conduction in cylindrical coordinates

$$k = \frac{Q \ln(r_2/r_1)}{2\pi H\,\Delta T} \qquad (4)$$

where $r_1$ and $r_2$ are the radii at which the two temperature sensors are positioned, $H$ is the sample height, and $\Delta T$ is the temperature difference between the temperature sensors.
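In code, the cylindrical-coordinate form of Fourier's law is a single expression; the dimensions below are hypothetical:

```python
import math

# Hypothetical radial heat flow measurement
Q = 50.0             # heater power at the axis, W
r1, r2 = 0.01, 0.02  # radii of the inner and outer temperature sensors, m
H = 0.2              # sample height, m
dT = 10.0            # temperature difference between the two sensors, K

# Radial form of Fourier's law: k = Q ln(r2/r1) / (2 pi H dT)
k = Q * math.log(r2 / r1) / (2.0 * math.pi * H * dT)
```

Note that only the ratio $r_2/r_1$ enters, which relaxes the requirement on absolute sensor placement.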

Characterization of small bulk materials with sizes in the millimeter scale is very challenging because temperature sensing by thermocouples and heat flux measurement are extremely difficult. The parallel thermal conductance technique was introduced by Tritt and coworkers [36] for small needlelike samples (e.g., 2.0 × 0.05 × 0.1 mm^{3} [36] and 10 × 1 × 1 mm^{3} [37]). Figure 2(d) shows the typical experimental configuration, which is a variation of the absolute technique for samples that cannot support heaters and thermocouples. A sample holder or stage is used between the heat source and the heat sink. A differential thermocouple is positioned with one junction between the hot side and the post and the other junction between the cold side and the post. Before measuring the thermal conductivity of the specimen, a thermal conductance measurement of the bare sample holder is performed to quantify the thermal losses associated with the holder. The test sample is then attached to the sample holder, and the thermal conductance is measured again. The thermal conductance of the sample is deduced by taking the difference of these two measurements, and the thermal conductivity is then calculated from the thermal conductance by multiplying by the sample length and dividing by the sample cross-sectional area. The major drawback of this method is the requirement to measure the cross-sectional area of such small samples; inaccuracies in this measurement can lead to large uncertainties in the calculated thermal conductivity.
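The two-step subtraction can be sketched in a few lines; the conductances and sample dimensions below are hypothetical:

```python
# Hypothetical parallel-thermal-conductance measurements
K_holder = 2.0e-4     # conductance of the bare sample holder, W/K
K_total = 5.0e-4      # conductance with the sample attached, W/K
L = 2.0e-3            # sample length, m
A = 0.05e-3 * 0.1e-3  # cross-sectional area, m^2 (50 um x 100 um needle)

K_sample = K_total - K_holder  # conductance of the sample alone, W/K
k = K_sample * L / A           # thermal conductivity, W/m K
```

A 10% error in the tiny cross-section $A$ propagates directly into a 10% error in $k$, which is exactly the drawback noted above.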

In order to overcome the drawbacks associated with the steady-state methods described previously, such as parasitic heat losses, contact resistance of temperature sensors, and long waiting time for establishing steady-state temperature difference, a variety of transient techniques have been developed. The heat sources used in transient techniques are supplied either periodically or as a pulse, resulting in periodic (phase signal output) or transient (amplitude signal output) temperature changes in the sample, respectively. This section focuses on the four commonly used transient techniques, namely, pulsed power technique, hot-wire method, transient plane source (TPS) method, and laser flash thermal diffusivity method.

The pulsed power technique was first introduced by Maldonado to measure both thermal conductivity and thermoelectric power [38]. This technique is a derivative of the absolute steady-state technique, with the difference that a periodic electrical heating current $I(t)$ is used. In terms of the heating method, it is in principle very close to the Ångström method [39,40], but differs in that the heat sink temperature varies slowly during the measurement. Figure 4(a) shows the schematic of a typical setup for the pulsed power technique. The sample (usually of cylindrical or rectangular geometry) is held between a heat source and a heat sink. The heating current can be either a square wave of constant-amplitude current or a sinusoidal wave [41]. During the experiment, a periodic electric current with a period of $2\tau$ is applied to the heat source while the temperature of the heat sink bath $T_c$ drifts slowly. A small temperature difference $\Delta T = T_h - T_c$ (usually ∼0.3 K) is created between the heat source and the heat sink, which can be measured by a calibrated Au–Fe chromel thermocouple [38]. The heat balance between the power dissipated by the heater and the heat conducted through the sample is given as

$$C(T_h)\,\frac{dT_h}{dt} = R(T_h)\,I^2(t) - K\!\left(\frac{T_h+T_c}{2}\right)\left(T_h - T_c\right) \qquad (5)$$

It is possible to obtain the thermal conductance $K$ as a function of temperature from the measured temperature $T_h$. However, Eq. (5) is nonlinear and difficult to solve analytically. Therefore, $C(T_c)$, $R(T_c)$, and $K(T_c)$ are usually substituted for $C(T_h)$, $R(T_h)$, and $K((T_h+T_c)/2)$ to linearize the equation. This assumption holds because the temperature difference between $T_c$ and $T_h$ (i.e., $\Delta T$) is very small (Fig. 4(b)). Also, $T_c$ can be considered constant since it drifts very slowly compared to the period of the current. The final solution has an oscillating sawtoothlike shape as shown in Fig. 4(b). Smooth curves (i.e., the two dashed lines) are drawn through the maxima and minima of the oscillations. The separation between the two dashed smooth curves $\Delta T_{pp}$ yields a relation for the thermal conductance of the measured sample [38]

$$\Delta T_{pp} = \frac{R I_0^2}{K}\tanh\!\left(\frac{K\tau}{2C}\right) \qquad (6)$$

where $\tau$ is the half period of the heating current, $C$ is the heat capacity, $R$ is the electrical resistance of the heater, and $I_0$ is the amplitude of the electric current.

Numerical iteration can be applied to solve for the thermal conductance $K$ in Eq. (6), since all other parameters are known as functions of temperature. This technique is capable of measurements over a wide temperature range, from 1.9 to 390 K as reported in the literature [42–44], and of ultralow thermal conductivities, down to 0.004 $\mathrm{W/m\,K}$ for ZrW_{2}O_{8} at 2 K [43]. The measurement uncertainty reported by Maldonado is less than 3% [38].
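Solving Eq. (6) for $K$ by fixed-point iteration can be sketched as below; the square-wave tanh form and all numerical values are assumptions for illustration only:

```python
import math

# Hypothetical pulsed-power data (square-wave heating, tanh relation assumed)
R = 50.0         # heater electrical resistance, ohm
I0 = 0.01        # current amplitude, A
C = 0.1          # heat capacity at the heater side, J/K
tau = 10.0       # half period of the heating current, s
dT_pp = 0.23106  # measured peak-to-peak envelope separation, K

# Rearranged fixed point: K = (R*I0**2 / dT_pp) * tanh(K*tau / (2*C))
K = R * I0**2 / dT_pp  # initial guess, taking tanh(...) ~ 1
for _ in range(500):
    K = (R * I0**2 / dT_pp) * math.tanh(K * tau / (2.0 * C))
```

The iteration is a contraction here (the slope of the right-hand side at the root is below one), so it converges without a dedicated solver; a bisection would be a more robust alternative.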

The hot-wire method is a transient technique that measures the temperature rise at a known distance from a linear heat source (i.e., a hot wire, usually platinum or tantalum) embedded in the test sample. Stalhane and Pyk employed this method in 1931 to measure the thermal conductivity of solids and powders [45]. The method assumes idealized one-dimensional radial heat flow inside an isotropic and homogeneous test sample, based on the assumption that the linear heat source has infinite length and infinitesimal diameter, as shown in Fig. 5. When an electric current of constant intensity passes through the hot wire, the thermal conductivity of the test sample can be derived from the resulting temperature change at a known distance from the hot wire over a known time interval. The hot-wire method is commonly used to measure low thermal conductivity materials, such as soils [46], ice cores [47], and refractories (refractory brick, refractory fibers, plastic refractories, and powdered materials [48]). It has also been commonly used for measuring the thermal conductivity of liquids. ASTM C1113 and ISO 8894 specify more details on the apparatus and test procedure for measuring refractories with the hot-wire method [48,49].

As the hot wire produces a thermal pulse for a finite time with constant heating power, it generates isothermal lines in an infinite homogeneous medium initially at thermal equilibrium. The transient temperature rise at a distance $r$ from the wire, for sufficiently long times from the start of heating, can be expressed with good approximation by [50]

$$\Delta T(r,t) = \frac{q}{4\pi k}\left[\ln\!\left(\frac{4\alpha t}{r^2}\right) - \gamma + \frac{r^2}{4\alpha t} - \cdots\right] \qquad (7)$$

where $q$ is the heating power per unit length of the wire, $\alpha$ is the thermal diffusivity of the sample, and $\gamma \approx 0.5772$ is the Euler constant.

For sufficiently large values of the time $t$, the term $r^2/4\alpha t$ inside the brackets is negligible because it is far less than one. The above equation can then be simplified to

$$\Delta T(r,t) = \frac{q}{4\pi k}\left[\ln\!\left(\frac{4\alpha t}{r^2}\right) - \gamma\right] \qquad (8)$$

The temperature rise at a point in the test sample from time $t_1$ to $t_2$ is given by

$$\Delta T = \frac{q}{4\pi k}\,\ln\!\left(\frac{t_2}{t_1}\right) \qquad (9)$$

Thermal conductivity is then obtained from the slope of the temperature rise $\Delta T$ versus the natural logarithm of time $\ln(t)$

$$k = \frac{q}{4\pi}\left[\frac{d(\Delta T)}{d\ln t}\right]^{-1} \qquad (10)$$

It should be noted that when $r$ in Eq. (8) equals zero, the wire acts both as the line source heater and as a resistance thermometer. Today's hot-wire instruments allow more than 1000 data readings of the transient temperature rise, from times less than 1 ms up to 1 s (or 10 s, in the case of solids), coupled with finite-element methods to establish a very low uncertainty [51]. If applied properly, the method can achieve uncertainties below 1% for gases, liquids, and solids, and below 2% for nanofluids [51]. Despite these advantages, there are very few commercial hot-wire instruments [52], possibly because of the delicacy of the very thin wire, which snaps easily.
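A two-point version of the $\Delta T$-versus-$\ln t$ slope extraction is only a few lines; the readings below are synthetic, and $q$ (heating power per unit wire length) is an assumed value:

```python
import math

q = 10.0                     # heating power per unit wire length, W/m (assumed)
t1, t2 = 0.1, 1.0            # two sampling times, s
dT1, dT2 = 2.50000, 4.33236  # temperature rises at t1 and t2, K (synthetic)

# The slope of dT vs ln(t) equals q/(4 pi k), so two readings suffice:
k = q * math.log(t2 / t1) / (4.0 * math.pi * (dT2 - dT1))
```

In practice an instrument fits the slope over hundreds of readings rather than two, which averages out noise in the individual temperature samples.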

The needle-probe method, also for testing isotropic and homogeneous materials, is a variation of the hot-wire method. Its working principle is the same as that of the hot-wire method, but the temperature sensing is based on a zero-radius system (i.e., $r$ = 0 in Eq. (8)). The heating wire and the temperature sensor (thermocouple) are encapsulated in a probe that electrically insulates them from the test sample, and the probe also protects the heating wire. This configuration is particularly practical where thermal conductivity is determined by a probe inserted into the test sample; the method is therefore conveniently applied to powderlike materials and soils. A needle-probe device can be used to measure sample thermal properties in situ, but most commonly a temperature-controlled furnace is used to produce the base temperatures for the measurements [53]. ASTM D5930 [54] and ASTM D5334 [55] standardize the test procedure and data analysis for the needle-probe method.

The TPS method (i.e., the hot disk method) uses a thin metal strip or disk as both a continuous plane heat source and a temperature sensor, as depicted in Fig. 6(a). The metal disk is first sealed with electrical insulation and then sandwiched between two identical thin slab-shaped test samples; all other surfaces of the samples are thermally insulated. During the experiment, a small constant current is applied to the metal disk to heat it up. Since the temperature increase of the metal disk depends strongly on the two test samples attached to it, the thermal properties of the samples can be determined by monitoring the temperature increase over a short time period. This period is generally only a few seconds, so that the metal disk can be considered to be in contact with infinitely large samples throughout the transient recording. The temperature increase at the sensor surface $\Delta T$ (e.g., 1–3 °C [57]) is measured as a function of time, with a typical sensor (resistance thermometer) accuracy of ±0.01 °C [57]. Fitting Eqs. (11) and (12) to the measured $\Delta T$ then yields the inverse of the thermal conductivity $1/k$ [58]

$$\Delta T(\varphi) = \frac{P_0}{\pi^{3/2}\, r\, k}\, D(\varphi) \qquad (11)$$

$$\varphi = \sqrt{\frac{t}{\Theta}}, \qquad \Theta = \frac{r^2}{\alpha} \qquad (12)$$

Here, $P_0$ is the total heating power, $\alpha$ is the thermal diffusivity of the sample, and $\Theta$ is the characteristic time of the measurement.

where $r$ is the sensor radius and $D(\varphi)$ is a dimensionless theoretical expression of the time-dependent temperature increase that describes the heat conduction of the sensor (Fig. 6(b)).

The TPS method has been reported to be capable of measuring materials with thermal conductivities ranging from 0.005 to 500 $\mathrm{W/m\,K}$, over temperatures from cryogenic to 500 K, including liquids, aerogels, and solids [59–62]. ASTM D7984 [57] and ISO 22007-2 [63] specify the test apparatus and procedure for the TPS method. One drawback of the TPS measurement is that each of the two sample pieces needs one entirely planar side, which is difficult for some materials, especially powders or granules [64]. The TPS measurement errors come from several sources: (1) thermal contact resistance between the sensor and the test samples, (2) the thermal inertia of the sensor, (3) the influence of the heat capacity of the electrical insulation films on the measured power input, and (4) the change of the electrical resistance of the metallic disk sensor. Model corrections for these errors during data analysis are necessary to improve measurement accuracy; for example, when the sensor's electrical resistance change with temperature is taken into account, Eq. (11) needs to be revised accordingly [62]. Readers can refer to Refs. [59,60,62] for more information.

Thermal contact resistance of the sensor is a major source of error in contact temperature measurement. The laser flash method avoids this by employing noncontact, nondestructive temperature sensing to achieve high accuracy [65]. The method was first introduced by Parker et al. [66]. It uses optical heating as an instantaneous heating source, along with a thermographic technique for quick, noninvasive temperature sensing. The test sample is usually a solid planar-shaped material when measuring thermal conductivity, and a multilayer structure when characterizing thermal contact resistance. A typical measurement configuration for the laser flash method is depicted in Fig. 7(a). An instantaneous light source uniformly heats the sample's front side, and a detector measures the time-dependent temperature rise on the rear side. Heat conduction is assumed to be one-dimensional (i.e., no lateral heat loss). The test sample is usually prepared by spraying a layer of graphite on both sides, acting as an absorber on the front side and as an emitter on the rear side for temperature sensing [68]. The rear-side infrared radiation thermometer should respond fast enough to the emitted signal, and the precision of temperature calibration is usually ±0.2 K [53]. The dynamic rear-side temperature response curve (Fig. 7(b)) is used to fit the thermal diffusivity: the higher the thermal diffusivity of the sample, the faster the heat transfer and the rear-side temperature rise.

Theoretically, the temperature rise at the rear side as a function of time can be written as [66]

$$T(d,t) = \frac{Q}{\rho c_p d}\left[1 + 2\sum_{n=1}^{\infty} (-1)^n \exp\!\left(-\frac{n^2\pi^2\alpha t}{d^2}\right)\right] \qquad (14)$$

where $Q$ is the pulse energy absorbed per unit area of the front surface, $\rho$ is the density, $c_p$ is the specific heat, $d$ is the sample thickness, and $\alpha$ is the thermal diffusivity. To simplify Eq. (14), two dimensionless parameters, $W$ and $\eta$, can be defined

$$W = \frac{T(d,t)}{T_{max}} \qquad (15)$$

$$\eta = \frac{\pi^2 \alpha t}{d^2} \qquad (16)$$

where $T_{max}$ denotes the maximum temperature rise at the rear side. The combination of Eqs. (14)–(16) yields

$$W = 1 + 2\sum_{n=1}^{\infty} (-1)^n \exp(-n^2\eta) \qquad (17)$$

When $W$ is equal to 0.5, i.e., when the rear-side temperature reaches one-half of its maximum rise, $\eta$ is equal to 1.38, so the thermal diffusivity $\alpha$ is calculated by [66]

$$\alpha = \frac{1.38\, d^2}{\pi^2\, t_{1/2}} \approx 0.1388\,\frac{d^2}{t_{1/2}} \qquad (18)$$

where $t_{1/2}$ is the time it takes for the rear surface to reach one-half of its maximum temperature rise.

ASTM E1461 [69] and ISO 22007-4 [70] specify the requirements on apparatus, test sample, and procedure for thermal diffusivity measurement by the laser flash method. In addition to the thermal diffusivity $\alpha$ measured by the laser flash method, the material density $\rho$ and specific heat $c_p$ need to be measured in separate experiments to obtain the thermal conductivity through the relationship $k=\alpha \rho c_p$. The laser flash method is capable of measuring thermal conductivity over a wide temperature range (−120 °C to 2800 °C [70]) with a measurement uncertainty reported to be less than 3% [71]. The advantages of the method are not only its speed (usually 1–2 s for most solids) but also its capability to use very small samples, e.g., 5–12 mm in diameter [53]. There are, however, some considerations to keep in mind before carrying out a laser flash measurement. First, the sample heat capacity and density should be known or determined from separate experiments, which may result in a "stack up" of uncertainties and lead to larger errors. Another criticism of the laser flash method is that heating of the sample holder can lead to significant error if not accounted for properly [72]. Although the laser flash method can be used to measure thin films, the thickness of the measured sample is limited by the timescales associated with the heating pulse and the infrared detector. Typical commercial laser flash instruments can measure samples with a thickness of ∼100 *μ*m and above, depending on the thermal diffusivity of the sample. For thin film samples with thicknesses less than 100 *μ*m, one needs to resort to the 3*ω* method or the transient thermoreflectance techniques developed over the past two decades.
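Combining the half-rise relation with $k=\alpha \rho c_p$ takes only a few lines; the flash data below are hypothetical, with density and specific heat assumed to come from separate measurements:

```python
# Hypothetical laser flash data
d = 0.002      # sample thickness, m
t_half = 0.05  # time to reach half of the maximum rear-side rise, s
rho = 2700.0   # density, kg/m^3 (measured separately)
cp = 900.0     # specific heat, J/kg K (measured separately)

alpha = 0.1388 * d**2 / t_half  # thermal diffusivity from the half-rise time
k = alpha * rho * cp            # thermal conductivity, W/m K
```

Because $\rho$ and $c_p$ enter multiplicatively, their relative uncertainties add to that of $\alpha$, which is the "stack up" of errors mentioned above.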

A temperature drop across a thin film sample needs to be created and measured to characterize the cross-plane thermal conductivity. Creating and measuring this temperature drop is extremely challenging when the sample thickness is as small as a few nanometers to tens of microns. Figure 8 shows the schematic of two steady-state measurement configurations that are frequently employed. In both configurations, thin films with thickness $d_f$ are grown or deposited onto a substrate with high thermal conductivity and small surface roughness (e.g., a polished silicon wafer). A metallic strip with length $L$ and width $2a$ ($L \gg 2a$) is then deposited onto the thin film whose thermal conductivity is to be determined. The metallic strip should have a high temperature coefficient of resistance, such as a Cr/Au film. During the experiment, the metallic strip is heated by a direct current (DC) passing through it and serves as both an electrical heater and a sensor that measures its own temperature $T_h$.

The temperature at the top of the film $T_{f,1}$ is generally assumed to be the same as the average heater temperature $T_h$. The most straightforward approach is to use another sensor to directly measure the temperature $T_{f,2}$ at the bottom side of the film (Fig. 8(a)), but this complicates the sample preparation, which usually involves cleanroom microfabrication. The other approach is to use another sensor situated at a known distance away from the heater/sensor to measure the temperature rise of the substrate right underneath it (Fig. 8(b)). A two-dimensional heat conduction model is then used to infer the substrate temperature rise at the heater/sensor location from the measured substrate temperature rise at the sensor location [15].

The major challenge in measuring in-plane thermal conductivity is evaluating the heat flow along the film in the presence of parasitic heat loss through the substrate. To increase measurement accuracy, Volklein et al. [73] concluded that it is desirable to have the product of the thin film in-plane thermal conductivity $k_{f,\parallel}$ and film thickness $d_f$ equal to or greater than the corresponding product for the substrate (i.e., $k_{f,\parallel}d_f \ge k_S d_S$). However, to completely remove parasitic heat loss through the substrate, a suspended structure in which the substrate is removed, as shown in Fig. 9, is desirable, although this complicates the microfabrication required for sample preparation [74].

Figure 9 depicts the schematic of the two steady-state methods for measuring in-plane thermal conductivity along suspended thin films. Figure 9(a) shows the measurement configuration first developed by Volklein and Starz [75,76], where a metallic strip (Cr/Au film) deposited on top of the thin film serves as both an electrical heater and temperature sensor. When a DC passes through the heater/sensor, the temperature rise in the heater/sensor is a function of the heating power, thin film thermal conductivity, ambient temperature, thin film thickness $d_f$, and width $L_f$. The in-plane thermal conductivity can then be deduced from the difference in heater/sensor temperature rise between two measurements using two different thin film widths with all other parameters unchanged [76]. In the other steady-state method, shown in Fig. 9(b), another sensor is used to measure the heat sink temperature, and the thermal conductivity can then be straightforwardly written as

$$k_{f,\parallel} = \frac{Q\,(L_f/2)}{2\,d_f\,(T_{f,1}-T_{f,2})}$$

where $Q$ is the power dissipated in the metallic heater per unit length, half of which flows toward each heat sink; $L_f/2$ is the distance from the heater to the heat sink; $T_{f,1}$ is the thin film temperature right underneath the heater/sensor, which is assumed to be the same as the heater/sensor temperature; and $T_{f,2}$ is the temperature of the thin film edge in contact with the substrate.
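The data reduction for the suspended-film configuration can be sketched in a few lines. The example below assumes a centered heater line, so that half of $Q$ flows toward each heat sink over a distance $L_f/2$; all numbers are hypothetical:

```python
def inplane_k_suspended(q_per_len, film_len, film_thick, dT):
    """In-plane k of a suspended film with a centered line heater.
    Half of Q flows toward each heat sink over a distance L_f/2, so
    k = (Q/2) * (L_f/2) / (d_f * dT), per unit heater length."""
    return (q_per_len / 2.0) * (film_len / 2.0) / (film_thick * dT)

# Hypothetical values: Q = 1 W/m, L_f = 1 mm, d_f = 100 nm, dT = 5 K
k_par = inplane_k_suspended(1.0, 1e-3, 100e-9, 5.0)  # W/(m K)
```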

For measuring the thermal conductivity of an electrically conductive or semiconducting material, an additional electrical insulation layer is needed between the electrical heater/sensor and the thin film for both methods, which significantly complicates the data analysis. To ensure one-dimensional heat conduction inside the thin film, parasitic heat losses must be minimized, including heat conduction loss along the length direction of the heater/sensor and convection and radiation losses to the ambient. Heat conduction along the heater/sensor can be minimized through advanced microfabrication that minimizes its cross-sectional area. To minimize heat loss to the ambient, the measurement is usually carried out under vacuum. Usually a small temperature difference is used to minimize the radiation heat loss, and coating the surface with a low-emissivity material is another option. Nevertheless, the most effective way to deal with radiation heat loss is to use a transient heating (e.g., using alternating current (AC)) and temperature sensing technique, such as the 3*ω* method.

The 3*ω* method has been widely used to measure thermal properties of both bulk materials and thin films since it was first introduced in 1990 by Cahill et al. [77,78]. Figure 10 shows a typical schematic of the 3*ω* measurement. The thin film of interest is grown or deposited on a substrate (e.g., silicon or sapphire [79]). A metallic strip (e.g., aluminum, gold, or platinum) is deposited on top of the substrate or the film-on-substrate stack. Dimensions of the metallic strip are usually half-width $a$ = 10–50 $\mu m$ and length $L$ = 1000–10,000 $\mu m$, so that it can be treated as infinitely long in the mathematical model. The metallic strip serves as both an electrical heater and a temperature sensor, as shown in Fig. 10. An AC at frequency *ω* passes through the heater/sensor, which is expressed as

$$I(t) = I_0\cos(\omega t)$$
where $I_0$ is the current amplitude. This results in Joule heating of the resistive heater/sensor at frequency 2*ω* because of its electrical resistance, and such 2*ω* heating leads to a temperature change of the heater/sensor also at frequency 2*ω*

$$\Delta T = \Delta T_0\cos(2\omega t + \phi)$$
where $\Delta T_0$ is the temperature change amplitude, and $\phi$ is the phase. The temperature change perturbs the heater/sensor's electrical resistance at 2*ω*

$$R_e = R_{e,0}\left[1 + \alpha_R\,\Delta T_0\cos(2\omega t + \phi)\right]$$
where $\alpha_R$ is the temperature coefficient of resistance of the heater/sensor, and $R_{e,0}$ is the heater/sensor's electrical resistance at the initial state. When multiplied by the 1*ω* driving current, a small voltage signal across the heater/sensor at frequency 3*ω* can be detected [80]

$$V = IR_e = I_0R_{e,0}\cos(\omega t) + \frac{I_0R_{e,0}\alpha_R\Delta T_0}{2}\cos(\omega t + \phi) + \frac{I_0R_{e,0}\alpha_R\Delta T_0}{2}\cos(3\omega t + \phi)$$
This change in voltage at frequency 3*ω* (i.e., the third term in Eq. (23)) carries the information about thermal transport within the sample [81]. However, since the 3*ω* voltage signal (amplitude $I_0R_{e,0}\alpha_R\Delta T_0/2$) is very weak, usually about three orders of magnitude smaller than the amplitude of the applied 1*ω* voltage, a lock-in amplifier is usually employed to implement this measurement technique.
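The mixing of the 1*ω* drive current with the 2*ω* resistance oscillation can be verified numerically. The sketch below synthesizes the heater voltage for assumed, purely illustrative values of $I_0$, $R_{e,0}$, $\alpha_R$, $\Delta T_0$, and $\phi$, and checks that its spectrum contains a 3*ω* component of amplitude $I_0R_{e,0}\alpha_R\Delta T_0/2$:

```python
import numpy as np

# Illustrative (assumed) heater and response parameters
I0, Re0, aR, dT0, phi = 1e-3, 50.0, 3e-3, 2.0, -0.3
f = 1000.0                       # 1-omega drive frequency, Hz
fs = 200 * f                     # sampling rate, Hz
t = np.arange(0, 1.0, 1.0/fs)    # one second of signal

current = I0 * np.cos(2*np.pi*f*t)                            # 1-omega drive
resistance = Re0 * (1 + aR*dT0*np.cos(2*np.pi*2*f*t + phi))   # 2-omega ripple
voltage = current * resistance

spec = np.abs(np.fft.rfft(voltage)) * 2 / len(t)   # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(t), d=1.0/fs)
amp_3w = spec[np.argmin(np.abs(freqs - 3*f))]      # amplitude at 3-omega
```

The recovered `amp_3w` matches the analytic amplitude $I_0R_{e,0}\alpha_R\Delta T_0/2$, about three orders of magnitude below the 1*ω* component, which is why a lock-in amplifier is needed in practice.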

When measuring the thermal conductivity of an electrically conductive material or a semiconducting material, an additional electrical insulation layer is needed between the electrical heater/sensor and the thin film. Depending on the width of the heater, both the cross-plane and the in-plane thermal conductivity of thin films can be measured using the 3*ω* method. Approximate analytical expressions are usually employed to determine the cross-plane and the in-plane thermal conductivity. Borca-Tasciuc et al. presented a general solution for heat conduction across a multilayer-film-on-substrate system [13]. Dames also described a general framework of thermal and electrical transfer functions [81].

For the simplest case, in which the metallic heater/sensor is deposited on an isotropic substrate without a thin film, the heater/sensor can be approximated as a line source if the thermal penetration depth $L_p = \sqrt{\alpha_S/2\omega}$ is much larger than the heater/sensor half-width $a$. By choosing an appropriate frequency for the heating current, the thermal penetration can be localized within the substrate. The temperature rise of the heater/sensor can then be approximated as [13]

$$\Delta T = \frac{p}{\pi L k_S}\left[\frac{1}{2}\ln\frac{\alpha_S}{a^2} + \eta - \frac{1}{2}\ln(2\omega) - \frac{i\pi}{4}\right] = \frac{p}{\pi L k_S}\,f_{\mathrm{linear}}(\ln\omega)$$

where the subscript $S$ denotes the substrate, $\eta$ is a constant, $i = \sqrt{-1}$, $k$ is the thermal conductivity, $p/L$ is the peak electrical power per unit length, and $f_{\mathrm{linear}}$ is a linear function of $\ln\omega$. The isotropic thermal conductivity of the substrate $k_S$ can thus be determined from the slope of the real part of the temperature amplitude plotted as a linear function of the logarithm of frequency $\ln(\omega)$ (i.e., the "slope method"), according to Eq. (24).
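The slope method lends itself to a compact numerical check. The sketch below synthesizes the real part of the heater temperature rise for an assumed substrate (all parameter values are illustrative), fits its slope against $\ln\omega$, and recovers the substrate conductivity from $k_S = -(p/L)/(2\pi \times \mathrm{slope})$:

```python
import numpy as np

# Assumed, illustrative experiment parameters
p_per_L = 20.0          # peak power per unit heater length, W/m
kS_true = 1.4           # substrate thermal conductivity, W/(m K)
alphaS = 8e-7           # substrate thermal diffusivity, m^2/s
a, eta = 15e-6, 0.923   # heater half-width (m) and the constant eta

omega = 2*np.pi*np.logspace(2, 4, 20)   # angular frequencies, rad/s
dT_real = p_per_L/(np.pi*kS_true)*(0.5*np.log(alphaS/a**2)
                                   + eta - 0.5*np.log(2*omega))

# Slope of Re(dT) vs ln(omega) gives kS = -(p/L) / (2*pi*slope)
slope = np.polyfit(np.log(omega), dT_real, 1)[0]
kS_fit = -p_per_L/(2*np.pi*slope)
```

In a real experiment, `dT_real` comes from the measured 3*ω* voltage; only the slope of the linear region is used, so offsets such as $\eta$ drop out.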

With a film-on-substrate sample, one needs to estimate the temperature drop across the thin film to find the cross-plane thermal conductivity $k_{f,\perp}$ (Fig. 11(a)). The temperature at the upper side of the film is usually taken to be equal to the heater/sensor temperature because the contact resistances are typically very small, $10^{-8}$–$10^{-7}\,\mathrm{m^2K/W}$ [82]. The temperature at the bottom side of the film is most commonly calculated from the experimental heat flux and the substrate thermal conductivity $k_S$, which is usually known or can be measured using the 3*ω* method through Eq. (24). Assuming one-dimensional heat conduction across the thin film (Fig. 11(a)), the thermal conductivity of the thin film can be easily determined from

$$k_{f,\perp} = \frac{p\,d_f}{2aL\,\Delta T_f}$$

where $\Delta T_f$ is the temperature drop across the film.
The 3*ω* method has also been extensively used to measure the in-plane thermal conductivity of thin films. In comparison to the cross-plane thermal conductivity measurement, a much narrower heater is used so that the heat transfer process within the film is sensitive to both in-plane and cross-plane thermal conductivity as shown in Fig. 11(b). The half-width $a$ of the heater should be narrow enough to satisfy [81]

where $k_{f,\perp}$ and $k_{f,\parallel}$ are the cross-plane and in-plane thermal conductivities of the thin film, respectively, and $d_f$ is the film thickness. Due to the lateral heat spreading, which is sensitive to the in-plane thermal conductivity, a two-dimensional heat transfer model needs to be used for data reduction. The temperature drop across the thin film is obtained as [13]

Equation (27) gives the temperature drop of the thin film normalized to the value for purely one-dimensional heat conduction through the film, as a function of the in-plane and cross-plane thermal conductivities and the heater/sensor half-width $a$. In practice, $k_{f,\perp}$ is usually measured first using a heater/sensor of much greater width that is only sensitive to the cross-plane thermal conductivity; $k_{f,\parallel}$ is then measured with a much smaller heater/sensor width (see Fig. 12).
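For the wide-heater, cross-plane step of this procedure, the one-dimensional data reduction amounts to dividing the film's thermal resistance into the measured temperature drop. A minimal sketch with hypothetical heater dimensions and film properties (all values assumed for illustration):

```python
def film_cross_plane_k(power, half_width, length, thickness, dT_film):
    """One-dimensional conduction across the film: k = q'' * d_f / dT,
    where q'' = P/(2aL) is the heat flux under the heater."""
    flux = power / (2.0 * half_width * length)   # W/m^2
    return flux * thickness / dT_film            # W/(m K)

# Hypothetical values: P = 30 mW peak power, a = 15 um, L = 2 mm,
# d_f = 200 nm, temperature drop across the film dT = 0.5 K
k_perp = film_cross_plane_k(30e-3, 15e-6, 2e-3, 200e-9, 0.5)
```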

The previously mentioned 3*ω* methods have been limited to samples with thermal conductivity tensors that are either isotropic or have their principal axes aligned with the Cartesian coordinate system defined by the heater line and the sample surface. Recently, Mishra et al. introduced a 3*ω* method that can measure an arbitrarily aligned anisotropic thermal conductivity tensor [83], considering a tensor with finite off-diagonal terms. An exact closed-form solution was found and verified numerically. The authors found that the common slope method yields the determinant of the thermal conductivity tensor, which is invariant upon rotation about the heater's axis. Following the analytic result, an experimental scheme was proposed to isolate the thermal conductivity tensor elements: by using four heater lines, this method can measure all six unknown tensor elements in three dimensions.

A significant advantage of the 3*ω* method over conventional steady-state methods is that the error due to radiation heat loss is greatly reduced. Errors due to thermal radiation scale with a characteristic length of the experimental geometry, and the calculated error of a 3*ω* measurement due to radiation is less than 2% even at a high temperature of 1000 K [84]. The 3*ω* method can be used for measuring dielectric, semiconducting, and electrically conducting thin films. For electrically conducting and semiconducting materials, samples need to be electrically isolated from the metallic heater/sensor with an additional insulating layer [79,85], which introduces extra thermal resistance and inevitably reduces both sensitivity and measurement accuracy. Another challenge is that the 3*ω* method requires microfabrication of the metallic heater/sensor. Optical heating and sensing methods (e.g., the transient thermoreflectance technique), on the other hand, usually require minimal sample preparation.

The transient thermoreflectance technique is a noncontact optical heating and sensing method for measuring the thermal properties (thermal conductivity, heat capacity, and interfacial thermal conductance) of both bulk and thin film materials. Samples are usually coated with a metal thin film (e.g., aluminum or tungsten), referred to as the metallic transducer layer, whose reflectance at the laser wavelength changes with temperature. This makes it possible to detect the thermal response by monitoring the reflectance change. Figure 13 shows the schematic diagrams of the sample configuration for a thin film and a bulk material measured using concentrically focused pump and probe beams. The thermoreflectance technique was primarily developed in the 1970s and 1980s, when continuous wave (CW) light sources were used to heat the sample [86,87]. With the advancement of pico- and femtosecond pulsed lasers after 1980, this technique has been widely used for studying nonequilibrium electron–phonon interaction [88,89], detecting coherent phonon transport [90–92], and probing thermal transport across interfaces [93–95]. Over the last few years, it has been further developed for measuring anisotropic thermal conductivity of thin films [96–98] and probing spectral phonon transport [99–102].

The transient thermoreflectance technique can be implemented as either the time-domain thermoreflectance (TDTR) method [103,104] or the frequency-domain thermoreflectance (FDTR) method [97,105]. The TDTR method measures the thermoreflectance response as a function of the time delay between the arrival of the probe and pump pulses at the sample surface. Figure 14(a) shows a typical experimental system. A Ti–sapphire oscillator is used as the light source, with the wavelength centered at around 800 nm and a repetition rate of 80 MHz. The laser output is split into a pump beam for heating and a probe beam for sensing. Before being focused on the sample, the pump beam is modulated by an acousto-optic modulator (AOM) or electro-optic modulator (EOM) at a frequency from a few kilohertz to a few megahertz. The probe beam passes through a mechanical delay stage such that the temperature response is detected with a delay time (usually a few picoseconds to a few nanoseconds) after the sample is heated by the pump pulse. The thermoreflectance signal is then extracted by a lock-in amplifier. Spatially separating the pump and probe beams, or spectrally screening the pump beam with a filter (the two-color [106] and two-tint [107] methods), prevents scattered light from the modulated pump beam from entering the photodetector.

The other class of thermoreflectance techniques is FDTR, in which the thermoreflectance change is measured as a function of the modulation frequency of the pump beam. FDTR can therefore be easily implemented using the same TDTR system (Fig. 14(a)) by fixing the delay stage at a certain position and varying the modulation frequency. Because the probe delay is fixed, an FDTR system avoids the complexity of beam walk-off and divergence associated with the mechanical delay stage. An FDTR system can also be implemented using less expensive continuous wave (CW) lasers, as shown in Fig. 14(b), which achieves accuracy similar to TDTR for the thermal conductivity of many thin film materials [97,104,105,108]. Similar to TDTR, the pump beam of CW-based FDTR (CW-FDTR) is modulated by an EOM and creates a time-dependent temperature gradient; however, the heating by the pump is continuous in CW-FDTR. The probe beam is focused directly on the sample without passing through a mechanical delay stage, and the thermoreflectance change embedded in the reflected probe beam is likewise extracted using a photodiode and a lock-in amplifier.

An illustration of the TDTR data acquisition is given in Fig. 15. The pump pulses modulated at frequency $\omega_0$ (Fig. 15(a)) heat the sample periodically. The oscillating temperature response of the sample (Fig. 15(b)) is then detected by the probe beam arriving after a delay time $\tau_0$. In this case, the thermoreflectance response $Z$ at modulation frequency $\omega_0$ is expressed as an accumulation of the unit impulse responses $h(t)$ in the time domain

where $V_{in}$ and $V_{out}$ are the real and imaginary parts of the response, usually referred to as the in-phase and out-of-phase signals, respectively; $\tau_d$ is the delay time; $2\pi/\omega_s$ is the time between two successive pulses at the laser repetition frequency $\omega_s$; and $\beta$ is a constant coefficient determined by

where $G_{det}$ is the gain of the photodetector; $P_1$ and $P_2$ are the powers of the pump and probe beams, respectively; $R_1$ and $R_2$ are the reflectivities at the wavelengths of the pump and probe beams, respectively; and $dR_2/dT$ is the thermoreflectance coefficient of the transducer at the probe wavelength. Equivalently, the thermoreflectance response $Z$ can be expressed in the frequency domain

$$Z(\omega_0;\tau_d) = \beta\sum_{l=-\infty}^{\infty} H(\omega_0 + l\omega_s)\,e^{il\omega_s\tau_d}$$
where $H(\omega_0 + l\omega_s)$ is the thermoreflectance response of the sample heated by a continuous Gaussian beam modulated at frequency $(\omega_0 + l\omega_s)$. The single frequency response $H(\omega)$ is determined by thermal properties including the thermal conductivity $k$ and heat capacity $C$ of each layer, and the interfacial thermal conductance $G$ between different layers.

Here, we outline the derivation of the single frequency response function $H(\omega)$; the detailed derivation of the heat transfer model can be found in Refs. [93,104,106]. The heat conduction equation in cylindrical coordinates is written as

$$C\frac{\partial T}{\partial t} = \frac{k_r}{r}\frac{\partial}{\partial r}\left(r\frac{\partial T}{\partial r}\right) + k_z\frac{\partial^2 T}{\partial z^2}$$
where $k_r$ and $k_z$ are the thermal conductivities in the in-plane and cross-plane directions, respectively (see Fig. 16 for the definition of the in-plane and cross-plane directions).

By applying the Fourier transform to the time variable $t$ and the Hankel transform to the radial coordinate $r$, a transfer matrix can be obtained relating the temperature $\theta_t$ and heat flux $F_t$ on the top of a single slab to the temperature $\theta_b$ and heat flux $F_b$ on the bottom side (Fig. 16(a))

$$\begin{pmatrix}\theta_b\\ F_b\end{pmatrix} = \begin{pmatrix}\cosh(qd) & -\dfrac{\sinh(qd)}{k_z q}\\[4pt] -k_z q\sinh(qd) & \cosh(qd)\end{pmatrix}\begin{pmatrix}\theta_t\\ F_t\end{pmatrix}$$
where $d$ is the thickness of the slab, and the complex thermal wavenumber $q = \sqrt{(k_r x^2 + i\omega C)/k_z}$, where $x$ is the Hankel transform variable. Multiple layers can be handled by multiplying the matrices of the individual layers together (Fig. 16(b))

$$\begin{pmatrix}\theta_b\\ F_b\end{pmatrix} = M_n M_{n-1}\cdots M_1\begin{pmatrix}\theta_t\\ F_t\end{pmatrix} = \begin{pmatrix}A & B\\ C & D\end{pmatrix}\begin{pmatrix}\theta_t\\ F_t\end{pmatrix}$$
For interfaces, the transfer matrix can be obtained by taking the heat capacity as zero and choosing $k_z$ and $d$ such that the interfacial thermal conductance $G$ equals $k_z/d$. The boundary condition at the bottom is approximately adiabatic under experimental conditions, hence $F_b = C\theta_t + DF_t = 0$, and the surface temperature can be obtained from

$$\theta_t = -\frac{D}{C}F_t$$
In experiments, if a Gaussian laser beam with radius $w_0$ and power $Q$ is used as the pump, the heat flux $F_t$ after the Hankel transform is $F_t = (Q/2\pi)\exp(-w_0^2x^2/8)$, and Eq. (34) becomes

$$\theta_t(x) = -\frac{D}{C}\,\frac{Q}{2\pi}\exp\left(-\frac{w_0^2x^2}{8}\right)$$
The reflectivity response is then weighted by the Gaussian distribution of the probe beam intensity with radius $w_1$

$$H(\omega) = \frac{Q}{2\pi}\int_0^{\infty}\left(-\frac{D}{C}\right)\exp\left(-\frac{(w_0^2 + w_1^2)x^2}{8}\right)x\,dx$$
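This matrix formalism can be turned into a compact numerical model. The sketch below is a simplified illustration, not the authors' code: it uses a numerically stable impedance recursion that is algebraically equivalent to multiplying the layer transfer matrices, with assumed properties for a 100 nm Al transducer on sapphire and an assumed interface conductance of 200 MW/m²K:

```python
import numpy as np

def surface_impedance(x, omega, layers, G_int):
    """theta_t/F_t at the surface. The tanh-form recursion below is
    algebraically equivalent to the transfer-matrix product but avoids
    cosh/sinh overflow. layers: (kz, kr, C, d) from top to bottom; the
    last layer is treated as semi-infinite (its d is ignored)."""
    kz, kr, C, _ = layers[-1]
    q = np.sqrt((kr*x**2 + 1j*omega*C)/kz)
    Z = 1.0/(kz*q)                          # semi-infinite bottom layer
    for (kz, kr, C, d), G in zip(reversed(layers[:-1]), reversed(G_int)):
        Z = Z + 1.0/G                       # interface: series resistance 1/G
        q = np.sqrt((kr*x**2 + 1j*omega*C)/kz)
        t = np.tanh(q*d)
        Z = (Z + t/(kz*q))/(1 + Z*kz*q*t)   # stack the layer on top
    return Z

def H(omega, layers, G_int, Q=1.0, w0=10e-6, w1=10e-6):
    """Single-frequency response: Hankel integral of the surface
    impedance weighted by the pump and probe Gaussian spots."""
    x = np.linspace(0.0, 10.0/min(w0, w1), 20000)
    integrand = surface_impedance(x, omega, layers, G_int) \
        * np.exp(-(w0**2 + w1**2)*x**2/8) * x
    dx = x[1] - x[0]
    return Q/(2*np.pi)*np.sum(0.5*(integrand[1:] + integrand[:-1]))*dx

# Assumed layer properties (kz, kr, C, d): 100 nm Al on sapphire
layers = [(237.0, 237.0, 2.42e6, 100e-9),
          (35.0, 35.0, 3.1e6, None)]        # semi-infinite substrate
Hw = H(2*np.pi*1e6, layers, G_int=[2e8])    # response at 1 MHz
```

The resulting $H(\omega)$ has a positive real part and a negative imaginary part, reflecting the phase lag of the surface temperature behind the heating; summing it over the laser sidebands as in Eq. (30) gives the lock-in signal.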
We give an example of the TDTR signal detected by the lock-in amplifier in Fig. 17. Thermal properties including the thermal conductivity, heat capacity, and interfacial thermal conductance are encoded in the TDTR signal trace. The in-phase signal $V_{in}$ represents the change of the surface temperature: the peak in $V_{in}$ represents the surface temperature jump right after the pump pulse arrives, and the decaying tail in $V_{in}$ represents the cooling of the surface due to heat dissipation into the sample. The out-of-phase signal can be viewed as the sinusoidal heating of the sample at the modulation frequency $\omega_0$ [109]. When processing the experimental data, the ratio between the in-phase and out-of-phase signals, $-V_{in}/V_{out}$, is fitted with the heat transfer model to extract thermophysical properties. In this fitting process, numerical optimization algorithms (e.g., quasi-Newton [110] and simplex minimization [111]) are used to minimize the squared difference between the experimental data and the heat transfer model, until the change in the fitted thermal properties is smaller than a tolerance (e.g., 1%).

In the case of an FDTR system based on a pulsed laser (Fig. 14(a)), the obtained signal can be fitted with $Z(\omega_0;\tau_d)$ in Eq. (30), where the modulation frequency $\omega_0$ is varied as the independent variable and the delay time $\tau_d$ is fixed. When an FDTR system is implemented using CW lasers, the thermoreflectance signal is instead directly proportional to the single frequency response $H(\omega_0)$ [96,112,113]

Figure 18 shows the calculated phase response $\varphi = \arctan(V_{out}/V_{in})$ of pulsed and CW-based FDTR measurements of 100 nm Al on sapphire, with the modulation frequency ranging from 50 kHz to 20 MHz. A clear phase difference is observed between the pulsed FDTR and the CW-based FDTR, and attention must be paid to adopting the correct solution when processing signals from different FDTR systems. Similar to the TDTR system, the least-squared-error method can be used to extract thermophysical properties.

The versatility of the thermoreflectance technique lies in the fact that multiple thermal properties, including thermal conductivity, interfacial thermal conductance, and heat capacity, can be determined depending on the measurement regime. The interfacial thermal conductance can be determined at long delay times (>2 ns) using TDTR [114,115]. A sensitivity parameter $S_p$ is defined to characterize the dependence of the signal on a given parameter $p$ (interfacial thermal conductance, heat capacity, thermal conductivity, etc.)

$$S_p = \frac{\partial\ln(-V_{in}/V_{out})}{\partial\ln p}$$
The sensitivity parameter describes the scaling between a change in the signal $-V_{in}/V_{out}$ and a change in the parameter $p$. For example, a sensitivity value of 0.4 means that there would be a 0.4% change in the signal if the parameter $p$ were changed by 1%. As shown in Fig. 19, the TDTR signal is dominated by the interfacial thermal conductance at delay times longer than 2 ns. The TDTR technique has therefore been used extensively to study thermal transport mechanisms across interfaces, including the effect of surface chemistry on interfacial thermal conductance across functionalized liquid–solid boundaries [94,116,117] and solid–solid interfaces [118], interfacial thermal conductance between metals and dielectrics [119–123], and interfaces between low-dimensional materials and bulk substrates [124–126].
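The logarithmic-derivative definition of $S_p$ can be implemented generically with a finite difference. The toy model below is purely illustrative (a 1D semi-infinite solid under periodic surface heating, whose amplitude scales as $1/\sqrt{kC\omega}$, rather than the full TDTR ratio), and recovers the expected $S_k = -0.5$ for that scaling:

```python
import numpy as np

def model_signal(k, C, omega):
    """Toy model: surface temperature amplitude of a semi-infinite solid
    under periodic surface heating scales as 1/sqrt(k*C*omega)."""
    return 1.0/np.sqrt(k*C*omega)

def sensitivity(f, p, dp=1e-3):
    """S_p = d ln(signal)/d ln(p), by central logarithmic difference."""
    hi, lo = f(p*(1 + dp)), f(p*(1 - dp))
    return (np.log(hi) - np.log(lo))/(np.log(1 + dp) - np.log(1 - dp))

# Assumed values: k = 140 W/mK, C = 1.6e6 J/(m^3 K), f_mod = 10 MHz
S_k = sensitivity(lambda k: model_signal(k, 1.6e6, 2*np.pi*1e7), 140.0)
```

In practice, `model_signal` would be replaced by the full $-V_{in}/V_{out}$ heat transfer model, and the same finite-difference wrapper yields the sensitivity curves of Fig. 19.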

With interfacial thermal conductance determined, heat capacity and thermal conductivity can then be simultaneously obtained. Liu et al. demonstrated that thermal conductivity and heat capacity can be determined simultaneously for both bulk materials and thin films by performing multiple measurements using different combinations of modulation frequencies and spot sizes [115].

Let us first consider bulk materials to introduce the physical picture on which the simultaneous determination of heat capacity and thermal conductivity is based. Under periodic laser heating, the penetration depth is defined as the characteristic length over which the temperature gradient penetrates into the sample and can be written as $L_{p,\delta} = \sqrt{2k_\delta/\omega_0 C}$, where $\omega_0$ is the modulation frequency, $C$ is the volumetric heat capacity, and the direction index $\delta$ corresponds to in-plane ($\delta = r$) or cross-plane ($\delta = z$). If a very low modulation frequency and a small spot size are used in a TDTR measurement, radial heat transfer dominates the thermal dissipation (Fig. 20(a)), and the signal is determined by the geometric-mean thermal conductivity $\sqrt{k_r k_z}$. This approximation holds when the radial penetration depth $L_{p,r} \gg \sqrt{(w_0^2 + w_1^2)/4}$, where $w_0$ and $w_1$ are the $1/e$ radii of the pump and probe beams. On the other hand, if a large spot size and a high modulation frequency are used, the temperature gradient penetrates only a very thin layer into the sample and cross-plane thermal transport dominates (Fig. 20(b)); the thermal response signal is then determined by the thermal effusivity $\sqrt{k_z C}$. If the material is assumed isotropic, the thermal conductivity can first be determined using a small beam spot at low frequency, and the heat capacity can then be measured using a large beam spot at high frequency.
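These regimes can be checked with a one-line penetration depth estimate. The sketch below uses assumed, silicon-like property values and assumed spot sizes purely for illustration:

```python
import numpy as np

def penetration_depth(k, C, f_mod):
    """L_p = sqrt(2k/(omega_0*C)), with omega_0 = 2*pi*f_mod."""
    return np.sqrt(2.0*k/(2.0*np.pi*f_mod*C))

k, C = 140.0, 1.6e6     # assumed silicon-like k (W/mK) and C (J/m^3 K)
w0 = w1 = 5e-6          # assumed 1/e pump/probe radii, m
spot = np.sqrt((w0**2 + w1**2)/4.0)

Lp_low = penetration_depth(k, C, 1e4)    # low modulation frequency
Lp_high = penetration_depth(k, C, 1e7)   # high modulation frequency
radial_dominated = Lp_low > 10*spot      # radial-spreading regime check
```

At 10 kHz the penetration depth (tens of micrometers here) exceeds the spot size, so radial spreading dominates; at 10 MHz it shrinks to the micrometer scale and the measurement becomes cross-plane dominated.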

Measuring the thermal properties of a thin film is similar to a bulk material, with only slight differences. If a large beam spot modulated at high frequency is used, cross-plane heat transfer dominates. Unlike bulk samples, whose thickness is always much larger than the cross-plane penetration depth, the thickness of a thin film might be comparable to or even smaller than the cross-plane penetration depth $L_{p,z}$. We describe the sample as "thermally thin" if the penetration depth $L_{p,z}$ is still much larger than the film thickness $d$ (Fig. 21(a)); in this case, the thermal response is controlled by the thermal resistance $d/k_z$ and the heat capacity $C$. In the high frequency limit, the penetration depth $L_{p,z}$ becomes so small that the temperature gradient penetrates only a limited depth into the layer (referred to as "thermally thick," see Fig. 21(b)); the thermal response is then analogous to the high frequency limit of the bulk material, dominated only by the thermal effusivity $\sqrt{k_z C}$. If the penetration depth is comparable to the thin film thickness, both the thermal effusivity $\sqrt{k_z C}$ and the diffusivity $k_z/C$ affect the temperature response of the thin film sample. Based on the analysis above, both the cross-plane thermal conductivity and the heat capacity of thin films can be obtained by applying different modulation frequencies (Fig. 22).
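The thermally thin/thick distinction reduces to comparing $L_{p,z}$ with the film thickness. A minimal classifier, with a hypothetical 300 nm oxide-like film and an arbitrary factor-of-3 margin chosen for illustration:

```python
import math

def regime(k_z, C, f_mod, d_film):
    """Classify a film by comparing the cross-plane penetration depth
    L_p = sqrt(2*k_z/(2*pi*f_mod*C)) with the film thickness."""
    Lp = math.sqrt(2.0*k_z/(2.0*math.pi*f_mod*C))
    if Lp > 3.0*d_film:
        return "thermally thin"    # response set by d/k_z and C
    if Lp < d_film/3.0:
        return "thermally thick"   # response set by effusivity sqrt(k_z*C)
    return "intermediate"

# Hypothetical 300 nm film: k_z = 1.4 W/mK, C = 1.6e6 J/(m^3 K)
r_low = regime(1.4, 1.6e6, 1e5, 300e-9)    # low modulation frequency
r_high = regime(1.4, 1.6e6, 1e8, 300e-9)   # very high modulation frequency
```

Sweeping the modulation frequency across these regimes is what allows $k_z$ and $C$ to be separated, as in Fig. 22.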

If the beam is tightly focused onto the thin film sample at a low modulation frequency, heat transfer occurs dominantly in the in-plane radial direction (Fig. 21(c)). In this case, the in-plane thermal conductivity $k_r$ dominates the thermal response of the material. A low thermal conductivity substrate can be used to further improve the sensitivity to the in-plane thermal conductivity of the thin film [96,98,115].

This method of measuring in-plane thermal conductivity using a small spot size, however, assumes cylindrical symmetry of the thermal conductivity in the in-plane direction [93]. For anisotropic thin films lacking in-plane symmetry, the thermal conductivity tensor can be extracted by offsetting the pump beam away from the probe beam [127]. The schematic of implementing beam-offset TDTR to measure the in-plane thermal conductivity tensor of *α*-SiO_{2} is shown in Fig. 23. Instead of measuring the ratio between the in-phase and out-of-phase signals $-V_{in}/V_{out}$, beam-offset TDTR measures the full-width at half-maximum (FWHM) of the out-of-phase signal $V_{out}$ as the pump beam is detuned from the probe beam (Fig. 23(b)). By sweeping the beam in different directions parallel to the thin film plane, beam-offset TDTR directly detects the in-plane penetration length by monitoring the two-dimensional temperature profile in the in-plane direction, thus making it possible to extract anisotropic in-plane thermal conductivity. For example, the thermal conductivity $k_c$ along the *c*-axis of the SiO_{2} can be measured by offsetting the pump beam parallel to the *c*-axis (Fig. 23(a)), with the out-of-phase signal recorded as a function of the beam-offset distance $y_0$, shown as the open circles in Fig. 23(b). This step maps out a Gaussian-shaped out-of-phase signal, and the experimental FWHM is extracted from the Gaussian profile of $V_{out}$. The FWHM can then be calculated as a function of the thermal conductivity $k_c$, shown as the curves in Fig. 23(c), and with the experimentally measured FWHM, the thermal conductivity $k_c$ can be extracted from the FWHM–$k_c$ curve. The FWHM perpendicular to the *c*-axis, also plotted in Fig. 23(d), is nearly independent of the thermal conductivity $k_c$ along the *c*-axis, ensuring that the in-plane thermal conductivity along different directions can be extracted independently by offsetting the pump beam in different directions.
Similarly, the thermal conductivity $k_a$ perpendicular to the *c*-axis can be extracted by offsetting the pump beam along the *x*-direction (Fig. 23(c)). The beam-offset approach thus allows the TDTR technique to extract the thermal conductivity tensor of materials lacking in-plane symmetry. It has also been extended to time-resolved magneto-optic Kerr effect (TR-MOKE) based pump-probe measurements [128] and applied to measure the thermal conductivity tensors of two-dimensional materials such as MoS_{2} [128] and black phosphorus [129,130].
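The FWHM extraction step can be sketched numerically. Below, a synthetic Gaussian out-of-phase profile (with an assumed width; in a real measurement the profile comes from sweeping the pump offset $y_0$) is fitted and its FWHM recovered:

```python
import numpy as np

# Synthetic out-of-phase signal vs pump-probe offset (assumed Gaussian)
sigma_true = 4e-6                          # assumed profile width, m
y0 = np.linspace(-15e-6, 15e-6, 301)
Vout = -np.exp(-y0**2/(2*sigma_true**2))   # out-of-phase signal, a.u.

# ln|Vout| is quadratic in y0; fit in micrometers for good conditioning
y_um = y0*1e6
coef = np.polyfit(y_um, np.log(-Vout), 2)
sigma_um = np.sqrt(-1.0/(2.0*coef[0]))
fwhm = 2.0*np.sqrt(2.0*np.log(2.0))*sigma_um*1e-6   # back to meters
```

The measured FWHM is then compared against model curves computed for different trial conductivities (as in Fig. 23(c)) to read off the in-plane thermal conductivity along the sweep direction.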

The transient thermoreflectance method has been widely applied to explore the thermal properties of novel materials, ranging from hybrid materials [131,132] and fullerene derivatives [133,134] with very low thermal conductivities (≤0.1 $W/m\,K$) to materials with very high thermal conductivity, such as graphene [135]. In addition, the capability of determining multiple thermophysical properties simultaneously makes transient thermoreflectance a versatile technique for characterizing the thermal properties of a wide range of materials, including diamond [136,137], pure and doped Si films [114], disordered layered crystals [138], and superlattices [103,139].