Climate change is beginning to have an enormously negative effect on technology. Intense heat waves, wildfires, and droughts are becoming increasingly common, and supercomputers have apparently become part of the collateral damage.
The 2018 California wildfire known as the Camp Fire is one such incident. Following a savage drought, it burned 620 square kilometers of land, reduced several towns nearly to ashes, and killed at least 85 people. The National Energy Research Scientific Computing Center (NERSC), a supercomputer facility operated by Lawrence Berkeley National Laboratory (LBNL), felt the effects of the disaster even though it was nearly 230 kilometers from the flames.
The facility typically relies on outside air to help cool its hot electronics. But smoke and soot from the fire forced engineers to cool recirculated indoor air instead, which drove up humidity levels.
“That’s when we discovered, ‘Wow, this is a real event,’” says Norm Bourassa, an energy performance engineer at NERSC, which serves about 3,000 users a year in fields ranging from cosmology to advanced materials.
A year later, hot, dry weather took a toll again: California utilities cut NERSC’s power for fear that winds near LBNL would blow trees into power lines and spark new fires.
Although NERSC has backup generators, many machines were shut down for days, Bourassa says. The costly effects of climate-driven wildfires and storms are intensifying, and that is waking up managers at high-performance computing (HPC) facilities. HPC centers, which include both supercomputers and data centers, are vulnerable because of their heavy cooling demands and massive appetite for energy.
Natalie Bates, chair of an HPC energy-efficiency working group set up by Lawrence Livermore National Laboratory (LLNL), points out, “Weather extremes are making the design and location of supercomputers far more difficult.”
Climate change brings not only heat but also increased humidity, which reduces the efficiency of the evaporative coolers many HPC centers rely on. During a second fire, NERSC discovered that humidity can also threaten the supercomputers themselves: while the facility was recirculating interior air, condensation inside the server racks caused a blowout in one cabinet, Bourassa says.
NERSC now plans to install power-hungry chiller units, similar to air conditioners, for its next supercomputer, set to open in 2026. These units will both cool and dehumidify outside air.
Relocating supercomputers extremely costly
The cost of such adaptations is, however, pushing some HPC facilities to consider moving to cooler, drier climates in places such as Canada and Finland.
“We can’t build in some locations going forward, it just doesn’t make sense, we need to move north,” says Nicolas Dubé, chief technologist for Hewlett Packard Enterprise’s HPC division.
Some HPC facilities, however, find themselves stuck. The supercomputers at LLNL, for instance, are used to simulate nuclear weapons explosions, so its California site must remain highly secure, and chief engineer Anna-Maria Bailey says the cost of relocating specialized personnel could be prohibitive.
LLNL is instead studying the possibility of moving its computers underground. “Humidity and temperature control would be a lot easier in something like a wine cave,” she adds.
Tackling the problem of climate change
Sometimes, running away from climate change can be futile as well. The National Center for Atmospheric Research opened a supercomputer site in Cheyenne, Wyoming, in 2012 to take advantage of the cool, dry air there. However, climate change has since brought longer and wetter thunderstorm seasons, hampering evaporative cooling.
In response, the Wyoming center added a backup chiller. “Now you have to build your infrastructure to meet the worst possible conditions, and that’s expensive,” Bates says.
Electricity is the lifeblood of HPC facilities, which can consume up to 100 megawatts of power, as much as a medium-size town, and climate change is threatening that supply as well.
Hotter temperatures also increase power demands from other users. When air-conditioning use surged during California’s heat wave this summer, LLNL’s utility told the facility to prepare for power cuts of 2 to 8 megawatts. Bailey says it was the first time the laboratory had been asked to prepare for involuntary cuts, although in the end they did not happen.
Many HPC centers are also heavy users of water, which they pipe around components to carry away heat. That water will grow more scarce in the western United States as droughts persist or worsen. Los Alamos National Laboratory (LANL) in New Mexico invested in water-treatment facilities ten years ago, says Jason Hick, a program manager there. The investment allows its supercomputers to use reclaimed wastewater rather than more precious municipal water.
Climate change and its effects on technology and engineering
Droughts and rising temperatures may be the biggest threats in some regions, but the RIKEN HPC facility in Kobe, Japan, must contend with power outages, as global warming is expected to intensify storms. In 2018, flooding at a high-voltage substation cut RIKEN’s power for more than 45 hours. In a similar incident this year, a lightning strike on a power line knocked the facility out for about 15 hours.
Fumiyoshi Shoji, who directs operations and computer technologies at the center, says its 200 projects span fields such as materials science and nuclear fusion. “If our system were unavailable, those research projects would stall,” he adds.
Bates concludes that supercomputers will need to be built in ways that allow them to cut performance, and with it their need for cooling and power, during bouts of bad weather. “We’re still building race cars,” she says, “but we’re building them with a throttle.”
Source: Greekreporter.com