Celebrity cameos and special effects may spark interest, but they can’t overcome the barriers that keep people from purchasing electric vehicles or adopting other clean energy technologies.
A 2020 survey by Consumer Reports cited the cost of new electric cars and limited access to charging stations as the biggest obstacles to the public.
Creating clean energy options that are affordable and accessible to all is possible at the National Renewable Energy Laboratory (NREL), where researchers rely on high-performance computing to transform data into the models and simulations that drive their groundbreaking discoveries.
NREL has its foot on the (electric) pedal to clean up our entire energy economy at unprecedented speed and scale. Nothing, not even a pandemic-induced global supply shortage, can knock the lab off course, thanks to some impressive creative problem-solving by its computational science experts.
A new challenge
Whether it’s creating more efficient and cleaner transportation or developing better buildings, networks, and production and storage of solar, hydro, geothermal, and wind power, the United States Department of Energy (DOE) relies on NREL to address a wide array of energy challenges. Indeed, of all 17 national laboratories, NREL is the only one dedicated exclusively to energy efficiency and renewable energy research for the DOE.
Each of these energy challenges requires the powerful computing capabilities of NREL supercomputers, such as Eagle and the highly anticipated Kestrel, to help researchers quickly identify information and accelerate solutions.
Approximately 85% of NREL’s High Performance Computing (HPC) time is devoted to DOE projects. But in the final months of 2020, the DOE’s Vehicle Technologies Office (VTO) asked NREL to plan for a projected doubling of its computing resource needs by 2022.
A quick fix
NREL’s advanced computing and computational science experts were tasked with a sizable challenge: designing a world-class HPC resource nearly half the size of Eagle that could be operational within a year. That would be an aggressive timeline even in a normal year, let alone one complicated by the global semiconductor shortage and COVID-19 supply chain delays.
However, the resulting machine, aptly named Swift, was completed and became operational in NREL’s Energy Systems Integration Facility (ESIF) last summer. Although Swift physically occupies only one server row in the ESIF, it packs 2 petabytes of storage space and over 28,000 compute cores (for multiple and concurrent processes) across 440 nodes. For context, Facebook relies on 1.5 petabytes to store its users’ 10 billion photos.
In anticipation of future demands, NREL researchers designed Swift with flexibility in mind. That’s why they chose Spack, package-management software from DOE’s Exascale Computing Project, to serve as Swift’s software environment.
“Spack is an international project focused on delivering software that is easily deployable in complex, high-performance computing environments,” said Jon Rood, computational scientist at NREL, who emphasized why Spack makes long-term strategic sense. “Spack’s popularity continues to increase as it evolves to serve system administrators, scientific software developers, and supercomputer end users to provide them with a consistent platform where productivity is key.”
“Additional benefits of Spack include its connection to the Extreme-scale Scientific Software Stack, also known as E4S, where researchers can benefit from predefined software applications and containers that provide some of the most popular scientific software without a long wait between downloading applications and using them to get results,” added Rood.
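To make the Spack workflow Rood describes concrete, here is a minimal, hypothetical command-line sketch of deploying software with Spack. The environment and package names are illustrative examples, not NREL’s actual software stack.

```shell
# Hypothetical Spack session; environment and package names are
# examples only, not NREL's actual configuration.

# Create and activate a named software environment
spack env create vto-sim
spack env activate vto-sim

# Declare the packages the environment needs; Spack resolves and
# builds each one together with its dependencies
spack add hdf5 +mpi
spack add openmpi
spack install

# Make an installed package available in the current shell
spack load hdf5
```

Because Spack records the full dependency specification, the same environment file can be rebuilt consistently on another system, which is part of why it suits complex HPC deployments.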
Swift’s placement in NREL’s ESIF reflects a strategy that intertwines NREL’s advanced computational operations and computational science expertise. NREL’s design and delivery of world-class IT solutions enable rapid data movement and the economics of shared support infrastructure. In 2022 and beyond, the combination of Eagle (or Kestrel) and Swift will provide solid support for the VTO portfolio. In addition, future releases of the software environment, continually optimized by the user and application engagement team, will allow for performance tuning, greater flexibility, and harmonization of resources across NREL’s HPC systems.
Living on the edge
Remember that 15% slice of NREL’s HPC capacity? It is dedicated to NREL’s laboratory research and development and the Technology Partnership Program portfolio, which advances NREL’s vision: a clean energy future for the world. If 15% of computing power doesn’t seem like enough to support such a bold vision, that’s because it isn’t; while NREL researchers built and delivered the Swift solution for DOE, they did the same with Vermillion for their NREL colleagues.
Vermillion enables support for major NREL projects, as well as experimentation on HPC, commercial cloud, and edge computing. Photo by Vern Slocum, NREL
Vermillion is the first phase of a flexible, on-premise cloud resource tailored to major NREL projects such as artificial intelligence (AI) training.
This on-premise cloud computing, known as edge computing, runs close to the original data source instead of in one of dozens of cloud data centers around the world. The latency, or delay, of accessing cloud-based information is what makes edge computing necessary: autonomous vehicles, for example, rely on split-second data access to protect passengers. Other AI-based energy solutions, such as those for smart grids and buildings, also benefit from edge computing.
NREL is a living laboratory: we simulate and test our proposed solutions to see how they might work in our complex and interconnected world. With Vermillion, NREL can now experiment with HPC, commercial cloud computing, and edge computing to envision cleaner energy technology scenarios. Vermillion is designed to be accessible and flexible to meet the needs of researchers now and in the future.
The system software is built on powerful open-source standards, using Linux, OpenStack, and Kubernetes infrastructure, known in the technical world as LOKI. This software stack pools virtual resources for dynamic allocation, providing greater flexibility to meet NREL’s demanding workflows. It also takes advantage of Slurm scheduling to appropriately assign and execute compute tasks and maximize job throughput.
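As a sketch of the kind of Slurm scheduling mentioned above, here is a hypothetical batch script of the sort a researcher might submit. The job name, resource counts, time limit, and program name are illustrative assumptions, not actual NREL values.

```shell
#!/bin/bash
# Hypothetical Slurm batch script; all names and values below are
# illustrative, not NREL's actual configuration.
#SBATCH --job-name=grid-sim        # label shown in the queue
#SBATCH --nodes=2                  # number of compute nodes requested
#SBATCH --ntasks-per-node=32       # MPI ranks per node
#SBATCH --time=01:00:00            # wall-clock limit (hh:mm:ss)
#SBATCH --output=grid-sim-%j.log   # log file, %j = job ID

# Launch the (hypothetical) simulation across the allocated nodes
srun ./energy_model --input scenario.yaml
```

Submitted with `sbatch`, the script lets Slurm decide when and where the job runs, which is how the scheduler keeps a shared cluster’s cores busy and job throughput high.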
In true NREL style, Vermillion’s name is inspired by the natural world and hints at growing possibilities. Named after a tributary of the Green River, Vermillion is the first computing resource dedicated exclusively to NREL, and it will fuel a roaring stream of research.
Vermillion is already positioned to evolve and track the cutting edge of both NREL’s workloads and the IT industry. Just as multiple tributaries amplify a river’s strength, NREL researchers eagerly anticipate the amplified effects of what may soon join Vermillion.
A global crisis brought the supply chain to its knees, and its impacts continue to ripple across all sectors. But nothing has been able to slow NREL researchers on the road to a clean energy future.