The world is currently experiencing two of the largest societal upheavals since the beginning of the Industrial Revolution. One is
the rapid improvement and implementation of artificial intelligence (AI) tools, while the other is the sprint toward clean energy deployment in the face of the global climate crisis.
Both of these technological changes will completely alter humanity’s trajectory. What’s more, their fates are intertwined.
Rick Stevens – the Associate Laboratory Director for Computing, Environment, and Life Sciences at Argonne National Laboratory – has been thinking deeply about how these two revolutions will interact. In fact, he co-authored Argonne’s AI for Energy report, which discusses the lab’s current work as well as its future aspirations for deploying AI tools during the clean energy revolution.
I was lucky enough to sit down with Stevens and discuss the report, as well as his musings on how AI could and should be deployed in the energy sector. While we couldn’t cover the entirety of the 70-page report’s contents, Stevens outlined some specific potential use cases of AI within energy, as well as the challenges we’ll need to overcome.
A General Acceleration of Innovation
The report outlined five major areas within energy that AI could influence: nuclear energy, the power grid, carbon management, energy storage, and energy materials. As we began our discussion, Stevens noted that AI in energy should result in a “general acceleration of innovation.”
He initially mentioned nuclear reactors as a place where AI could accelerate certain necessary processes. The report itself stated that one of the largest obstacles to advanced nuclear reactors in the U.S. is a “slow, expensive, and convoluted regulatory process.” Streamlining that process is a task well suited to AI.
“On the nuclear reactor front, one of the biggest targets for that community right now is trying to streamline licensing and helping to build reactors on a timeline within the budget,” Stevens said in our interview. “This is, of course, a huge problem for these projects.”
Staying within a timeline and a budget for nuclear reactors is challenging, as obtaining a construction permit and operating license for a new reactor in the U.S. can drag on for more than five years and can sometimes take decades. The report suggested that multimodal large language models (LLMs) could help accelerate this process.
By training on datasets of scientific literature, technical documents, and operational data, these LLMs could help streamline and expedite the nuclear regulatory licensing and compliance process. In a sense, they could act as virtual subject matter experts that guide humans through the complicated regulatory process. Beyond nuclear reactors, Stevens mentioned that the same sort of foundation model could help with the licensing process for renewable energy sources like wind and solar.
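To make that concrete, here is a minimal sketch of the retrieval-augmented pattern such a virtual expert might follow: retrieve the most relevant regulatory passages, then constrain a domain-tuned model to answer from them. The document snippets, the scoring function, and the llm_complete() helper are all hypothetical stand-ins, not Argonne’s actual pipeline.

```python
# Hypothetical sketch of a retrieval-augmented "virtual subject matter expert"
# for regulatory documents. Nothing here reflects a real licensing corpus.
from collections import Counter
import math

corpus = {
    "construction-permits": "Requirements for construction permits and operating licenses ...",
    "environmental-review": "Steps in the environmental impact assessment ...",
}

def llm_complete(prompt: str) -> str:
    """Stand-in for a call to a fine-tuned domain LLM."""
    return f"(model response grounded in {prompt.count('[')} retrieved excerpts)"

def score(query: str, doc: str) -> float:
    """Crude bag-of-words overlap; a real system would use dense embeddings."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values()) / math.sqrt(len(doc.split()) + 1)

def answer(query: str) -> str:
    # Retrieve the most relevant passages, then constrain the model to them
    # so its answers stay grounded in the source documents.
    top = sorted(corpus.items(), key=lambda kv: score(query, kv[1]), reverse=True)[:2]
    context = "\n\n".join(f"[{name}] {text}" for name, text in top)
    return llm_complete(f"Using only these excerpts, answer:\n{context}\n\nQ: {query}\nA:")

print(answer("What permits does a new reactor need?"))
```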
This is an overarching strategy that will apply to all scientific endeavors, not just energy. Stevens mentioned the Frontiers in Artificial Intelligence for Science, Security, and Technology (FASST) initiative from the Department of Energy. Among other goals, this federal initiative is pushing to build capable foundation models that are experts in specific domains.
“The strategy that we’ve been working on in the FASST initiative is to build a handful of very capable foundation models,” Stevens said. “Think of them like ChatGPT, but they are experts in some specific domain. You might imagine an expert model in the grid that knows everything about how grids work. The grid dynamics, the regulatory issues, the structural issues, the technical issues, the geopolitical issues – everything that humanity knows about building power grids, you could imagine a model that has all of that knowledge.”
With such potential for acceleration from AI, it will also be important to consider why we want to accelerate certain scientific fields. For instance, Stevens mentioned drug development, where the success of these projects is literally a matter of life and death.
“You have a real motivation for trying to go faster, but you also want to go better,” Stevens said. “I think we need to help people understand that when we talk about accelerating science, we’re not just trying to turn the crank faster. We’re trying to build a better crank.”
This discussion will be especially relevant as we address the energy infrastructure issues that led us to the current climate crisis. The worst-case predictions for climate change involve mass migration, famine, and water shortages. While it’s not a silver-bullet solution, using AI tools to assist in the clean energy transition is of the utmost importance.
New Ways to Do Energy Science
As AI tools are relatively new – or at least many of their current capabilities are – implementing these solutions will require innovative ways of thinking. Stevens mentioned the Stormer project as one area of AI with versatile use cases. Stormer is a weather-specific vision transformer that can predict the global atmosphere up to 14 days into the future and is as accurate as – and sometimes more accurate than – current partial differential equation prediction methods.
“(Stormer is) orders of magnitude faster (than current solutions), which means you can get a 10-day forecast in a few minutes,” Stevens said. “If you think about the application of that in the context of energy – say you’re running a wind farm and you’re trying to do capacity planning or plan maintenance. You’ll know what you have to anticipate.”
Stevens continued: “So far, that’s my favorite application because a large part of energy production and market-based pricing and where power is coming from in the grid is a prediction problem that tries to link up supply and demand. If we can get better models that can allow us to predict the factors that are affecting supply and demand, that means we can run at a higher efficiency. We can reduce cost and we can also help the market price better.”
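As a toy illustration of the capacity-planning idea, the sketch below maps a dummy 10-day hourly wind-speed forecast through a simplified turbine power curve to estimate farm output. The cut-in, rated, and cut-out speeds and the cubic ramp are illustrative, not a real turbine specification, and the forecast speeds are hard-coded where a Stormer-style model would supply them.

```python
# Illustrative only: turning a fast ML weather forecast into a rough
# wind-farm output estimate for capacity planning.

CUT_IN, RATED_SPEED, CUT_OUT = 3.0, 12.0, 25.0   # m/s, illustrative values
RATED_POWER_MW = 3.0                              # per turbine, illustrative

def turbine_power_mw(wind_speed: float) -> float:
    """Simplified power curve: cubic ramp between cut-in and rated speed."""
    if wind_speed < CUT_IN or wind_speed > CUT_OUT:
        return 0.0
    if wind_speed >= RATED_SPEED:
        return RATED_POWER_MW
    frac = (wind_speed - CUT_IN) / (RATED_SPEED - CUT_IN)
    return RATED_POWER_MW * frac ** 3

def expected_output_mwh(hourly_speeds: list[float], n_turbines: int) -> float:
    """Sum per-turbine hourly power (MW x 1 h = MWh) over the horizon."""
    return n_turbines * sum(turbine_power_mw(v) for v in hourly_speeds)

# A Stormer-style model would supply these; here it's a flat dummy forecast.
speeds = [8.0] * 240   # 10 days of hourly wind speeds
print(f"{expected_output_mwh(speeds, 50):.0f} MWh expected over 10 days")
```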
On top of applying these AI tools in innovative ways, scientists in the energy space will also have to rethink how we do science. Stevens mentioned that AI systems may benefit from operating under what he calls inverse design.
Science currently proceeds by hypothesis, Stevens explained: a scientist makes an educated guess about what might be correct, then runs experiments to test that guess.
While that process works wonderfully for humans, the implementation of AI tools might take a different path.
“If an AI can learn an entire domain deeply and it can reason about a specific material, then you can turn the whole process upside down,” Stevens said. “You can say, ‘Look, I want a material that behaves like this – I shine a light on it and it turns purple.’ Rather than having to work forward through thousands of candidates and trying to search for things that turn purple as opposed to green, the system would operate under an inverse design. It might say, ‘Here’s the thing that makes purple when you shine light on it.’ This idea that you’re directly going to a solution is this idea of inverse design.”
Stevens is using an easy-to-understand example here with the purple-green distinction, but it isn’t hard to see how such an inverse design would be radically advantageous for scientists working on discovering new energy materials.
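A minimal numerical sketch of the idea: take a (pretend) learned forward model that maps a material descriptor to a property, then search backwards for a descriptor that hits a target value. The surrogate weights, the property, and the target number are all invented for illustration.

```python
# Toy inverse design: search a (pretend) learned forward model backwards
# for an input that produces a target property value.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=4)   # stand-in for a trained surrogate's parameters

def forward_model(x: np.ndarray) -> float:
    """Pretend property predictor, e.g. a peak absorption wavelength in nm."""
    return float(400 + 300 / (1 + np.exp(-W @ x)))

def inverse_design(target: float, steps: int = 500, lr: float = 0.05) -> np.ndarray:
    """Greedy coordinate search: keep any step that moves the prediction
    closer to the target. Real systems use gradients or generative models."""
    x = np.zeros(4)
    for _ in range(steps):
        for i in range(4):
            for delta in (lr, -lr):
                trial = x.copy()
                trial[i] += delta
                if abs(forward_model(trial) - target) < abs(forward_model(x) - target):
                    x = trial
    return x

x_star = inverse_design(target=620.0)   # illustrative target value
print(forward_model(x_star))            # should land near 620
```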
Pro-Science, But Never Anti-Human
It’s impossible to discuss AI innovations without also addressing the common fear that these tools will replace people. In Stevens’ mind, nothing could be further from the truth when it comes to integrating AI into the clean energy transition. When asked how we can safely apply AI tools to domains that demand success, like nuclear reactors, he had quite a pithy response:
“Well, humans also sometimes get things wrong, and that is really important,” Stevens said. “We need to understand how things currently fail. Not just how AI fails, but how do complex systems where people are already making decisions fail?”
We already operate in a world of human imperfection. As such, we embed checks and balances within our many complicated systems to catch humans who may be incorrect, incompetent, or malicious. Stevens stated that we’ll have to do much the same for AI, and he offered a clarifying metaphor.
“Imagine you have somebody who can hit a lot of home runs, but they also strike out a lot,” Stevens said. “The question is how do you minimize the strikeouts while maximizing the home runs? More specifically for AI, can we build AI systems that have more awareness of their own mistakes?”
Stevens mentioned that there’s a technical term for this within AI: uncertainty quantification. The idea is that users want the AI to output a result, but they also want it to estimate how likely it is that the result is correct.
In a perfect world, this would allow us to tell the AI to only relay information to us that is correct – but we don’t live in a perfect world. Stevens stated that solving the problem of determining the validity of what a model is outputting is a huge area of research.
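One common (though by no means the only) approach is an ensemble: train several models on resampled data, treat the spread of their predictions as an uncertainty estimate, and abstain when the spread is too wide. The sketch below uses toy polynomial fits and a made-up abstention threshold purely for illustration.

```python
# Toy uncertainty quantification via a bootstrap ensemble of polynomial fits.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=200)
y = np.sin(X) + rng.normal(0, 0.1, size=200)

def fit_member(seed: int) -> np.ndarray:
    """One ensemble member, fit on a bootstrap resample of the data."""
    idx = np.random.default_rng(seed).integers(0, len(y), len(y))
    return np.polyfit(X[idx], y[idx], deg=5)

ensemble = [fit_member(s) for s in range(10)]

def predict_with_uncertainty(x: float, max_std: float = 0.2):
    preds = np.array([np.polyval(coeffs, x) for coeffs in ensemble])
    mean, std = preds.mean(), preds.std()
    # Abstain (return None) when the members disagree too much.
    return (mean, std) if std <= max_std else (None, std)

print(predict_with_uncertainty(0.5))   # inside the training range: confident
print(predict_with_uncertainty(6.0))   # far outside: wide spread, abstains
```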
To solve problems like this at a larger scale, the report mentions that “laboratories must establish a leadership computing ecosystem to train and host data and foundation models at ever-increasing scales.” To Stevens, a “leadership computing ecosystem” would have several components.
“One aspect is that they train a big foundation model,” Stevens said. “These take many months on exascale-class machines. We would need essentially dedicated multi-exascale-class hardware at the heart of the ecosystem for training. That’s what FASST is building out with even larger machines, heading toward these 100,000-AI-exaflop-class devices.”
On top of these centralized machines, Stevens mentioned that this leadership computing ecosystem would also need to focus on edge devices.
He described a scenario in which someone is monitoring a real-time system such as a generator, the grid, or some other complex energy system. Sensor data would flow into the model for inference, and a digital-twin simulator might run in parallel. In such a scenario, the big machine would do the heavy lifting for the foundation models, while coordinated sensors and other devices on the edge collect data.
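A rough sketch of that monitoring loop, with everything invented for illustration: a stand-in sensor stream, a stand-in digital twin, and a residual threshold that decides when to escalate to the heavyweight central model.

```python
# Illustrative edge-monitoring loop: compare live sensor readings against a
# digital twin and flag large deviations for deeper analysis upstream.
import random
import statistics

def read_sensor(t: int) -> float:
    """Stand-in telemetry feed (e.g. generator output in MW); t=7 injects a fault."""
    return 100.0 + 2.0 * random.random() + (15.0 if t == 7 else 0.0)

def twin_predict(t: int) -> float:
    """Stand-in for the digital-twin simulation of the same signal."""
    return 101.0

def monitor(horizon: int = 10, threshold: float = 5.0) -> None:
    residuals = []
    for t in range(horizon):
        residual = read_sensor(t) - twin_predict(t)
        residuals.append(residual)
        if abs(residual) > threshold:
            # The edge device only flags; heavy analysis would run on the
            # central foundation-model infrastructure.
            print(f"t={t}: deviation {residual:.1f} MW -> escalate upstream")
    print(f"typical residual: {statistics.mean(residuals):.2f} MW")

monitor()
```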
As scientists build these foundation models, the pipelines of clean data feeding into them will require model fine-tuning as well as alignment. Stevens said one might think of this as a layered process integrating many different kinds of facilities. He calls this an “integrated research infrastructure.”
“The concept is to tie the facilities together with high-speed networking, common APIs, common data interfaces, and control interfaces, so AI can read data directly from these facilities,” Stevens said. “If you were in a scenario where it makes sense to control them with AI, you would have a control interface. And you would tie all of that together with these inference engines.”
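As a loose illustration of what a common data and control interface could look like (this is not DOE’s actual specification), each facility might implement the same small contract so an inference engine can treat them uniformly:

```python
# Hypothetical "common API" across facilities: every site exposes the same
# read/control surface, so AI systems can address them interchangeably.
from typing import Protocol

class Facility(Protocol):
    def read_data(self, channel: str) -> float: ...
    def control(self, setpoint: str, value: float) -> None: ...

class WindFarm:
    """One concrete facility implementing the shared interface (toy data)."""
    def read_data(self, channel: str) -> float:
        return {"wind_speed": 9.2, "output_mw": 41.5}.get(channel, 0.0)

    def control(self, setpoint: str, value: float) -> None:
        print(f"WindFarm: set {setpoint} = {value}")

def ai_control_step(facility: Facility) -> None:
    # An inference engine reads through the common data interface and, where
    # appropriate, writes back through the common control interface.
    if facility.read_data("wind_speed") > 9.0:
        facility.control("curtailment_pct", 0.0)

ai_control_step(WindFarm())
```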
On top of this, a leadership computing ecosystem wouldn’t just share resources – it would also create a structured foundation on which to build new knowledge. AI tools are capable of thinking in ways that humans cannot, and this can often lead to exciting discoveries.
During our interview, we discussed a research project in which a surrogate model was trained on basic quantum mechanical results. Eventually, the model began to predict the formation of salt crystals that it had never been directly shown. While this is interesting in its own right, Stevens thinks we can take it a step further.
“If I integrate what we know about some domain, the model can synthesize that and make reasonable predictions like with these salt crystals – but we already knew about salt crystals,” Stevens said. “The question is whether it can make predictions about phenomena that we don’t know about.”
This is exactly why AI will be a vital tool in the clean energy revolution. We have been using fossil fuels and legacy energy systems for so long that shifting gears will require new ways of thinking. While humans will obviously play a role in this shift, AI is capable of generating the new and innovative ideas that will help us stave off the worst effects of the climate crisis.
The integration of AI into the energy sector represents a pivotal moment in human history, where technological advancement intersects with the urgent need for sustainable energy solutions. As we navigate this transformative journey, it will be important to remember that AI should complement human expertise and be guided by ethical considerations.
[Source: EnterpriseAI]