The Open Catalyst Project aims to help solve the challenge of building a grid reliant on intermittent renewable power sources.
Instead, Facebook and Carnegie Mellon believe that converting excess renewable energy into another fuel, such as hydrogen or ethanol, could be more scalable. But current techniques are inefficient and rely on expensive and rare electrocatalysts like platinum.
Open Catalyst hopes to discover new, far cheaper catalysts that could radically reduce the cost of conversion.
That’s not an easy task: assuming catalysts are built from up to three of around 40 candidate metals, there are roughly 10,000 combinations of elements. With each combination then tested across different ratios and configurations of elements, the number of possible permutations expands into the billions.
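The arithmetic is easy to check: choosing exactly three distinct metals from 40 gives C(40, 3) = 9,880 combinations, and allowing one- and two-metal catalysts as well pushes the total a little past 10,000.

```python
from math import comb

n_metals = 40

# Catalysts built from exactly three distinct metals:
three_metal = comb(n_metals, 3)  # C(40, 3) = 9,880

# Allowing one-, two-, or three-metal catalysts:
up_to_three = sum(comb(n_metals, k) for k in range(1, 4))

print(three_metal, up_to_three)  # 9880 10700
```

Multiplying each combination by the many possible composition ratios and surface configurations is what drives the search space into the billions.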
Traditional experimentalists may try three or four possible catalyst compositions per year by hand, while modern computational laboratories are able to run 40,000 simulations per year. Open Catalyst aims to expand the reach to billions of tests a year.
That’s still a long way off.
Current quantum mechanical simulation tools, such as density functional theory (DFT), can help researchers focus on catalysts and combinations that prove promising, “estimating the energy of a system and attempting to find the configuration with the lowest energy, or the ‘relaxed’ state,” Facebook AI research scientist Larry Zitnick said.
But DFT requires immense computing power to simulate the movement of atoms, and its cost grows rapidly as the number of atoms in the simulation increases.
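To make the idea of relaxation concrete, here is a minimal sketch: two atoms interacting through a classical Lennard-Jones potential (a cheap stand-in for the quantum-mechanical energies DFT actually computes; the parameters are illustrative, not fitted to any real material) are stepped along the force until the energy stops decreasing.

```python
# Toy "relaxation": find the lowest-energy separation r of two atoms
# under a Lennard-Jones potential, by steepest descent on the energy.

def lj_energy(r, epsilon=1.0, sigma=1.0):
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

def lj_force(r, epsilon=1.0, sigma=1.0):
    # The force is the negative gradient of the energy w.r.t. r.
    return 4 * epsilon * (12 * sigma**12 / r**13 - 6 * sigma**6 / r**7)

r = 1.5                      # initial guess at the atomic separation
for _ in range(500):         # iterative relaxation
    r += 0.01 * lj_force(r)  # step along the force (downhill in energy)

print(round(r, 4))           # analytic minimum is 2**(1/6) ≈ 1.1225
```

Real DFT relaxations do the same downhill search, but every energy and force evaluation requires solving quantum-mechanical equations, which is why a single relaxation can take days.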
Open Catalyst wants to use artificial intelligence to approximate DFT: machine learning models are trained on a comparatively small number of DFT calculations, learning to predict the energy and forces of molecules from past data. Machine learning hopes to revolutionize science, something we looked at in detail last year.
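The surrogate idea can be sketched in a few lines: run a handful of expensive "ground truth" energy calculations, fit a cheap model to them, then answer new queries from the model almost instantly. Everything below is a toy assumption, using a Lennard-Jones function in place of DFT and a two-term least-squares fit in place of Open Catalyst's graph neural networks.

```python
def expensive_energy(r):  # stand-in for a slow DFT calculation
    return 4 * ((1 / r) ** 12 - (1 / r) ** 6)

# "Training set": a small number of expensive calculations.
train_r = [0.95, 1.0, 1.1, 1.2, 1.4, 1.8]
train_E = [expensive_energy(r) for r in train_r]

# Fit E(r) ~ a/r^12 + b/r^6 by solving the 2x2 normal equations.
x1 = [r ** -12 for r in train_r]
x2 = [r ** -6 for r in train_r]
s11 = sum(v * v for v in x1)
s12 = sum(u * v for u, v in zip(x1, x2))
s22 = sum(v * v for v in x2)
b1 = sum(u, e) if False else sum(u * e for u, e in zip(x1, train_E))
b2 = sum(u * e for u, e in zip(x2, train_E))
det = s11 * s22 - s12 * s12
a = (b1 * s22 - b2 * s12) / det
b = (b2 * s11 - b1 * s12) / det

def surrogate_energy(r):  # cheap learned approximation
    return a * r ** -12 + b * r ** -6

print(round(surrogate_energy(1.3), 4), round(expensive_energy(1.3), 4))
```

Once trained, the surrogate costs a few arithmetic operations per query instead of hours of quantum-mechanical simulation; the research challenge is making such models accurate enough on genuinely new catalyst structures.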
Already, researchers are using predictive simulation in areas from climate science, to genomic studies, to the quest to cure cancer. But while energy storage catalyst researchers have begun to experiment in the field, they’ve been held back by a paucity of data to train and test models on.
As part of Open Catalyst, Facebook and Carnegie Mellon have released the largest data set of electrocatalyst structures to date, featuring more than 1.3 million relaxations of molecular adsorptions onto surfaces. This, they hope, should prove a catalyst for other researchers applying AI to DFT.
Much of the work came out of a research group led by Professor Zachary Ulissi at CMU, while Facebook used spare compute cycles at its data centers over the past four months to crunch the data.
“While the creation and release of the OC20 data set marks a major milestone in this research, we’ve only begun to explore the data’s potential for ML models,” Zitnick said. “With Facebook’s high-end servers, each relaxation for the OC20 data set still took between 12 and 72 hours to execute. Our goal is to accelerate this process via AI models so that ultimately each relaxation takes mere seconds to complete.”