Neural networks (NNs) often assign high confidence to their predictions, even for points far out of distribution, making uncertainty quantification (UQ) a challenge. When they are employed to model interatomic potentials in materials systems, this problem leads to unphysical structures that disrupt simulations, or to biased statistics and dynamics that do not reflect the true physics. Differentiable UQ techniques can find new informative data and drive active learning loops for robust potentials. However, a variety of UQ techniques, including newly developed ones, exist for atomistic simulations, and there are no clear guidelines for which are most effective or suitable for a given case. In this work, we examine multiple UQ schemes for improving the robustness of NN interatomic potentials (NNIPs) through active learning. In particular, we compare incumbent ensemble-based methods against strategies that use single, deterministic NNs: mean-variance estimation (MVE), deep evidential regression, and Gaussian mixture models (GMMs). We explore three datasets ranging from in-domain interpolative learning to more extrapolative out-of-domain generalization challenges: rMD17, ammonia inversion, and bulk silica glass. Performance is measured across multiple metrics relating model error to uncertainty. Our experiments show that none of the methods consistently outperformed the others across the various metrics. MVE only proved effective for in-domain interpolation, while GMM was better out-of-domain, and evidential regression, despite its promise, was not the preferable alternative in any of the cases. Ensembling remained better at generalization and for NNIP robustness. More broadly, cost-effective, single deterministic models cannot yet consistently match or outperform ensembling for uncertainty quantification in NNIPs.

Over the last decade, neural networks (NNs) have increasingly been deployed to study complex materials systems. In addition to modeling quantitative structure-property/activity relationships (QSPR/QSAR), NNs have been used extensively to model interatomic potentials in atomistic simulations of materials 1, 2, 3, 4, 5. When employed to predict the potential energy surfaces (PESs) of materials systems, NN interatomic potentials (NNIPs) recover the accuracy of ab initio methods while being orders of magnitude faster, enabling simulations of larger time and system-size scales 6, 7. Furthermore, NNIPs have displayed sufficient flexibility to learn across diverse chemical compositions and structures 8, 9, 10. As a result, a wide range of NNIP architectures has been developed 11, 12, 13, 14, 15, 16 and used for applications in reactive processes 17, 18, protein design 19, 20, solids 21, 22, solid-liquid interfaces 23, coarse-graining 24, 25, and more. Nevertheless, NNIPs remain susceptible to making poor predictions in extrapolative regimes.
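The ensemble baseline works by training several models and reading their disagreement as the uncertainty. A minimal sketch of the idea, using invented toy ingredients rather than anything from the paper (a synthetic 1-D surrogate for a PES and bootstrapped polynomial fits in place of NNs): member predictions agree where training data exist and diverge in the extrapolative regime.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "potential energy surface": training data only on x in [-1, 1].
x_train = rng.uniform(-1.0, 1.0, size=200)
y_train = np.sin(2.0 * x_train) + 0.05 * rng.normal(size=x_train.size)

def fit_member(x, y, seed):
    """One ensemble member: a degree-5 polynomial fit on a bootstrap resample."""
    r = np.random.default_rng(seed)
    idx = r.integers(0, x.size, x.size)
    return np.polyfit(x[idx], y[idx], 5)

# Ensemble of 8 members; the std of their predictions is the uncertainty.
coeffs = [fit_member(x_train, y_train, s) for s in range(8)]

def ensemble_predict(x):
    preds = np.stack([np.polyval(c, x) for c in coeffs])
    return preds.mean(axis=0), preds.std(axis=0)

mean_in, std_in = ensemble_predict(np.array([0.3]))    # in-domain interpolation
mean_out, std_out = ensemble_predict(np.array([3.0]))  # out-of-domain extrapolation
# Member disagreement (std_out) is far larger outside the training domain.
```

The same disagreement signal is what drives the active learning loop: configurations with high ensemble variance are the informative candidates to label with ab initio calculations.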
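MVE, by contrast, obtains uncertainty from a single deterministic model that outputs both a mean and a variance and is trained with the Gaussian negative log-likelihood. A minimal sketch with the "network" reduced to two trainable scalars (a deliberate simplification; a real MVE head conditions both outputs on the input structure): at the optimum the learned mean and variance match the data statistics.

```python
import numpy as np

# Mean-variance estimation (MVE) in miniature: trainable mean mu and
# log-variance s, fit by gradient descent on the Gaussian negative
# log-likelihood, NLL = mean( 0.5*s + 0.5*(y - mu)^2 * exp(-s) ).
rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=0.5, size=1000)  # toy targets

mu, s = 0.0, 0.0
lr = 0.1
for _ in range(500):
    grad_mu = np.mean(-(y - mu) * np.exp(-s))          # d NLL / d mu
    grad_s = np.mean(0.5 - 0.5 * (y - mu) ** 2 * np.exp(-s))  # d NLL / d s
    mu -= lr * grad_mu
    s -= lr * grad_s

# At the optimum, mu ~ sample mean and exp(s) ~ sample variance, so the
# predicted variance exp(s) can be read directly as a per-prediction uncertainty.
```

Because the variance head is differentiable, this uncertainty can be queried (and optimized against) without retraining multiple models, which is the cost argument for MVE over ensembling.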