It’s Time to Get Comfortable With Uncertainty in AI Model Training


Researchers at Pacific Northwest National Laboratory are helping scientists understand the strengths and limits of AI network models, such as the MACE model shown here, to predict which molecules will have desirable properties for practical uses. (Image by Shannon Colson | Pacific Northwest National Laboratory)

June 3, 2025 | Originally published by Pacific Northwest National Laboratory (PNNL) on April 24, 2025

RICHLAND, Wash.—It’s obvious when a dog has been poorly trained. It doesn’t respond properly to commands. It pushes boundaries and behaves unpredictably.

The same is true with a poorly trained artificial intelligence (AI) model. Only with AI, it’s not always easy to identify what went wrong with the training.

Research scientists around the world are working with a variety of AI models trained on experimental and theoretical data. The goal: to predict a material's properties before taking the time and expense to create and test it. They are using AI to design better medicines and industrial chemicals in a fraction of the time required by experimental trial and error.

But how can they trust the answers that AI models provide? It’s not just an academic question. Millions of investment dollars can ride on whether AI model predictions are reliable.
