Changelog
All notable changes to metatrain are documented here, following the Keep a Changelog format. This project follows Semantic Versioning.
Unreleased
Added
A new dataset format, MemmapDataset, allows storing data on disk in a memory-mapped format, improving performance compared to DiskDataset on some filesystems (see the sketch after this list).
FlashMD was added as a new architecture allowing long-stride molecular dynamics simulations. Its implementation is based on PET.
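As a rough illustration of why memory mapping can help, here is a minimal NumPy sketch. This is not metatrain's actual implementation or file layout; the file name, dtype, and shapes are made-up illustration values.

```python
# Minimal sketch of the memory-mapping idea behind MemmapDataset. This is
# plain NumPy, not metatrain's implementation: file name, dtype, and shape
# are made-up illustration values.
import numpy as np

n_structures, n_features = 10_000, 256

# Write a large array to disk once.
data = np.memmap("features.mmap", dtype=np.float32, mode="w+",
                 shape=(n_structures, n_features))
data[:] = np.random.rand(n_structures, n_features).astype(np.float32)
data.flush()

# Reopen read-only: the OS pages in only the rows that are touched, so
# fetching one sample never loads the whole file into RAM.
data = np.memmap("features.mmap", dtype=np.float32, mode="r",
                 shape=(n_structures, n_features))
sample = np.asarray(data[42])  # copies just this row into memory
```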
Changed
The PET model received a major update, including new default hyperparameters, a new transformer architecture, and a new featurizer. Please refer to the updated documentation for more details.
The SOAP-BPNN and PET trainers now use a cosine annealing learning rate scheduler with warmup (see the sketch after this list).
NanoPET has been deprecated in favor of the stable PET architecture. The deprecated.nanopet architecture is still available for loading old checkpoints, but it will not receive any updates or bug fixes.
The NanoPET and GAP architectures now use the new composition model, and the old composition model has been removed.
The LLPR module is now a stable architecture, instead of a utility module. It can be trained from the command line in the same way as other architectures.
We now require Python >= 3.10.
The Scaler model in metatrain now calculates per-block and per-property scales. For atomic targets, it calculates per-element scales.
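For reference, a cosine annealing schedule with linear warmup can be assembled from standard PyTorch schedulers. This is a generic sketch with made-up warmup length and step counts, not the exact configuration used by the SOAP-BPNN and PET trainers.

```python
# Generic cosine-annealing-with-warmup schedule assembled from standard
# PyTorch schedulers. The model, warmup length, and step counts are made-up
# stand-ins, not the trainers' actual hyperparameters.
import torch

model = torch.nn.Linear(16, 1)  # stand-in model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

warmup_steps, total_steps = 1_000, 100_000  # assumed values

# Ramp the LR linearly from 1% to 100% of its base value, then decay it
# along a cosine curve over the remaining steps.
warmup = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=0.01, total_iters=warmup_steps
)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=total_steps - warmup_steps
)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, cosine], milestones=[warmup_steps]
)

for step in range(total_steps):
    # ... forward pass, loss.backward(), optimizer.step() ...
    scheduler.step()  # advance the learning-rate schedule once per step
```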
Version 2025.10 - 2025-09-09
Fixed
Fixed a bug with the composition model during transfer learning.
Changed
Refactored the loss.py module to provide an easier-to-extend interface for custom loss functions.
Updated the trainer checkpoints to account for changes in the loss-related hypers.
Version 2025.9.1 - 2025-08-21
Fixed
Fixed incompatibilities with PET-MAD when updating checkpoints and exporting.
Version 2025.9 - 2025-08-18
Added
Use the best model instead of the latest model for evaluation at the end of training.
Log the best epoch when loading checkpoints.
Allow changing the scheduler factor in PET.
Introduce checkpoint versioning and updating.
Added CI tests on GPU.
Log the number of model parameters before training starts.
Add additional logs to the checkpoints, model, and output directories at the end of training.
Cache files locally and re-use them when downloading checkpoints and models from Hugging Face.
extra_data is now a valid section in the options.yaml file, allowing users to add custom data to the training set. The data is included in the dataloader and can be used in custom loss functions or models (a hypothetical configuration sketch follows this list).
mtt eval can now evaluate models on a DiskDataset.
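Below is a hypothetical sketch of what such an options.yaml could look like, generated from Python for illustration. The placement of the extra_data section and its read_from/key fields are assumptions modeled on the targets section; check the metatrain documentation for the actual schema.

```python
# Hypothetical options.yaml with an extra_data section, generated from Python
# for illustration. The placement of extra_data and its read_from/key fields
# are assumptions modeled on the targets section; check the metatrain
# documentation for the actual schema.
import yaml

options = {
    "architecture": {"name": "pet"},
    "training_set": {
        "systems": {"read_from": "dataset.xyz", "length_unit": "angstrom"},
        "targets": {"energy": {"key": "energy", "unit": "eV"}},
        # Assumed: extra per-structure quantities, carried through the
        # dataloader for use in custom loss functions or models.
        "extra_data": {"charges": {"read_from": "dataset.xyz", "key": "charges"}},
    },
}

with open("options.yaml", "w") as f:
    yaml.safe_dump(options, f, sort_keys=False)
```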
Changed
Updated to a new general composition model.
Updated to a new implementation of LLPR.
Fixed
Fixed device and dtype not being set during LoRA fine-tuning in PET.
Log messages are now shown when training with restart="auto".
Fixed incorrect sub-section naming in the Wandb logger.
Version 2025.8 - 2025-06-11
Changed
Checkpoints for fine-tuning are now passed from the options.yaml file.
Version 2025.7 - 2025-05-27
Changed
Metatrain is now built on top of metatomic instead of metatensor.torch.atomistic. Please refer to https://docs.metatensor.org/metatomic/ to learn how to use the new models.
Version 2025.6 - 2025-04-28
Fixed
PET can now evaluate on single-atom structures without crashing.
The metatrain dataloader no longer loads all batches ahead of each epoch.
Added
NanoPET and PET can now train on non-conservative stresses.
Users can now choose the name of the extension directory in mtt train and mtt export via the --extensions (or -e) option.
Updated to metatensor-torch 0.7.6, adding support for torch 2.7.
PET now supports gradient clipping as a new training hyperparameter (see the sketch after this list).
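Gradient clipping itself is a standard PyTorch mechanism; the sketch below shows what it does in a generic training step. The hyperparameter name PET uses is not shown here, and max_norm=10.0 is an arbitrary example value rather than PET's default.

```python
# Generic gradient-clipping step in plain PyTorch. PET exposes the clipping
# threshold as a training hyperparameter; max_norm=10.0 below is an arbitrary
# example value, not PET's default.
import torch

model = torch.nn.Linear(16, 1)  # stand-in model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

inputs, target = torch.randn(8, 16), torch.randn(8, 1)
loss = torch.nn.functional.mse_loss(model(inputs), target)

optimizer.zero_grad()
loss.backward()
# Rescale all gradients so that their global norm does not exceed max_norm,
# preventing a single noisy batch from destabilizing training.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
optimizer.step()
```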
Changed
Training and exporting models without extensions will no longer lead to the creation of an empty directory for the extensions.
The SOAP-BPNN model now uses torch-spex instead of featomic as its SOAP backend.
PET from the previous version is now deprecated and accessible as deprecated.pet, while the old NativePET (experimental.nativepet) is now called PET (pet in training option files).
The Angstrom character is now represented as A and not Å in the training logs.
Version 2025.5 - 2025-04-13
Fixed
Fixed more composition model issues.
Added
Updated to metatensor-torch 0.7.5 to allow training on non_conservative_forces and non_conservative_stress targets.
Added NativePET as a readable, efficient, backward-compatible PET implementation.
Added a Wandb logger.
Loss history is now saved in a .csv file.
Version 2025.4 - 2025-03-29
Changed
Upgraded to metatensor.torch 0.7.4, which gives access to batched ASE evaluation.
Version 2025.3 - 2025-03-25
Fixed
Fixed a bug in the composition model, affecting SOAP-BPNN and NanoPET.
Changed
metatrain.utils.io.load_model() no longer copies a remote model to the current directory.
Version 2025.2 - 2025-03-11
Added
Implemented a long-range featurizer as a utility for all models.
Sped up system preparation.
Changed
Removed the biases in SOAP-BPNN’s linear layers (see the sketch below).
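Concretely, removing the biases corresponds to constructing the linear layers without an additive bias term, as in this minimal PyTorch illustration (the layer sizes are arbitrary):

```python
# Minimal illustration of a bias-free linear layer in PyTorch; the layer
# sizes are arbitrary and do not reflect SOAP-BPNN's actual architecture.
import torch

layer = torch.nn.Linear(64, 32, bias=False)
assert layer.bias is None  # the layer now applies only a weight matrix
```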
Fixed
Fixed the NanoPET multi-GPU error message.
Fixed the device for fixed composition weights.
Version 2025.1 - 2025-02-20
Added
Support for Python 3.13 and ase >= 3.23.
Fixed
Some irrelevant autograd warnings.
Version 2025.0 - 2025-02-19
Added
First release outside of the lab.