Add support for Torchsim #1335
base: main
Conversation
src/atomate2/torchsim/core.py
Outdated
```python
if model_type == TSModelType.FAIRCHEMV1:
    from torch_sim.models.fairchem_legacy import FairChemV1Model

    return FairChemV1Model(model=model_path, **model_kwargs)
if model_type == TSModelType.FAIRCHEM:
    from torch_sim.models.fairchem import FairChemModel

    return FairChemModel(model=model_path, **model_kwargs)
if model_type == TSModelType.GRAPHPESWRAPPER:
    from torch_sim.models.graphpes import GraphPESWrapper

    return GraphPESWrapper(model=model_path, **model_kwargs)
if model_type == TSModelType.MACE:
    from torch_sim.models.mace import MaceModel

    return MaceModel(model=model_path, **model_kwargs)
if model_type == TSModelType.MATTERSIM:
    from torch_sim.models.mattersim import MatterSimModel

    return MatterSimModel(model=model_path, **model_kwargs)
if model_type == TSModelType.METATOMIC:
    from torch_sim.models.metatomic import MetatomicModel

    return MetatomicModel(model=model_path, **model_kwargs)
if model_type == TSModelType.NEQUIPFRAMEWORK:
    from torch_sim.models.nequip_framework import NequIPFrameworkModel

    return NequIPFrameworkModel(model=model_path, **model_kwargs)
if model_type == TSModelType.ORB:
    from torch_sim.models.orb import OrbModel

    return OrbModel(model=model_path, **model_kwargs)
```
For readability, can this block be refactored into a dict like:
```python
import importlib

model_to_import_str = {
    "FAIRCHEMV1": "torch_sim.models.fairchem_legacy.FairChemV1Model",
    "FAIRCHEM": "torch_sim.models.fairchem.FairChemModel",
    ...
}
model_module, model_class = model_to_import_str[TSModelType[model_type]].rsplit(".", 1)
return getattr(importlib.import_module(model_module), model_class)(model=model_path, **model_kwargs)
```
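Expanded to all of the backends in the diff above, that suggestion might look like the sketch below. This is illustrative only: the helper name is hypothetical, and the dotted paths are copied from the imports shown in the block under review, so they should be verified against the installed torch_sim package.

```python
import importlib

# Dotted paths mirror the imports in the block under review (treat as illustrative).
_MODEL_IMPORT_PATHS = {
    "FAIRCHEMV1": "torch_sim.models.fairchem_legacy.FairChemV1Model",
    "FAIRCHEM": "torch_sim.models.fairchem.FairChemModel",
    "GRAPHPESWRAPPER": "torch_sim.models.graphpes.GraphPESWrapper",
    "MACE": "torch_sim.models.mace.MaceModel",
    "MATTERSIM": "torch_sim.models.mattersim.MatterSimModel",
    "METATOMIC": "torch_sim.models.metatomic.MetatomicModel",
    "NEQUIPFRAMEWORK": "torch_sim.models.nequip_framework.NequIPFrameworkModel",
    "ORB": "torch_sim.models.orb.OrbModel",
}


def _load_model(model_type, model_path, **model_kwargs):
    """Hypothetical helper: resolve the model class lazily and instantiate it."""
    module_name, class_name = _MODEL_IMPORT_PATHS[model_type.name].rsplit(".", 1)
    model_cls = getattr(importlib.import_module(module_name), class_name)
    return model_cls(model=model_path, **model_kwargs)
```

Keying the table on `model_type.name` (assuming `model_type` arrives as a `TSModelType` member rather than a plain string) keeps the per-backend imports lazy while collapsing the eight branches into a single lookup.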
src/atomate2/torchsim/schema.py
Outdated
```python
all_properties: list[dict[str, np.ndarray]] = Field(
    ..., description="List of calculated properties for each structure."
)

model_config = ConfigDict(arbitrary_types_allowed=True)
```
Can the properties not be cast to a list or another built-in type? `arbitrary_types_allowed` eliminates the benefits of type checking here.
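If built-in containers are preferred, one possible direction (a sketch only; the class name is hypothetical and this is not the PR's actual schema) is to coerce the arrays during validation so that `arbitrary_types_allowed` can be dropped:

```python
import numpy as np
from pydantic import BaseModel, Field, field_validator


class TorchSimResultDoc(BaseModel):  # hypothetical name, for illustration only
    all_properties: list[dict[str, list]] = Field(
        ..., description="List of calculated properties for each structure."
    )

    @field_validator("all_properties", mode="before")
    @classmethod
    def _arrays_to_lists(cls, value: list[dict]) -> list[dict]:
        # Coerce numpy arrays to plain (possibly nested) lists so that
        # arbitrary_types_allowed is no longer needed; assumes each property
        # value is a non-scalar ndarray, matching the original annotation.
        return [
            {key: np.asarray(val).tolist() for key, val in props.items()}
            for props in value
        ]
```

The trade-off is that the explicit `np.ndarray` annotation is lost, so consumers would need to rebuild arrays with `np.asarray` when reading the document.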
pyproject.toml
Outdated
```toml
"quippy-ase>=0.9.14; python_version < '3.12'",
"sevenn>=0.9.3",
"torchdata<=0.7.1",  # TODO: remove when issue fixed
"torch_sim==0.4.1",
```
Move this to its own optional dependency block, since the module is currently distinct from the forcefields.
Done
esoteric-ephemera left a comment
Some general comments besides the ones on specific lines:
- The TS prefixing in makers, enums, etc. is confusing given that TS is probably more familiar as "transition state" than "torchsim" - can you change this to TorchSim?
- Are you envisioning this being added as a separate (as it's currently implemented) or additive module to the existing forcefields stuff? If the latter, the schemas would have to be merged for the job outputs.
reduce redundancy in initialization logic
Good call.
I think this should be a separate module. It would be really messy to integrate it with the existing forcefields stuff. TorchSim generally expects many -> many calculations (list[structure] -> list[structure]) and has different output files and such.

Encouraging! In that case, let me reframe. It looks like it would be possible, but I anticipate it would be a major headache, one I am reluctant to take on. Though they both run MLIPs, ASE and TorchSim are different software packages with pretty different APIs. I don't see a major reason to have them share schema or logic.

> The motivation should always be the following:

I hear you and I am sympathetic to that argument. In this case, I feel there is a tradeoff between the immediate adoptability of the TorchSim interface and its overall quality. After looking back and forth at the ASE and TorchSim schema for the past 15 minutes, I don't think it's possible to adapt the TorchSim API to fit the ASE schemas without adding complexity, reducing readability, and making the overall API less natural and maintainable. I would love for users currently using ASE to be able to quickly and reliably switch to TorchSim, but it's not clear to me that equating the schemas is the best way to do that. I would be happy to write a transition guide outlining the schema differences and how to transition from ASE -> TorchSim and add it to this PR.
pyproject.toml
Outdated
```toml
    "deepmd-kit>=2.1.4",
]
torchsim = [
    "torch-sim-atomistic==0.4.1",
```
Is a version pin on python needed?

```toml
torchsim = [
    "torch-sim-atomistic==0.4.1, py>3.11",
]
```

Saw python 3.11 is being skipped for the tests.
Yes! I didn't actually know you could do that. TorchSim, unfortunately, only supports python >= 3.12.
EDIT: > to >=
or >=?
whoops, yes. > 3.12 would be even sillier 😅
@JaGeo @esoteric-ephemera do you have any idea why the abinit tests might be failing here? This seems unrelated to any changes I made wrt TorchSim?

I think you can safely ignore it, can happen with parallel runners and there should be a fix for this in the next release of abipy.

Other than the unrelated failing tests, I think this is ready to merge? The tests are passing locally and I think all the suggestions are addressed.
Summary
This PR adds support for TorchSim, namely:
- `integrate`, `optimize` and `static` functions
- `integrate`, `optimize` and `static` jobs

NOTE: this PR uses StrEnums (a minimal sketch is shown below), which are not supported in Python 3.10; see #1334
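For context, `enum.StrEnum` was only added in Python 3.11, which is the root of the 3.10 incompatibility. A minimal sketch of such an enum, with member names mirroring the dispatch block reviewed above (the actual definition in the PR may differ):

```python
from enum import StrEnum, auto


class TSModelType(StrEnum):
    """Supported TorchSim model backends (illustrative sketch, not the PR's exact enum)."""

    FAIRCHEMV1 = auto()  # with StrEnum, auto() yields the lowercase member name
    FAIRCHEM = auto()
    GRAPHPESWRAPPER = auto()
    MACE = auto()
    MATTERSIM = auto()
    METATOMIC = auto()
    NEQUIPFRAMEWORK = auto()
    ORB = auto()
```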
Additional dependencies introduced (if any)
Checklist
Work-in-progress pull requests are encouraged, but please put [WIP] in the pull request title.
Before a pull request can be merged, the following items must be checked:
The easiest way to handle this is to run the following in the correct sequence on your local machine. Start with running `ruff` and `ruff format` on your new code. This will automatically reformat your code to PEP8 conventions and fix many linting issues.

- Run `ruff` on your code.
- Type check your code.

Note that the CI system will run all the above checks. But it will be much more efficient if you already fix most errors prior to submitting the PR. It is highly recommended that you use the pre-commit hook provided in the repository. Simply run `pre-commit install` and a check will be run prior to allowing commits.