Added sample_kind to ModelOutput, without breaking changes #168
Force-pushed from 4b3c32d to 87dc377
This is ready for a final look. I checked that one can take the conda …
Force-pushed from dff557b to ea5e7fd
Do not merge before #186: we need at least one release of metatomic-ase that works with an already released version of metatomic-torch.
Force-pushed from 9353d37 to 4df02e3
…lity Co-Authored-By: Guillaume Fraux <guillaume.fraux@epfl.ch>
Force-pushed from a24c33e to b387342
@HaoZeke I don't understand your change. We wanted it to keep working (not make engines fail) but deprecate it, issuing a warning so that the engines get changed before it gets removed. What's the problem with issuing a warning? It does not make the code fail, no?
Yeah, that's true, but there's also the PR description, where I thought we don't even want deprecation warnings. But OK, rebasing on #199 and then extending it.
Aaah ok, no: the goal was to keep it compatible until metatomic introduces compatibility breaks (probably when multiple backends are supported), and at that point just remove it.
…nd migrate call sites

Pol's fca4e05 kept the existing `TORCH_WARN_DEPRECATION` in `set_per_atom` / `get_per_atom` intentionally: the new `sample_kind` API is preferred and `per_atom` is the deprecated shim. This commit mirrors the conftest/filterwarnings pattern the morfixes branch uses for `ModelOutput.quantity`, applied only to `per_atom` (no overlap with #199; this PR does not depend on morfixes).

Tests / packaging

- `tests/conftest.py` in all three subpackages: regex-strip the C++ `TORCH_WARN_DEPRECATION` line for `per_atom` from stderr in the autouse `fail_test_with_output` fixture. The regex tolerates both Torch 2.3 (`[W model.cpp:N]`) and Torch 2.11+ (`[W<MMDD> HH:MM:SS.fractZ model.cpp:N]`) prefixes. Any other unexpected stderr still fails the test.
- `pyproject.toml` in all three subpackages: ignore the matching Python `DeprecationWarning` so tests exercising the field directly do not blow up under `-W error`.
- `tests/outputs.py`: drop the two `assert message in captured.err` sections from `test_sample_kind`. On Torch 2.3 the C++ warning lands on stderr directly and the conftest regex scrubs it; on Torch 2.11 `PyErr_WarnEx` plus the filterwarnings ignore leaves stderr empty. Asserting a specific stderr shape mid-test was torch-version-brittle. The `pytest.warns(match=...)` guards remain, so the shim is still verified. Drop the now-unused `capfd` parameter.

Internal call site migration

- `metatomic_torch/metatomic/torch/heat_flux.py`: the four `ModelOutput(..., per_atom=...)` call sites for the masses/velocities/energies/heat_flux outputs all use `sample_kind="atom"/"system"` now, so the wrapper does not emit warnings from its own construction path.
- `metatomic_torch/metatomic/torch/model.py`: the dedup check in `_get_requested_inputs` compares `sample_kind` instead of reading the deprecated `per_atom` property.
- `metatomic_torch/tests/heat_flux.py`, `metatomic_ase/tests/heat_flux.py`, `metatomic_ase/tests/calculator.py::test_inputs_different_units`: migrate all `ModelOutput(..., per_atom=...)` to `sample_kind`.
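The conftest scrub described above can be sketched as follows; this is a hypothetical reconstruction (the exact warning text, prefix formats, and fixture wiring in the repo may differ):

```python
import re

# Torch 2.3 prefixes C++ warnings like:  [W model.cpp:123]
# Torch 2.11+ uses something closer to:  [W0415 08:30:12.123456789 model.cpp:123]
# (both formats are approximations based on the commit message above)
PER_ATOM_WARNING = re.compile(
    r"^\[W(?: |\d{3,4} \d{2}:\d{2}:\d{2}\.\S+ )model\.cpp:\d+\]"
    r".*`per_atom` is deprecated.*\n?",
    re.MULTILINE,
)

def scrub_per_atom_warning(stderr: str) -> str:
    """Drop only the known per_atom deprecation line; keep any other stderr."""
    return PER_ATOM_WARNING.sub("", stderr)
```

Any stderr that survives the scrub still fails the autouse fixture, so unrelated noise is not masked.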
Review feedback: warnings should be handled as close to their emission point as possible, not via a broad `filterwarnings` ignore or a conftest regex scrub. No code in this repo (outside the shim test itself) uses `ModelOutput.per_atom` after the earlier call-site migrations, so the pyproject filter and the conftest `re.sub` are unnecessary.

- `python/metatomic_*/tests/conftest.py`: revert all three to the original `assert captured.err == ""` form. No regex pre-processing.
- `python/metatomic_*/pyproject.toml`: drop the ``ignore:`per_atom` is deprecated...:DeprecationWarning`` line. Under `-W error` the only place a `per_atom` `DeprecationWarning` can fire is inside the `test_sample_kind` shim test, which wraps every trigger in `pytest.warns(...)`.
- `python/metatomic_torch/tests/outputs.py::test_sample_kind`: wrap the `ModelOutput(per_atom=True)` constructor in `pytest.warns` (the only remaining unwrapped emission); reintroduce the `capfd` parameter and consume the Torch 2.3 C++ `TORCH_WARN_DEPRECATION` stderr spillover at the end of the test, asserting that every stray line is the known per_atom message. Torch 2.11+ routes through `PyErr_WarnEx`, so stderr stays empty and the loop is a no-op there.
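The `pytest.warns` wrapping can be sketched like this; a minimal stand-in, where `make_output_with_per_atom` replaces the real `ModelOutput(per_atom=True)` constructor and the message text is illustrative:

```python
import warnings
import pytest

def make_output_with_per_atom():
    # Stand-in for ModelOutput(per_atom=True): emits the same kind of
    # DeprecationWarning the shim would (message text is illustrative).
    warnings.warn(
        "`per_atom` is deprecated, use `sample_kind` instead",
        DeprecationWarning,
    )
    return "atom"

def test_sample_kind():
    # Wrapping every deliberate trigger in pytest.warns means no
    # DeprecationWarning escapes to escalate under `-W error`.
    with pytest.warns(DeprecationWarning, match="`per_atom` is deprecated"):
        assert make_output_with_per_atom() == "atom"
```

Because `pytest.warns` both asserts the warning and swallows it, the test passes even when the whole suite runs with `-W error`.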
The `metatensor/lj-test` repo at the currently-pinned SHAs still constructs every `ModelOutput` with `per_atom=True/False`, which now emits the deprecation warning. Under `-W error` that `DeprecationWarning` escalates into an error and kills `tests/heat_flux.py::test_wrap` plus the other heat-flux tests that load the LJ reference model.

Migrate the pin to `HaoZeke/lj-test@f56f6b4`, a direct port of the two previous upstream SHAs with every `ModelOutput(quantity=..., per_atom=...)` rewritten to `ModelOutput(..., sample_kind=...)` and every `outputs[...].per_atom` read rewritten to `outputs[...].sample_kind == "atom"`. See `HaoZeke/lj-test:chore/issue-17-suppress-deprecations` (metatensor/lj-test#17). Bump this back to `metatensor/lj-test@<SHA>` once that branch lands on lj-test main. Both `install_lj_tests` and the inline `pip install` in the `torchsim-torch-tests` environment are bumped together.
My earlier switch to `HaoZeke/lj-test@f56f6b4` broke every test that computes an energy via the reference LJ model: the fork drops `quantity="energy"` from every `ModelOutput(...)` (forward-looking for the morfixes #199 `quantity` deprecation), which causes `AtomisticModel.forward()` to hit the early `continue` in

    if declared.quantity == "" or requested.quantity == "":
        continue

and skip the `unit_conversion_factor(declared.unit, requested.unit)` step that the engine needs when asking for energy in `"ev"` while the model declares `"eV"`. Downstream ASE tests see ~100x-wrong energies.

`metatensor/lj-test@eea52c6` is the existing upstream sample_kind branch commit: it already migrates every `ModelOutput(per_atom=...)` to `ModelOutput(sample_kind=...)` and every `outputs[...].per_atom` read to `outputs[...].sample_kind == "atom"`, so the runtime deprecation path is not hit, but it keeps `quantity=...` intact so unit conversion still runs. That is the right pin for this PR's scope (per_atom only, no quantity overlap). Both `install_lj_tests` and the inline `pip install` in the `torch-tests` env are bumped to `eea52c6`. Bump again once that branch lands on lj-test main and gets a stable SHA.
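The failure mode can be illustrated with a small sketch. Everything here besides the `quantity` guard and the `unit_conversion_factor` name is illustrative (the `Output` class, `conversion_for`, and the factors are made up for the example):

```python
class Output:
    # Illustrative stand-in for a declared or requested model output.
    def __init__(self, quantity="", unit=""):
        self.quantity = quantity
        self.unit = unit

def unit_conversion_factor(declared_unit, requested_unit):
    # Illustrative registry; the real one lives inside metatomic.
    to_ev = {"ev": 1.0, "mev": 1e-3, "hartree": 27.211386245988}
    return to_ev[declared_unit.lower()] / to_ev[requested_unit.lower()]

def conversion_for(declared, requested):
    # Mirrors the early `continue` in AtomisticModel.forward(): an empty
    # quantity on either side skips conversion entirely, so the raw value
    # passes through even when the units differ.
    if declared.quantity == "" or requested.quantity == "":
        return 1.0
    return unit_conversion_factor(declared.unit, requested.unit)
```

With `quantity="energy"` on both sides an eV-to-meV request gets factor 1000; with an empty `quantity` the same request silently gets factor 1, which is exactly the class of wrong-energy bug described above.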
scoped warnings filter in test_sample_kind

Two follow-ups after merging upstream/main (efaa2eb: spin_multiplicity) into this branch:

- `test_spin_multiplicity` and `SpinMultiplicityModel.requested_inputs` construct `ModelOutput(..., per_atom=False)`, which emits the `TORCH_WARN_DEPRECATION` now that the per_atom setter is an active shim. Rewrite both to `sample_kind="system"` (semantically equivalent, no warning).
- `test_sample_kind` previously relied on `pytest.warns(...)` around every property access / constructor to catch the Python `DeprecationWarning`. That only works on Torch 2.11+, which routes `TORCH_WARN_DEPRECATION` through `PyErr_WarnEx`. Torch 2.3 writes straight to stderr without emitting any Python warning, so `pytest.warns` raised `DID NOT WARN` on the Linux Torch 2.3 matrix cell. Wrap the entire test body in `warnings.catch_warnings()` with a local `filterwarnings("ignore", message="`per_atom` is deprecated", ...)` so `-W error` does not escalate on torch versions that do emit the Python warning. After the `catch_warnings` block, verify the shim actually fired by asserting that the expected message is present on `capfd.readouterr().err` (the C++ stderr path is consistent across the torch versions we support) and that no unrelated stderr leaked.

No conftest regex, no `filterwarnings` in any `pyproject.toml`: the Python-side filter is scoped to this one test, which is the sole place in the repo that deliberately exercises the shim.
This PR supersedes #165.
It also implements `sample_kind`, with the short-term goal of enabling per-pair targets, but in this case in a way that doesn't break backward compatibility (new code using the old `ModelOutput` might fail, but old code using the new `ModelOutput` won't). This is achieved by keeping the possibility to pass `per_atom` as an argument, and setting/getting it as a property, even though from now on the only thing `ModelOutput` stores is `sample_kind`.

Thanks to @Luthaf for letting me know that one can set optional arguments in torch exportable classes :)
If you agree with merging this one I will finalize it with some more tests ✌️
📚 Download documentation for this pull-request
⚙️ Download Python wheels for this pull-request (you can install these with pip)