[patch] Fall back on cloudpickle #411

Merged

liamhuber merged 21 commits into patch_bump from fallback_on_cloudpickle on Aug 9, 2024

Conversation

liamhuber (Member)

Fall back on `cloudpickle` when the `pickle` backend fails.
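
In outline, the fallback amounts to catching the stdlib pickler's serialization errors and retrying with cloudpickle. The sketch below uses invented names and is not this PR's actual implementation:

```python
import pickle

import cloudpickle  # third-party: pip install cloudpickle


def dumps_with_fallback(obj) -> bytes:
    """Serialize with the stdlib pickler, falling back on cloudpickle."""
    try:
        return pickle.dumps(obj)
    except (pickle.PicklingError, TypeError, AttributeError):
        # cloudpickle handles objects plain pickle rejects, e.g. lambdas,
        # locally defined functions, and dynamically created classes.
        return cloudpickle.dumps(obj)


# cloudpickle emits an ordinary pickle stream, so one loader suffices:
restored = pickle.loads(dumps_with_fallback(lambda x: x + 1))
assert restored(1) == 2
```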

liamhuber and others added 18 commits August 6, 2024 20:07

* Rename workflow in tests

With the introduction of a pickle backend, this test wound up failing. I haven't been able to work out exactly why, but since pickle is much faster, my guess is that something in the test parallelization got into a race condition with deleting a saved workflow from another test. Simply renaming it here solved the issue, an innocuous change.

* Fix root cause of storage conflict

The reason the workflow storage tests were failing after introducing pickle was that the original version of the test had the try/finally scope and the for-loop scope in the wrong order. I got away with it earlier because of interplay between the outer-most member of the loop and the default storage backend. Actually fixing the problem is as simple as ensuring the "finally delete" clause happens after _each_ loop step. This does that and reverts the renaming of the workflow.

* Remove keyword argument from pure-decorator

You're only supposed to use it as a decorator to start with, so the kwarg was senseless.

* Add factory import source

This is necessary (but not sufficient) to get `as_dataclass_node`-decorated classes to pickle. The field name is a bit off compared to `Function` and `Macro`, since we now decorate a class definition instead of a function definition, but it's close.

* Expose `as_dataclass_node` in the API

Now that it's pickling as well as anything else.

* [patch] Fall back on cloudpickle

When the pickle backend fails.

github-actions bot commented Aug 8, 2024

Binder: launch a binder notebook on branch `pyiron/pyiron_workflow/fallback_on_cloudpickle`

@liamhuber added the enhancement label Aug 8, 2024
Base automatically changed from as_dataclass_node_pickle to patch_bump August 9, 2024 16:49
@liamhuber merged commit 9fbf820 into patch_bump Aug 9, 2024
2 checks passed
@liamhuber deleted the fallback_on_cloudpickle branch August 9, 2024 16:51
liamhuber added a commit that referenced this pull request Aug 13, 2024
* [patch] Pickle storage backend (#408)

* Refactor storage test

* Remove empty line

* Rename workflow in tests

With the introduction of a pickle backend, this test wound up failing. I haven't been able to work out exactly why, but since pickle is much faster, my guess is that something in the test parallelization got into a race condition with deleting a saved workflow from another test. Simply renaming it here solved the issue, an innocuous change.

* Introduce `pickle` as a backend for saving

* Fix root cause of storage conflict

The reason the workflow storage tests were failing after introducing pickle was that the original version of the test had the try/finally scope and the for-loop scope in the wrong order. I got away with it earlier because of interplay between the outer-most member of the loop and the default storage backend. Actually fixing the problem is as simple as ensuring the "finally delete" clause happens after _each_ loop step. This does that and reverts the renaming of the workflow.

* Again, correctly order try/finally and for-loops
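
For illustration, the scoping fix in miniature; the function and argument names are hypothetical, not the actual test code:

```python
def run_storage_tests_buggy(backends, save, delete):
    # Wrong order: the finally clause wraps the whole loop, so cleanup
    # runs only once, after the last backend, and earlier iterations
    # leave saved workflows behind for parallel tests to collide with.
    try:
        for backend in backends:
            save(backend)
    finally:
        delete()


def run_storage_tests_fixed(backends, save, delete):
    # Right order: the "finally delete" clause runs after each loop step.
    for backend in backends:
        try:
            save(backend)
        finally:
            delete()
```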

* [patch] `as_dataclass_node` pickle compatibility (#410)

* Remove keyword argument from pure-decorator

You're only supposed to use it as a decorator to start with, so the kwarg was senseless

* Add factory import source

This is necessary (but not sufficient) to get `as_dataclass_node` decorated classes to pickle. The field name is a bit off compared to `Function` and `Macro`, as now we decorate a class definition instead of a function definition, but it's close.
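
Background on why such classes fail to pickle in the first place: pickle stores classes "by reference", i.e. by module plus qualified name, and a class manufactured by a factory has no matching module-level attribute. A minimal demonstration with a stand-in factory, not the real decorator:

```python
import pickle


def make_node_class(cls):
    # Stand-in for a class-factory decorator; it builds a new class at
    # call time, the way as_dataclass_node wraps a class definition.
    return type(cls.__name__ + "Node", (cls,), {})


class Coordinates:
    pass


node_cls = make_node_class(Coordinates)

# On load, pickle re-imports the class via getattr(module, qualname);
# here no attribute "CoordinatesNode" exists on the module, so it fails:
try:
    pickle.dumps(node_cls())
except pickle.PicklingError as err:
    print(err)  # attribute lookup CoordinatesNode on __main__ failed
```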

* Bring the dataclass node in line with function and macro

* Leverage new pyiron_snippets.factory stuff to find the class

* Mangle the stored dataclass qualname so it can be found later
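
The general shape of such a fix is to adjust the manufactured class's metadata and register it on its module, so that pickle's lookup succeeds. A sketch of the technique, not pyiron_workflow's exact mangling scheme:

```python
import sys


def make_node_class(cls):
    node_cls = type(cls.__name__ + "Node", (cls,), {})
    # Point the class back at its defining module and store it there under
    # a name matching its __qualname__, so pickle's lookup
    # (getattr(module, qualname)) finds this exact object on load.
    node_cls.__module__ = cls.__module__
    node_cls.__qualname__ = node_cls.__name__
    setattr(sys.modules[cls.__module__], node_cls.__name__, node_cls)
    return node_cls
```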

* Add tests

* Update docs examples to reflect new naming

* Update snippets dependency

* [dependabot skip] Update env file

* Use new pyiron_snippets syntax consistently

* Expose `as_dataclass_node` in the API

Now that it's pickling as well as anything else
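
Hypothetical usage once the decorator is public; the exact import path is an assumption:

```python
from pyiron_workflow import as_dataclass_node  # assumed import location


@as_dataclass_node
class Coordinates:
    x: float = 0.0
    y: float = 0.0
```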

* Format black

---------

Co-authored-by: pyiron-runner <pyiron@mpie.de>

* [patch] Fall back on cloudpickle (#411)

* [patch] Fall back on cloudpickle

When the pickle backend fails

* Format black

---------

Co-authored-by: pyiron-runner <pyiron@mpie.de>

* [patch] Make pickle the default storage backend (#412)

* [minor] Make pickle the default storage backend

* Format black

* Fall back on loading any storage contents

Regardless of what the specified storage backend was.
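
A sketch of that loading fallback; the loader mapping and error handling are invented for illustration:

```python
def load_any(path, loaders):
    """Try every known backend loader, not only the requested one."""
    errors = {}
    for name, loader in loaders.items():
        try:
            return loader(path)
        except Exception as err:  # this backend can't read it; try the next
            errors[name] = err
    raise RuntimeError(f"no storage backend could load {path}: {errors}")
```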

* Format black

---------

Co-authored-by: pyiron-runner <pyiron@mpie.de>

* Format black

* Update pyiron deps

* [dependabot skip] Update env file

---------

Co-authored-by: pyiron-runner <pyiron@mpie.de>