
Implementation for make_s_curve #968

Open
Dan-Yeh opened this issue Apr 3, 2023 · 2 comments
Comments

Dan-Yeh commented Apr 3, 2023

Hi guys,

I implemented the make_s_curve function and corresponding tests for it in #967.

However, one of the tests is failing consistently, even though I have verified the values in the outputs and the computation graph. Appreciate any feedback or suggestions on how to resolve this issue.

The failing test:

@pytest.mark.parametrize(
    "generator",
    [
        dask_ml.datasets.make_blobs,
        dask_ml.datasets.make_classification,
        dask_ml.datasets.make_counts,
        dask_ml.datasets.make_regression,
        dask_ml.datasets.make_s_curve,
    ],
)
def test_deterministic(generator, scheduler):
    a, t = generator(chunks=100, random_state=10)
    b, u = generator(chunks=100, random_state=10)
    assert_eq(a, b)
    assert_eq(t, u)

Environment:

Dask version: 2023.3.2
Dask-ML version: 2022.5.27
Python version: 3.10.9
Operating System: OSX
Install method (conda, pip, source): pip

Reproducible example

import numpy as np
import dask.array as da
from dask.array.utils import assert_eq
import dask_ml

def make_s_curve(
    n_samples=100,
    noise=0.0,
    random_state=None,
    chunks=None,
):
    rng = dask_ml.utils.check_random_state(random_state)

    t_scale = 3 * np.pi * 0.5
    t = rng.uniform(low=-t_scale, high=t_scale, size=n_samples, chunks=(chunks,))
    X = da.empty(shape=(n_samples, 3), chunks=(chunks, 3), dtype="f8")
    X[:, 0] = da.sin(t)
    X[:, 1] = rng.uniform(low=0, high=2, size=n_samples, chunks=(chunks,))
    X[:, 2] = da.sign(t) * (da.cos(t) - 1)

    if noise > 0:
        X += rng.normal(scale=noise, size=X.shape, chunks=X.chunks)

    return X, t


if __name__ == '__main__':
    a, t = make_s_curve(chunks=100, random_state=10)
    b, u = make_s_curve(chunks=100, random_state=10)
    assert_eq(a, b)
    assert_eq(t, u)

Traceback

Traceback (most recent call last):
  File "/Users/dask_mltest.py", line 30, in <module>
    assert_eq(a, b)
  File "/Users/venv/lib/python3.9/site-packages/dask/array/utils.py", line 304, in assert_eq
    a, adt, a_meta, a_computed = _get_dt_meta_computed(
  File "/Users/venv/lib/python3.9/site-packages/dask/array/utils.py", line 259, in _get_dt_meta_computed
    _check_dsk(x.dask)
  File "/Users/venv/lib/python3.9/site-packages/dask/array/utils.py", line 210, in _check_dsk
    assert not non_one, non_one
AssertionError: {('uniform-c022afab3445a4ac294ad46da60634e2', 0): 2}
TomAugspurger (Member) commented:

Thanks. It might be a while before I can look at this.

_check_dsk is verifying things about your task graph. It looks like this is saying you have multiple layers that use the same key. I'm not sure if you're making your own layers / keys, but if you are you'll want to use dask.base.tokenize on all the arguments going into whatever that function is.
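The `dask.base.tokenize` suggestion can be sketched like this. The helper name and argument list below are hypothetical; only `tokenize` itself is the real dask utility. It deterministically hashes its inputs, so identical arguments always yield the same key prefix and any changed argument yields a fresh one:

```python
from dask.base import tokenize

def s_curve_layer_name(n_samples, noise, random_state, chunks):
    # Hypothetical helper: derive a layer/key prefix unique to this
    # exact set of arguments. Passing every argument that influences
    # the computation into tokenize avoids accidental key collisions.
    return "make_s_curve-" + tokenize(n_samples, noise, random_state, chunks)

# Same arguments -> same name; any changed argument -> a new name.
print(s_curve_layer_name(100, 0.0, 10, 100))
```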


Dan-Yeh commented Apr 21, 2023

Thanks for your reply.

Yes, _check_dsk verifies that no key appears in more than one layer of the task graph.
But here the overlap is inherent to the computation: the same t array feeds two different columns of X.

Task Graph and Details of Layers

(task graph visualization, rendered in the original issue)

The layer items from the output:

Layer: empty_like-848136d20f353a621c3fb6120e93a97c
('empty_like-848136d20f353a621c3fb6120e93a97c', 0, 0)

Layer: setitem-af567baa0d133fb2414b22009feedc71
('uniform-92c494621cf5b5df81f5fc85445fdfbf', 0)
('sin-b72ce9449b1942fdf9eade38a01fb89e', 0)
('setitem-af567baa0d133fb2414b22009feedc71', 0, 0)

Layer: setitem-c722836162c6db55933255d42e1117d9
('uniform-cd1ff3741c1350d3149e974c0a71b242', 0)
('setitem-c722836162c6db55933255d42e1117d9', 0, 0)

Layer: setitem-63195b1fdf99be7d4bf37a116ed79ef1
('mul-d7bb22edc92ff9263d91f04ac910d2d8', 0)
('uniform-92c494621cf5b5df81f5fc85445fdfbf', 0)
('sign-6d3aa1923db2461ffc8668e5d496fcbc', 0)
('cos-514e590b2a8b5ed03c3f5dc4551ff230', 0)
('sub-5397dd1b68f17458187bee3512c849ca', 0)
('setitem-63195b1fdf99be7d4bf37a116ed79ef1', 0, 0)

Note that ('uniform-92c494621cf5b5df81f5fc85445fdfbf', 0) appears in both layers 2 and 4, which is the expected behavior: t is reused for the first and third columns of X.
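The overlap can be reproduced on a toy graph. This is a sketch (the shapes and column assignments are made up) that counts how often each key occurs across the layers of the array's HighLevelGraph:

```python
from collections import Counter
import dask.array as da

# Minimal sketch mirroring make_s_curve: one random array `t` feeds
# two separate setitem assignments into `X`.
t = da.random.RandomState(10).uniform(size=100, chunks=100)
X = da.empty(shape=(100, 2), chunks=(100, 2), dtype="f8")
X[:, 0] = da.sin(t)
X[:, 1] = da.sign(t) * (da.cos(t) - 1)

# Count how often each key appears across all layers of the graph.
counts = Counter(
    key for layer in X.__dask_graph__().layers.values() for key in layer
)
# In affected dask versions, the chunk key of `t` shows up in both
# setitem layers, which is exactly what trips _check_dsk.
shared = [key for key, n in counts.items() if n > 1]
```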

Also compare sklearn's make_s_curve implementation.

If we only want to check that the outputs are deterministic, maybe it's OK to skip the overlapping-key graph check here?
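If value-determinism is all the test needs, dask's assert_eq appears to support exactly that via its check_graph flag. A minimal sketch, assuming check_graph=True is the default that invokes _check_dsk:

```python
import dask.array as da
from dask.array.utils import assert_eq

# Two arrays built from the same seed: compare the computed values
# only, skipping the duplicate-key graph validation in _check_dsk.
a = da.random.RandomState(10).uniform(size=100, chunks=100)
b = da.random.RandomState(10).uniform(size=100, chunks=100)
assert_eq(a, b, check_graph=False)
```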
