Binary Predicate Test Dispatching #1085
One initial question about pytest machinery.
(Polygon, Polygon): object_dispatch(polygon_polygon_dispatch_list),
}
Can't you rely on the machinery of @pytest.mark.parametrize()
to build up the Cartesian product of test combinations rather than using nested loops?

types = [Point, LineString, Polygon]
predicates = [disjoint, intersects, overlaps, contains, contains_properly, within, etc.]

@pytest.mark.parametrize("lhs_type", types)
@pytest.mark.parametrize("rhs_type", types)
@pytest.mark.parametrize("predicate", predicates)
@pytest.mark.parametrize("data", the_data)
def test(...
I started out just parameterizing the cases in the fashion you describe. Because each of the feature combinations is predefined, I needed a separate test with separate parameterizations for each exclusive set of features: (point, linestring), (point, polygon), etc. Because I wanted logging and reporting on the test results, it was much easier to move that into a single test with a single list of features used by the test. The primary contribution of this PR is the hand-coded list in `features`.
I remember the specific reason that I parametrize the tests according to the pattern in https://github.com/rapidsai/cuspatial/pull/1085/files#diff-7064c17cb647ac9ab6d1e6f5c4c1726f68fd5b20205dd85f1d34684d53fd3ea6R20-R35 instead of using the pattern you described. The simple answer is that you cannot parametrize a pytest fixture with `@pytest.mark.parametrize`, only native iterables. I am still using the `@pytest.mark.parametrize` functionality; I've just moved the generation of the tests into the fixture instead of passing a list to the `@pytest.mark.parametrize` call.
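For reference, here is a minimal sketch of fixture-driven parametrization, the mechanism alluded to above (the names are illustrative, not the PR's actual code): pytest generates one test instance per fixture parameter via the fixture's `params` argument.

```python
import pytest


# Minimal sketch, not the PR's actual code: each entry in `params`
# produces one test instance for every test that requests the fixture.
@pytest.fixture(params=["point-point", "point-linestring", "point-polygon"])
def feature_pair(request):
    # The real suite would look up a predefined (lhs, rhs) feature tuple;
    # here we simply return the label itself.
    return request.param


def test_predicate(feature_pair):
    assert feature_pair is not None
```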
That issue is about documenting all special cases. How does this PR close that?
#1036 is "Document all simple and single geometric special cases #1036" which specifies two follow up issues, one for multiple geometries and polygons with rings, and another for true "special cases" as we discover them. This PR closes #1036 by including 34 specific tests that cover all geometric configurations of the feature combinations for (point, linestring, polygon), which is presented and executed from the |
So you are replacing "document" with "test"?
Really cool!
out_file = open("test_binpred_test_dispatch.log", "w")

@xfail_on_exception  # TODO: Remove when all tests are passing
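For context, a minimal sketch of what a decorator like `xfail_on_exception` could look like, inferred from its name and the surrounding discussion rather than taken from the PR's exact code: it converts any exception raised by the wrapped test into an xfail so that unimplemented predicates don't break CI.

```python
import functools

import pytest


# Sketch only, assuming the decorator's job is to turn unexpected
# exceptions into xfails rather than hard failures.
def xfail_on_exception(test_fn):
    @functools.wraps(test_fn)
    def wrapper(*args, **kwargs):
        try:
            test_fn(*args, **kwargs)
        except Exception as e:
            # pytest.xfail raises an XFailed outcome, marking this
            # test instance as an expected failure.
            pytest.xfail(f"Test raised an exception: {e}")

    return wrapper
```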
All tests are passing now ;)
expected = gpd_pred_fn(gpdrhs)
assert (got.values_host == expected.values).all()

# The test is complete, the rest is just logging.
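For readers unfamiliar with the pattern in the snippet above: the test computes the predicate both with cuSpatial and with GeoPandas, then compares element-wise. A self-contained sketch under assumed inputs (the geometry construction and the choice of `intersects` are illustrative; the PR dispatches many predicates through this pattern):

```python
import cuspatial
import geopandas
from shapely.geometry import Point, Polygon

# Build equivalent host (GeoPandas) and device (cuSpatial) GeoSeries.
gpdlhs = geopandas.GeoSeries([Point(0, 0)])
gpdrhs = geopandas.GeoSeries([Polygon([(-1, -1), (1, -1), (0, 1)])])
lhs = cuspatial.from_geopandas(gpdlhs)
rhs = cuspatial.from_geopandas(gpdrhs)

# Run the same predicate on both and compare element-wise.
got = lhs.intersects(rhs)             # cudf Series on the GPU
expected = gpdlhs.intersects(gpdrhs)  # pandas Series on the host
assert (got.values_host == expected.values).all()
```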
I appreciate the logging-to-file part, which can prove useful when bulk-debugging hundreds of failures on a new feature. But when this system is online in CI, it would also be helpful to have a "CI mode" that sends the results to stdout. What do you think?
Because there will be more binary predicate PRs to cover complex (multi-) geometry types, how would you feel about filing an issue for this feature? I'm certainly fine with implementing it, but I'd rather not make it a dependency that holds up #1064, which needs to be merged.
That's totally fine with me
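A minimal sketch of what the suggested "CI mode" could look like (the `CUSPATIAL_TEST_CI` variable name is a placeholder for illustration, not an agreed-upon interface): switch the log destination to stdout when an environment variable is set.

```python
import os
import sys

# Sketch only: in "CI mode" stream diagnostics to stdout so they show
# up in the CI job output; otherwise keep writing the local log file.
if os.environ.get("CUSPATIAL_TEST_CI"):
    out_file = sys.stdout
else:
    out_file = open("test_binpred_test_dispatch.log", "w")
```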
""", | ||
point_polygon, | ||
point_polygon, | ||
), |
@harrism the "Documentation" is primarily here. I can move it to the binary predicates design document if you like, as well.
OK!
Small suggestion. Not blocking the merge.
/merge
Closes #1046
Closes #1036
This PR adds a new binary predicate test dispatching system. The test dispatcher creates one test for each ordered pair of features from the set (point, linestring, polygon) and each predicate that feature tuple can be applied to. The combination of 9 predicates and 33 feature configurations creates 297 tests that cover all possible combinations of simple features and their predicates.
While development is underway, the test dispatcher automatically `xfails` any test that fails or hasn't been implemented yet, in order to pass CI.

The test dispatcher outputs diagnostic results during each test run. An output file, `test_binpred_test_dispatch.log`, is created containing all of the failing tests: each test's name, a visual description of the feature tuple and relationship, the shapely objects used to create the test, and the runtime name of the test, so it is easy for a developer (myself) to identify which test failed and rerun it. The dispatcher also creates four .csv files during runtime that collect the pass/fail result of each test relative to which predicate is being run and which pair of features is being tested. These .csv files can be displayed using `tests/binpred/summarize_binpred_test_dispatch_results.py`, which will output a dataframe for each CSV file.

Without additional modifications, the test dispatcher produces 97 passing results and 200 xfailing results. In a shortly upcoming PR, updates to the basic predicates and binpred test architecture increase the number of passing results to 274, with 25 xfails. These PRs are separated for ease of review.
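A hedged sketch of how the four result CSVs might be summarized, similar in spirit to `summarize_binpred_test_dispatch_results.py` (the file names below are illustrative placeholders; the dispatcher's actual output names may differ):

```python
import pandas as pd

# Sketch only: load each per-category result CSV and print it as a
# dataframe. File names are placeholders, not the PR's actual outputs.
for path in [
    "predicate_passes.csv",
    "predicate_fails.csv",
    "feature_passes.csv",
    "feature_fails.csv",
]:
    df = pd.read_csv(path)
    print(f"== {path} ==")
    print(df)
```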
Checklist