Now that dnppy has integrated travis-ci documentation building, the environment for automated testing is already in place. All that's missing are tests for the functions.
If tests that don't rely on local files are developed for dnppy, it would be easy to run them remotely as well, so you would know when a push breaks the code.
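A minimal sketch of what such a file-free test could look like; `celsius_to_kelvin` is a hypothetical pure function standing in for any dnppy routine that doesn't touch disk, so the test can run on a remote CI worker with no data checkout:

```python
# Hypothetical pure function standing in for a dnppy routine
# that needs no local files.
def celsius_to_kelvin(c):
    """Convert degrees Celsius to Kelvin."""
    return c + 273.15

def test_celsius_to_kelvin():
    # Hand-checked values: 0 C = 273.15 K, 100 C = 373.15 K
    assert celsius_to_kelvin(0) == 273.15
    assert celsius_to_kelvin(100) == 373.15

test_celsius_to_kelvin()
print("ok")
```

A test runner like pytest would discover `test_`-prefixed functions like this automatically, and travis-ci would fail the build on any assertion error.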
Is there a way to create minimal examples for functions that are built to operate on large datasets? Say, call the function with a really small dataset whose expected output you can check by hand? The data doesn't have to be real, of course; for one of my libraries right now, all I do is hand-check results I know are right. How plausible is it to generate small pseudo-datasets you can verify?
I suppose it's possible to create some tiny data subsets for some of the raster functions, like little 10x10 pixel images. Some of our functions are built to download and convert data formats from a variety of NASA DAACs. It is important to test these as well, but I don't see any way to create tiny test datasets for those functions.
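For the raster case, the 10x10 idea could be sketched roughly like this. `mask_below` is a hypothetical stand-in for a dnppy raster routine (one that sets pixels under a threshold to NoData); the point is that the synthetic image is small enough to verify every pixel by hand:

```python
import numpy as np

# Hypothetical stand-in for a dnppy raster routine: set pixels
# below a threshold to NoData (represented here as NaN).
def mask_below(raster, threshold):
    out = raster.astype(float).copy()
    out[out < threshold] = np.nan
    return out

# Tiny synthetic 10x10 "image": values 0..99 in row-major order,
# easy to reason about by hand.
tiny = np.arange(100).reshape(10, 10)

masked = mask_below(tiny, 50)

# Hand check: the top five rows (values 0-49) should be NoData,
# the bottom five rows untouched.
assert np.isnan(masked[:5]).all()
assert (masked[5:] == tiny[5:]).all()
print("passed")
```

The download/conversion functions are harder, since there's no way to shrink what a DAAC server sends back; those would need either mocked responses or a separate, network-dependent test tier.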