Currently, many of our test cases use only very basic input matrices (e.g. via `torch.ones` or `torch.arange`). These are convenient for asserting results afterwards, but they leave many edge cases undiscovered, and we miss the chance to test with the large input data that heat is actually made for.
It would therefore be good to create a test environment where tests can be generated in a random manner.
One way to achieve this would be to first choose an implementation of the algorithm we want to test against (in our case probably `numpy`).
Then we only need to specify a shape for the input tensor. The test would:

1. create the tensor with random values,
2. split it along one (random?) axis,
3. apply the heat implementation of the algorithm under test,
4. resplit the result to `None`.

This way we can assert that the result is close to the `numpy` result. This approach needs little user input per test and would eventually cover many different cases. The downsides are that we cannot check whether the tensor is correctly distributed after the calculation and before the resplit to `None`, and that we depend on the random number generation.
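A minimal sketch of such a randomized comparison test could look like the following. It uses only `numpy` so it runs without an MPI setup; in the real version, the candidate side would wrap the data with `ht.array(data, split=axis)` for a randomly chosen axis, run the heat operation, and resplit to `None` before comparing (the helper and function names here are illustrative, not an existing heat API):

```python
import numpy as np

def random_comparison_test(op_candidate, op_reference, shape, rtol=1e-5, seed=None):
    """Create a random input of the given shape, apply both implementations,
    and assert that the results are close.

    In the actual heat test, op_candidate would convert the data via
    ht.array(data, split=axis) with a randomly chosen split axis, apply the
    heat operation, and resplit the result to None before the comparison.
    """
    rng = np.random.default_rng(seed)
    data = rng.standard_normal(shape)
    expected = op_reference(data)   # reference result, e.g. plain numpy
    result = op_candidate(data)     # implementation under test
    np.testing.assert_allclose(result, expected, rtol=rtol)
    return True

# Example usage: test a hand-rolled row sum against numpy's own sum.
def naive_row_sum(a):
    out = np.zeros(a.shape[0])
    for i, row in enumerate(a):
        for x in row:
            out[i] += x
    return out

random_comparison_test(naive_row_sum, lambda a: a.sum(axis=1), shape=(50, 20), seed=0)
```

With a fixed `seed` per test run, a failing random case stays reproducible, which mitigates the dependence on random number generation mentioned above.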