Hi Streams folks,
I'd like to get involved, and I agree that the most pressing thing is probably getting as many tests written as can be mustered, so that we can start moving forward relying on the test suite to confirm the correctness of changes, catch regressions, etc.
My current plan is to work on a branch in a fork of node that simply slings in tests. In the absence of another approach, I think it best to just start somewhere; reviews can tell me whether individual cases are relevant or not.
That being said, having looked at what's there, I'd like to agree on how to structure these additions. Many of the current tests seem to exercise lots of things in one file; the _writev test, for example, covers a few different encodings, then cork() and uncork(), which makes me a little nervous. The following seem like the important decisions:
- What is the scope of each test? Should each test do one thing, or exercise all the cases of a given feature?
- How explicit should tests be? There are any number of assertions one could make; given that the tests live in separate files, the temptation is to assert everything in each one, but can we instead assume that coverage accumulates across the set of files?
- What syntax should these files use? I've been leaning toward using fat arrows in the node repo, on the assumption that we want to cover the semantics of the current implementation; the sketch below uses them for concreteness.
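To make the scope question concrete, here's a minimal sketch of what a single-purpose test could look like: it checks only that cork() batches buffered writes into a single _writev() call, and nothing else. It's written as a standalone script against the public stream and assert APIs (a real node core test would also pull in the ../common helper), so the structure is just an assumption for illustration, not a proposal for the final style.

```js
'use strict';
const assert = require('assert');
const { Writable } = require('stream');

const writevCalls = [];

const w = new Writable({
  // _writev receives the buffered writes as an array of
  // { chunk, encoding } objects once the stream is uncorked.
  writev(chunks, callback) {
    writevCalls.push(chunks.map(({ chunk }) => chunk.toString()));
    callback();
  }
});

w.cork();
w.write('hello ');
w.write('world');

// Defer uncork() to the next tick, per the pattern in the stream
// docs, so all writes issued in this tick are batched together.
process.nextTick(() => {
  w.uncork();
  w.end();
});

w.on('finish', () => {
  // Exactly one _writev call, containing both chunks in order.
  assert.strictEqual(writevCalls.length, 1);
  assert.deepStrictEqual(writevCalls[0], ['hello ', 'world']);
});
```

A file like this asserts one behaviour, so a failure points straight at what broke; the encoding cases and the cork()/uncork() interplay from the existing test could each become their own small file along the same lines.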
Thanks,
Alex J Burke.