For context, we have a classification attribute named "Implementation Approach," with two options: Realistic and Stripped. This attribute is an attempt to distinguish implementations that are representative of real-world usage (Realistic) from implementations that have been purposefully tuned to the particulars of our benchmark (Stripped).
Judging the implementation approach is, by its nature, a nuanced matter. There is considerable gray area between a realistic test and a test that has been carefully hand-tuned to our test cases. For example, for Framework X, is a routing trie considered realistic or stripped? We expect classification to be determined on a case-by-case basis, in most cases in response to community analysis of the implementation.
Despite the inevitable ambiguity, we have already identified some implementations that we believe qualify as "stripped." (And one more we plan to reclassify imminently.)
Since the addition of the Implementation Approach attribute, several conversations have raised a valid question: "Why even include stripped tests? What is the value in doing so?"
My general disposition in accepting test implementations has been very tolerant. To date, we've rejected only a small number of them. I've been of the opinion that as long as the data can be filtered down to the options relevant to the reader, more data is not a bad thing. That said, we want the results to represent a high-water mark for production-grade deployments, and a stripped test is probably not production-grade. So the argument for not showing this particular portion of the results data is quite compelling.
My current thinking is that we should hide stripped implementations by default within the results web site. The data remain available if the reader elects to unhide stripped tests, but they won't clutter the default view.
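To make the proposed default concrete, here is a minimal sketch in TypeScript of how such a filter might behave. All names here (`TestResult`, `visibleResults`, `showStripped`) are hypothetical illustrations, not the actual results web site code.

```typescript
// Hypothetical sketch of default-hidden filtering; names and shapes
// are assumptions, not the real results site implementation.

type ImplementationApproach = "Realistic" | "Stripped";

interface TestResult {
  framework: string;
  approach: ImplementationApproach;
  requestsPerSecond: number;
}

interface FilterOptions {
  // Defaults to false, so Stripped tests are hidden unless opted into.
  showStripped: boolean;
}

function visibleResults(
  results: TestResult[],
  options: FilterOptions = { showStripped: false }
): TestResult[] {
  // Exclude Stripped implementations unless the reader elects to unhide them.
  return results.filter(
    (r) => options.showStripped || r.approach !== "Stripped"
  );
}
```

Under this approach, the default view stays uncluttered, and readers who want the full data set would simply opt in (here, by passing `{ showStripped: true }`).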
I'm going to leave this open for a few days before making any changes, on the off chance there are strong opposing opinions here.