Suggestion: add a table showing whether each type can be validated or not #1025
Comments
@samchon Hi, just going to chime in on this one. While I certainly think this project could benefit from additional benchmarks and tests, I do not think these should be submitted by library authors: the benefit and value of community projects like this comes primarily from independent external contributors submitting tests without author intervention, specifically to establish an accurate lens on performance and to mitigate the potential for bias. On that point, if the typia benchmarks are being put forth (I have reviewed them independently, and have submitted TypeBox schematics to typia here, here and here for alignment and comparative measurement), I do not feel they would be a good candidate for cross-library benchmarking or testing, for the following reasons:
Also, for the reporting table, I feel quite strongly about not showing RED marks next to each project listed here, particularly if the testing criteria depend on each library adopting the specific assertion semantics implemented by typia (there is much room for interpreting validation semantics across libraries).

Again, while I am certainly for the idea of adding benchmarks (or tests) here, I feel these should ideally be defined independently (and openly), with the validation criteria made clear and set low enough that all currently submitted libraries can participate. In addition, if more sophisticated schematics are deemed warranted (and I have some interest in that), my preference would be to omit failing projects from result tables rather than marking them RED, which may be publicly discouraging to project authors who have contributed their free time and effort to this arena.

For establishing a "minimum viable suite" of schematics, I think what will be less divisive is a collaborative effort where interested parties can define clearly what the schematics are, what they measure, and what techniques may be applicable to attain better performance (possibly through GH discussions). That would set fair and reasonable performance criteria, and hopefully help other developers attain robust, high-performance assertions in their respective projects, mine included. Just some of my thoughts on this one.
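To make that concrete, one way such an openly defined schematic could be expressed is as plain data: the inputs every participating validator must accept, and those it must reject. A minimal sketch in TypeScript (the `TestCase` shape and field names are hypothetical, not an existing format in this repository):

```typescript
// Hypothetical shared test-case format (illustrative only).
// Each case names a schematic, lists the inputs every participating
// validator must accept, and the inputs it must reject. Libraries
// opt in by providing a validator for the named schematic.
interface TestCase {
  name: string;
  valid: unknown[];   // every validator must accept these
  invalid: unknown[]; // every validator must reject these
}

const optionalProperty: TestCase = {
  name: "object with an optional property",
  valid: [{ id: "a" }, { id: "a", age: 30 }],
  invalid: [{ age: 30 }, { id: 1, age: 30 }],
};
```

Defining the criteria as data rather than as library-specific assertions would leave each project free to implement the check in whatever way performs best.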
@samchon I appreciate your involvement and I know you have put a lot of thought into this. Thank you for your contribution! I largely agree with @sinclairzx81 that the idea is to keep the tests as impartial as possible; that is why they have remained so primitive up to this point. I think the way forward is to discuss each proposed test addition as a separate issue, and to evaluate what value each test adds and how it affects the rest of the suite. Again, I am not against change, but we need to think about it more holistically. To be honest, my initial test suite was not thought out much at all: I just cobbled together some rough ideas and off it went to be released.
@marcj do you have any input? I remember you had some strong opinions on this before. Thanks!
I think how reliably and accurately a type validator validates various types is more important than how fast or slow it is.
However, adding many more benchmark graphs for the various types does not seem like a good approach.
Therefore, how about adding a table like the one below?
If you agree, I can provide many more test types to validate, and also implement the validation schemas for each library (see the sketch after the table for what that might look like).
| Type | typia | TypeBox | ajv | io-ts | zod | C.V. |
|------|-------|---------|-----|-------|-----|------|
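As a sketch of what "validation schemas for each library" could look like, here is one illustrative shape checked with three of the libraries listed above (the `Member` shape and variable names are assumptions for illustration, not the repository's actual benchmark schematics):

```typescript
// Sketch only: the `Member` shape is an illustrative assumption.
import { Type } from "@sinclair/typebox";
import { Value } from "@sinclair/typebox/value";
import Ajv from "ajv";
import { z } from "zod";

const input: unknown = { id: "a", age: 30 };

// TypeBox: compose a JSON-Schema-compatible schema, check with Value.Check.
const TbMember = Type.Object({ id: Type.String(), age: Type.Number() });
const typeboxOk = Value.Check(TbMember, input);

// ajv: compile a plain JSON Schema into a validation function.
const ajv = new Ajv();
const ajvValidate = ajv.compile({
  type: "object",
  properties: { id: { type: "string" }, age: { type: "number" } },
  required: ["id", "age"],
});
const ajvOk = ajvValidate(input);

// zod: declare the schema fluently, then check with safeParse.
const ZMember = z.object({ id: z.string(), age: z.number() });
const zodOk = ZMember.safeParse(input).success;

console.log({ typeboxOk, ajvOk, zodOk }); // all true for this input

// typia, by contrast, generates its checker from the TypeScript type
// itself at compile time (it requires typia's transformer setup):
//   typia.is<Member>(input)
```

A table cell would then record whether each library's validator accepts and rejects the right inputs for that type, independent of how the schema is authored.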