Configure TypeBox Compiler In Benchmarks #574
Also, can you investigate the performance degradation on the Object (Simple) benchmark in general? The image on your readme shows a TB degradation of around 30%, but running locally, I see TypeBox only around 15% faster than Ajv.

┌────────────────────────────┬────────────┬──────────────┬──────────────┬──────────────┬──────────────┐
│ (index)                    │ Iterations │ ValueCheck   │ Ajv          │ TypeCompiler │ Performance  │
├────────────────────────────┼────────────┼──────────────┼──────────────┼──────────────┼──────────────┤
│ Object_Box3D               │ 1000000    │ ' 1765 ms'   │ ' 61 ms'     │ ' 53 ms'     │ ' 1.15 x'    │
└────────────────────────────┴────────────┴──────────────┴──────────────┴──────────────┴──────────────┘

If I disable the NaN and Array Object checks, I get the following.

┌────────────────────────────┬────────────┬──────────────┬──────────────┬──────────────┬──────────────┐
│ (index)                    │ Iterations │ ValueCheck   │ Ajv          │ TypeCompiler │ Performance  │
├────────────────────────────┼────────────┼──────────────┼──────────────┼──────────────┼──────────────┤
│ Object_Box3D               │ 1000000    │ ' 2002 ms'   │ ' 60 ms'     │ ' 29 ms'     │ ' 2.07 x'    │
└────────────────────────────┴────────────┴──────────────┴──────────────┴──────────────┴──────────────┘

The TB benchmarks do not split runs between separate Node processes (so the degradation may be a result of one validator breaking runtime optimizations). However, the dedicated benchmark system I put together late last year does split runs across distinct Node processes, and it does show results in line with the configured compiler. You can review those results here: https://sinclairzx81.github.io/runtime-type-benchmarks/

I would like to understand why the Typia benchmarks report such low numbers for TypeBox for these relatively simple checks.
Now, you can optimize the wrapper provider of fastify.
I cannot configure the compiler in the Fastify Type Provider to ignore these checks by default, as it's unsafe to do so. However, users can configure the compiler to disable these checks through the policy settings. Note, Fastify recently took TypeBox as a direct dependency.
Anyway, separating each benchmark feature into an independent Node process is not simple work for me. Therefore, please wait for a while (maybe next week?).
Message encoding is often configurable. The following codecs are typical in both HTTP and Web Socket usage, as these encodings support NaN and Infinity values.

import * as msgpack from '@msgpack/msgpack'
{
const encoded = msgpack.encode({ nan: NaN, infinity: Infinity })
const decoded = msgpack.decode(encoded)
console.log(decoded) // { nan: NaN, infinity: Infinity } - unsafe
}
import * as cbor from 'cbor'
{
const encoded = cbor.encode({ nan: NaN, infinity: Infinity })
const decoded = cbor.decode(encoded)
console.log(decoded) // { nan: NaN, infinity: Infinity } - unsafe
}
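
For contrast, a plain JSON round trip does not preserve these values at all; JSON.stringify serializes NaN and Infinity to null, which is presumably why the concern matters most for binary codecs like the ones above. A minimal illustration:

{
  const encoded = JSON.stringify({ nan: NaN, infinity: Infinity })
  const decoded = JSON.parse(encoded)
  console.log(decoded) // { nan: null, infinity: null } - values silently replaced
}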
This is not necessary. I just want to see the TypeBox compiler aligned to the assertion policies used by Typia to get an accurate measurement on performance. This is to compare JIT to AOT for equivalent assertion logic under the current benchmarking infrastructure.
I think the generated check looks like this:

(input: any): input is ObjectAlias => {
const $io0 = (input: any): boolean =>
(null === input.id || "string" === typeof input.id) &&
"string" === typeof input.email &&
"string" === typeof input.name &&
(null === input.sex ||
1 === input.sex ||
2 === input.sex ||
"male" === input.sex ||
"female" === input.sex) &&
(null === input.age ||
("number" === typeof input.age &&
Number.isFinite(input.age))) &&
(null === input.dead || "boolean" === typeof input.dead);
return (
Array.isArray(input) &&
input.every(
(elem: any) =>
"object" === typeof elem && null !== elem && $io0(elem),
)
);
}

Anyway, while upgrading the benchmark program, I want to ask you something. Can you give me an idea? The current measurement (shown below) repeats the function calls. As I suspected over-fitting optimizations like return value caching, I added such repetition. However, I can't be sure whether my assumption (return value caching) is right or not. As you know, such repeated function calls can add extra cost and therefore damage exact benchmark measurement. Do you think such repetition is required? Or is removing all repetition better? Can you guide me about that?

typia/benchmark/internal/IsBenchmarker.ts Lines 22 to 46 in a4054b1
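
For context, a common alternative to repeating the call, which also guards against return value caching or dead-code elimination, is to fold each result into a sink that is read after timing completes. This is only a sketch of that idea; the measure wrapper and sink variable below are illustrative and not part of the existing IsBenchmarker.

let sink = 0 // accumulating results keeps the calls observable, so V8 cannot discard them

function measure(iter: number, op: () => boolean): number {
  const start = performance.now()
  for (let i = 0; i < iter; i++) sink += op() ? 1 : 0
  return performance.now() - start
}

const elapsed = measure(1_000_000, () => typeof { x: 1 } === "object")
console.log(elapsed, sink) // printing sink afterwards prevents the loop from being optimized away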
I wouldn't expect the additional function call to have much impact in the results. But if it is impacting results, it's better to highlight that impact and optimize in subsequent revisions.
Compute benchmarks should be extremely simple and only measure the elapsed time it takes to complete N iterations.

function benchmark_run(iter: number, op: Function) {
const start = performance.now()
for(let i = 0; i < iter; i++) op()
return performance.now() - start
}

There will be variability in the elapsed result for subsequent individual runs (due to V8 internals or other system tasks). To fix this you can take an average across multiple runs to yield a more stable / accurate result.

function benchmark_average(runs: number, iter: number, op: Function) {
const elapsed: number[] = []
for(let i = 0; i < runs; i++) elapsed.push(benchmark_run(iter, op))
return elapsed.reduce((acc, c) => acc + c, 0) / runs
}

The following is the usage.

const average = benchmark_average(10, 10_000_000, () => {
const [A, B] = [1, 2]
const _ = A + B
})
console.log(average) // 10 runs, 10 million iterations per run

If running across distinct node processes, the
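
As a rough sketch of that per-process setup (the script names here are hypothetical, and each script is assumed to print a single elapsed-time number), one way to do it with Node's child_process module:

import { execFileSync } from "node:child_process"

// hypothetical: one standalone script per validator, each printing its elapsed milliseconds
const scripts = ["bench-typebox.js", "bench-typia.js", "bench-ajv.js"]

for (const script of scripts) {
  // a fresh process per measurement prevents one validator from polluting another's V8 optimizations
  const output = execFileSync(process.execPath, [script], { encoding: "utf8" })
  console.log(script, Number(output.trim()), "ms")
}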
@sinclairzx81 https://github.com/samchon/typia/tree/features/benchmark/benchmark/results/AMD%20Ryzen%207%206800HS%20with%20Radeon%20Graphics

Benchmark results after separating each measurement into an independent process. Also, the configuration of TypeBox has been changed. Many categories are not revived yet because it is too hard to migrate them all at once, but they'll be revived someday.
That's interesting, Moltar's benchmark system also saw similar balancing improvements when they moved to distinct processes. The 20,000x delta is in line with the results I was seeing for Typia on my local machine when investigating this last year. Comparing the two benchmark systems side by side, they seem to line up correctly. Refer here for the static datasets used in the above benchmark if it helps to resolve the failing TB and Ajv tests (these are probably best expressed as templates if randomizing in Typia).
Could you please also update the assert benchmark as follows?

import { TSchema } from "@sinclair/typebox";
import { TypeCheck } from "@sinclair/typebox/compiler";
import { createAssertBenchmarkProgram } from "../createAssertBenchmarkProgram";
export const createAssertTypeboxBenchmarkProgram = <S extends TSchema>(
schema: TypeCheck<S>,
) =>
createAssertBenchmarkProgram((input) => {
if(schema.Check(input)) return input // added
const first = schema.Errors(input).First();
if (first) throw first;
return input;
});

This is documented on the TypeBox project here with the following description. Remember, TypeBox does not have a built-in assert function.

Options:
My preference would be option (A), as it compares Check()-before-Errors() performance against an inline Assert() (as implemented in Typia and Ajv). If the data is varied for the benchmark (50% correct, 50% incorrect), I'd expect TypeBox to report 50% of the performance of Typia due to the dynamic checks performed during diagnostic gathering.
Prepare #574 - benchmark in each process
@samchon Were you going to implement either option A or B?
Will accept option A, maybe tomorrow.
Cool :)
@sinclairzx81 Changed as you wanted, but I'm not sure whether this is right or not. Typia can do it with only one line, but TypeBox needs...
@samchon Thanks. And yes it's correct as far as implementing a standard
It's actually more flexible to keep these as separate functions. It's just a design principle TypeBox tries to follow.
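
As a small illustration of that flexibility (a sketch only, using the same TypeCompiler API shown earlier in this thread; the schema is just an example), the boolean Check() call can serve the hot path on its own, with Errors() consulted only when diagnostics are actually needed:

import { Type } from "@sinclair/typebox";
import { TypeCompiler } from "@sinclair/typebox/compiler";

const check = TypeCompiler.Compile(Type.Object({ x: Type.Number() })); // example schema

const assertExample = (input: unknown) => {
  if (check.Check(input)) return input;        // hot path: boolean check only
  const first = check.Errors(input).First();   // cold path: gather diagnostics on failure
  throw first;
};

console.log(assertExample({ x: 1 })); // passes and returns the input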
Can you please configure the TypeBox compiler for each is benchmark using the AllowNaN and AllowArrayObjects compiler settings? The following shows updates for the TypeBox ObjectSimple benchmark. This configuration aligns TypeBox to assert using the same assertion policies as Typia by omitting critical numeric and object array assertion checks. This configuration only applies to the is benchmarks. Formal documentation for these policy overrides can be found at the link below.

https://github.com/sinclairzx81/typebox/tree/literal#policies

Cheers
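
For reference, a hedged sketch of what this configuration might look like, based on the AllowNaN and AllowArrayObjects policy names above. The import path and exact property names are assumptions (they have shifted between TypeBox versions), so the policies link above remains the authoritative reference:

import { TypeSystem } from "@sinclair/typebox/system"; // assumed import path for the policy flags
import { TypeCompiler } from "@sinclair/typebox/compiler";
import { Type } from "@sinclair/typebox";

// assumed flag names: relax the NaN and array-object checks to mirror Typia's assertion policy
TypeSystem.AllowNaN = true;
TypeSystem.AllowArrayObjects = true;

// compilations performed after the flags are set should pick up the relaxed policies
const check = TypeCompiler.Compile(Type.Object({ value: Type.Number() }));
console.log(check.Check({ value: NaN })); // expected to be true once AllowNaN is enabled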