Fixed utilizing all cores for runForAsync #82
Conversation
@segabriel - please provide evidence that this PR fixes issue #57. Screenshots, statistics, anything.
What was wrong, and how did it improve?
I ran them locally. It looks like latency sky-rockets, but calls/sec do indeed stay the same.
RequestOneBenchmarks
After changes:
RequestManyLatencyBenchmarks
After changes:
The next point is the benchmarks from scalecube-services; I achieved the following results before these changes:
I added a magic number as the concurrency, then began varying it and achieved the following results: for
for
for
for
@artem-v, please propose your opinion/solution for it.
() -> {
  long start = i * countPerThread;
  Flux.fromStream(LongStream.range(start, start + countPerThread).boxed())
      .flatMap(unitOfWork::apply, 64, Integer.MAX_VALUE)
Cool finding!
But why not just `Runtime.getRuntime().availableProcessors()`?
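For reference, the call the reviewer is suggesting is the standard JDK way to read the number of logical cores the JVM sees (plain-Java sketch, independent of the project):

```java
public class CoreCount {
    // Number of logical processors available to the JVM -- the value
    // the reviewer suggests using as the flatMap concurrency instead
    // of a hard-coded constant.
    public static int coreCount() {
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        System.out.println("Available processors: " + coreCount());
    }
}
```

Note that this value can change over the lifetime of the JVM (e.g. in containers), so it is usually read once at startup.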
By default, this value is 256, and as I understand it, it specifies how many inner publishers the flatMap operator processes simultaneously (in other words, how many publishers flatMap subscribes to at once).
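To illustrate what that concurrency argument bounds, here is a plain-JDK sketch (no Reactor; class and method names are illustrative, not part of the project) where a `Semaphore` plays the role flatMap's concurrency parameter plays: no more than `concurrency` units of work are ever in flight at once.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.LongStream;

public class BoundedConcurrency {
    // Runs one task per element, but never more than `concurrency`
    // tasks concurrently -- the same role flatMap's second argument
    // plays in Reactor. Returns the maximum observed in-flight count.
    public static int maxObservedInFlight(long elements, int concurrency)
            throws InterruptedException {
        Semaphore permits = new Semaphore(concurrency);
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger maxInFlight = new AtomicInteger();
        ExecutorService pool = Executors.newCachedThreadPool();
        CountDownLatch done = new CountDownLatch((int) elements);
        for (long i : LongStream.range(0, elements).toArray()) {
            permits.acquire(); // blocks once `concurrency` tasks are running
            pool.execute(() -> {
                int now = inFlight.incrementAndGet();
                maxInFlight.accumulateAndGet(now, Math::max);
                inFlight.decrementAndGet();
                permits.release();
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        return maxInFlight.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(maxObservedInFlight(1000, 64));
    }
}
```

With Reactor's default of 256 and a benchmark driven by fewer threads than that, the limit is never hit, which is consistent with the comment above.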
Also, I think we should add the ability to specify it as a benchmark settings param.
Let's add a parameter.
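One possible shape for such a parameter (a hypothetical sketch only; the property name, default, and class are assumptions, not the project's actual settings API):

```java
public class FlatMapConcurrencySetting {
    static final int DEFAULT_CONCURRENCY = 64;

    // Reads the concurrency from a system property so benchmark runs
    // can override it, e.g. -DflatMapConcurrency=256. Falls back to
    // the default when the property is absent or malformed.
    public static int flatMapConcurrency() {
        String raw = System.getProperty("flatMapConcurrency");
        if (raw == null) {
            return DEFAULT_CONCURRENCY;
        }
        try {
            return Integer.parseInt(raw);
        } catch (NumberFormatException e) {
            return DEFAULT_CONCURRENCY;
        }
    }

    public static void main(String[] args) {
        System.out.println(flatMapConcurrency());
    }
}
```

The read value would then be passed as the second argument to `flatMap` in place of the hard-coded 64.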
Ok, but adding the new param will be done in a new PR.
Let's add it in this PR.
fixed #57