LocalForwarderChannel #723
Conversation
Updated build so IntelliJ can see generated code. Removed method overload from interface (bad practice). Implemented simple LocalForwarderTelemetryTransmitter and Factory (doesn't have the retry logic like the InProcess factory).
updated build file to shadow services provided with grpc libraries.
Also surfaces BaseData version number. Deprecated *Telemetry classes whose *Data classes were also deprecated. Some additional fields which were previously hidden are now surfaced.
Fixed a bug with QuickPulse grabbing CPU data before the first datum is initialized; i.e., skip if null. Changed logic so a no-op channel is used if a custom channel is misconfigured and cannot be initialized.
…iguration. This should still work if the channel is set after configuration. Use singleton for nop channel and don't actually set channel in TelemetryConfiguration, just return NopTelemetryChannel instance. Updated test to check for nop channel instead of null.
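The fallback described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the real `TelemetryChannel` interface and `NopTelemetryChannel` in ApplicationInsights-Java have different signatures, and the config handling here is hypothetical.

```java
import java.util.Map;

// Illustrative-only interface; the real one in ApplicationInsights-Java differs.
interface TelemetryChannel {
    void send(String telemetry);
}

// Singleton no-op channel, returned when a custom channel cannot be initialized.
enum NopTelemetryChannel implements TelemetryChannel {
    INSTANCE;

    @Override
    public void send(String telemetry) {
        // intentionally drop telemetry
    }
}

final class ChannelFactory {
    // Try to build the configured channel; on misconfiguration, fall back to the
    // no-op singleton instead of leaving the configuration with a null channel.
    static TelemetryChannel createChannel(Map<String, String> config) {
        try {
            final String endpoint = config.get("endpoint");
            if (endpoint == null || endpoint.isEmpty()) {
                throw new IllegalArgumentException("endpoint is required");
            }
            return new TelemetryChannel() {
                @Override
                public void send(String telemetry) {
                    System.out.println("sending to " + endpoint + ": " + telemetry);
                }
            };
        } catch (RuntimeException e) {
            // misconfigured channel: return the singleton no-op channel
            return NopTelemetryChannel.INSTANCE;
        }
    }
}
```

Returning the singleton (rather than storing it in the configuration) keeps the configured-channel slot clean, which matches the commit's note about not actually setting the channel in TelemetryConfiguration.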
exposed some methods for testing
…ptiondetails to be visible for testing. Corrected one precondition which had casing issues.
wrote test for transmitter.
exposed some methods for inserting mocks. removed final from some classes so they could be mocked.
pulling in TPN and other minor changes
@littleaj could you please close the comments that were already resolved? Also, are we going to have the experimental retries? I believe we decided that we will go with it :) Once you close them I can take another pass tomorrow and sign off. Also @littleaj, when you merge this one I think we should not use a squash commit. It would be good to see the history of changes in this channel in master too.
```java
telemetry.getContext().getProperties().put("DeveloperMode", "true");
}

if (telemetrySampler != null) {
```
This is no longer needed, by the way. We don't have the old sampling mechanism; sampling is now done via a telemetry processor. Since this refactor has come in, it's good to just remove this.
```java
Preconditions.checkNotNull(channelBuilder, "channelBuilder");

if (createDefaultGrpcExecutor) {
    this.grpcServiceExecutor = new ThreadPoolExecutor(1, 10, 30, TimeUnit.SECONDS, new LinkedBlockingDeque<Runnable>(),
```
TL;DR: @littleaj, though the difference might not be significant, for something like a channel where performance is really important it could be a good tip.
ArrayBlockingQueue is generally more performant and also better in terms of fairness compared to LinkedBlockingQueue. You can take a look at these two Stack Overflow posts, which go into good depth about the details:
https://stackoverflow.com/questions/17061882/java-arrayblockingqueue-vs-linkedblockingqueue
https://stackoverflow.com/questions/35967792/when-to-prefer-linkedblockingqueue-over-arrayblockingqueue
I would suggest that we replace the LinkedBlockingQueue here with ArrayBlockingQueue.
PS: Please do consider at least reading them, and draw your conclusion based on your own deeper findings. In any case I just wanted to share knowledge, and I think it's applicable here.
Based on how the queue is used, linked seems like the better choice.
See the queuing section: https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ThreadPoolExecutor.html
Thanks @littleaj for pointing that out. Based on the description there, two points are prominent:
LinkedBlockingQueue: it admits the possibility of unbounded work-queue growth when commands continue to arrive on average faster than they can be processed.
ArrayBlockingQueue: using large queues and small pools minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. (This is the advantage of ArrayBlockingQueue, I think.)
Now we can keep max threads relatively low and use a finite queue size. We don't know how fast items would be emitted; it's entirely dependent on the user application. Sure, it will increase CPU use, but it seems to me we might be able to strike a gentle balance between CPU and throughput.
PS: If I misunderstood something, pardon me. Any changes can happen afterwards; this is not mandatory for merging.
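For reference, the bounded variant being discussed could look roughly like the sketch below. The pool and queue sizes here are hypothetical (the PR's executor uses `new ThreadPoolExecutor(1, 10, 30, TimeUnit.SECONDS, new LinkedBlockingDeque<Runnable>(), ...)`); the point is the bounded queue plus an explicit rejection policy, since `ThreadPoolExecutor` throws `RejectedExecutionException` by default when a bounded queue fills up.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedExecutorSketch {

    // Sketch only: sizes are illustrative, not taken from the PR.
    static ThreadPoolExecutor createBoundedExecutor() {
        return new ThreadPoolExecutor(
                1,                                      // core pool size
                10,                                     // max pool size
                30, TimeUnit.SECONDS,                   // idle keep-alive
                new ArrayBlockingQueue<Runnable>(1000), // bounded work queue
                // When the queue is full, run the task on the submitting thread,
                // which throttles producers instead of growing memory unboundedly.
                new ThreadPoolExecutor.CallerRunsPolicy());
    }
}
```

One semantic difference worth noting: with a bounded queue, `ThreadPoolExecutor` only grows the pool beyond the core size once the queue is full, whereas with an unbounded queue the pool never exceeds the core size. That interacts with the core/max settings chosen here.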
Let's create a discussion issue for this.
```java
@Override
public void run() {
    try {
        StreamObserver<TelemetryBatch> requestObserver = asyncService.sendTelemetryBatch(responseObserver);
```
@littleaj please note this is not a blocking comment. I was looking deeper into gRPC and how the channel could react in high-scale applications. One problem that can become pronounced is backpressure. Generally, when an observable is a kind of "hot observable" and keeps emitting events at its own pace, there can be situations where your observer is unable to consume them at the pace at which the observable is emitting them. There are ways to handle this, batching being one.
I didn't find a good built-in implementation for the native gRPC we are using here; however, Salesforce's RxJava-based gRPC wrapper provides good built-in ways to handle backpressure. Maybe you could take a look later!
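The batching idea mentioned above can be sketched without any gRPC API at all: producers offer items into a bounded queue (dropping on sustained overload), and a drain loop forwards them in batches to the stream. This is a generic, hypothetical pattern, not code from the PR.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Generic batching sketch for backpressure: the bounded queue decouples a fast
// producer from a slower consumer, and batches amortize per-send overhead.
public class BatchingSketch {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);

    // Non-blocking submit; returns false (item dropped) when the queue is full.
    public boolean offer(String item) {
        return queue.offer(item);
    }

    // Drain up to maxBatch items, waiting briefly for the first one to arrive.
    public List<String> nextBatch(int maxBatch, long timeoutMillis) throws InterruptedException {
        List<String> batch = new ArrayList<>(maxBatch);
        String first = queue.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        if (first != null) {
            batch.add(first);
            queue.drainTo(batch, maxBatch - 1); // grab whatever else is ready
        }
        return batch;
    }
}
```

A sender thread would loop on `nextBatch(...)` and forward each non-empty batch over the stream; dropping on `offer` failure is a deliberate trade-off (bounded memory over completeness) that a telemetry channel can usually afford.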
```java
public synchronized TimeUnit getPerThreadTimeUnit() {
    return perThreadTimeUnit;
}

public synchronized void setPerThreadTimeout(long timeout, TimeUnit unit) {
```
Same here: I think we don't need synchronized on these methods since they aren't configurable at the moment. Later, when we think there is a need to make these configurable, we can bring in synchronized. I don't think it makes sense to add overhead for the future :)
dhaval24 left a comment
I am fine approving this. There are a few comments I made that are not highly important, but I definitely encourage you, @littleaj, to take a look at them some time.
…soft/ApplicationInsights-Java into localServer/channel_phase1
This was branched from the work in #715. We can either review that first and then this piece, or we can abandon #715 and just review this. The result will be the same.
This creates the LocalForwarderChannel to be used in conjunction with the LocalForwarder. It uses protobuf over gRPC for communication.