Conversation

@littleaj (Contributor) commented Aug 29, 2018

This was branched from the work in #715. We can either review that first and then this piece, or we can abandon #715 and just review this. The result will be the same.

This creates the LocalForwarderChannel to be used in conjunction with the LocalForwarder. It uses protobuf over gRPC for communication.
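
For context, here is a minimal sketch of what the protobuf-over-gRPC exchange with the LocalForwarder could look like from the Java side. This is not the PR's exact code: the generated names (AITelemetryServiceGrpc, TelemetryBatch, AiResponse) and the endpoint are assumptions based on the LocalForwarder proto contracts.

// Hedged sketch, not the PR's implementation. Generated classes
// (AITelemetryServiceGrpc, TelemetryBatch, AiResponse) are assumed from the
// LocalForwarder proto contracts; the host and port are placeholders.
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;

public class LocalForwarderSketch {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 55111) // assumed LocalForwarder gRPC endpoint
                .usePlaintext()                 // forwarder is expected to run on the same host
                .build();

        // Async stub: telemetry is streamed to the forwarder as TelemetryBatch messages.
        AITelemetryServiceGrpc.AITelemetryServiceStub stub = AITelemetryServiceGrpc.newStub(channel);

        StreamObserver<TelemetryBatch> requestObserver =
                stub.sendTelemetryBatch(new StreamObserver<AiResponse>() {
                    @Override public void onNext(AiResponse response) { /* ack from the forwarder */ }
                    @Override public void onError(Throwable t) { /* log; reconnect or retry */ }
                    @Override public void onCompleted() { /* stream closed by the server */ }
                });

        // Each batch is a protobuf message built from the SDK's telemetry items.
        requestObserver.onNext(TelemetryBatch.newBuilder().build());
        requestObserver.onCompleted();
        channel.shutdown();
    }
}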

  • Changes in public surface reviewed
  • CHANGELOG.md updated
  • Update TPN

littleaj added 22 commits August 3, 2018 17:08
updated build so IntelliJ can see generated code.
removed method overload from interface (bad practice...)
implemented simple LocalForwarderTelemetryTransmitter and Factory (doesn't have the retry logic like the InProcess Factory)
updated build file to shadow services provided with grpc libraries.
Also surfaces BaseData version number.
Deprecated *Telemetry classes whose *Data classes were also deprecated.
Some additional fields were surfaced which were previously hidden.
Fixed a bug with QuickPulse grabbing CPU data before the first datum is initialized; i.e., skip if null.
Changed logic so a Nop channel is used if a custom channel is misconfigured and cannot be initialized.
…iguration. This should still work if the channel is set after configuration.

Use singleton for nop channel and don't actually set channel in TelemetryConfiguration, just return NopTelemetryChannel instance.
Updated test to check for nop channel instead of null.
exposed some methods for testing
…ptiondetails to be visible for testing

corrected one precondition which was causing issues.
wrote test for transmitter.
exposed some methods for inserting mocks.
removed final from some classes so they could be mocked.
@dhaval24 (Contributor) commented Sep 8, 2018

@littleaj could you please close the comments that have already been resolved? Also, are we going to have the experimental retries? I believe we decided we will go with them :) Once you close them I can take another pass tomorrow and sign off. Also @littleaj, when you merge this one I think we should not use a squash commit. It would be good to see the history of changes to this channel in master too.

telemetry.getContext().getProperties().put("DeveloperMode", "true");
}

if (telemetrySampler != null) {
Contributor:

This is no longer needed, btw. We don't have the old sampling mechanism anymore; sampling is now done via a telemetry processor. Since this refactor has come in, I would say it's good to just remove this.

Preconditions.checkNotNull(channelBuilder, "channelBuilder");

if (createDefaultGrpcExecutor) {
    this.grpcServiceExecutor = new ThreadPoolExecutor(1, 10, 30, TimeUnit.SECONDS, new LinkedBlockingDeque<Runnable>(),
Contributor:

TL;DR: @littleaj, the difference might not be that significant, but for something like a channel, where performance really matters, this could be a worthwhile change.

ArrayBlockingQueue is generally much more performant and is also better in terms of fairness compared to LinkedBlockingQueue. Take a look at these two Stack Overflow threads; they go into the details in good depth.

https://stackoverflow.com/questions/17061882/java-arrayblockingqueue-vs-linkedblockingqueue
https://stackoverflow.com/questions/35967792/when-to-prefer-linkedblockingqueue-over-arrayblockingqueue

I would suggest we replace the LinkedBlockingQueue here with an ArrayBlockingQueue.
PS: Please do at least consider reading these and draw your conclusion from your own deeper findings. In any case, I just wanted to share the knowledge, and I think it's applicable here.
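
For concreteness, a hedged sketch of what that swap could look like; the pool sizes, queue capacity, and rejection policy below are illustrative assumptions, not values from the PR.

// Illustrative only: a bounded, array-backed variant of the executor shown in the
// diff above. Capacity and rejection policy are assumptions, not the PR's values.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class BoundedGrpcExecutorSketch {
    final ThreadPoolExecutor grpcServiceExecutor = new ThreadPoolExecutor(
            1, 10, 30, TimeUnit.SECONDS,
            new ArrayBlockingQueue<Runnable>(1024),      // bounded work queue with a fixed capacity
            new ThreadPoolExecutor.CallerRunsPolicy());  // when full, the submitting thread runs the task itself
}

With a bounded queue the saturation behaviour becomes explicit: once the queue is full, the rejection handler decides what happens, instead of the queue growing without limit.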

Contributor (Author):

Based on how the queue is used, linked seems like the better choice.

Contributor:

Thanks @littleaj for pointing that out. Based on the description there, two points stand out:

LinkedBlockingQueue: it admits the possibility of unbounded work-queue growth when commands continue to arrive, on average, faster than they can be processed.

ArrayBlockingQueue: using large queues and small pools minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. (This is the advantage of ArrayBlockingQueue, I think.)

Now we could keep max threads relatively low and use a finite queue size. We don't know how fast items will be emitted; it depends entirely on the user application. Sure, it will increase CPU use, but it seems to me we might be able to strike a gentle balance between CPU and throughput.

PS: Pardon me if I misunderstood something. Any changes can happen afterwards; this is not mandatory for merging.

Contributor (Author):

Let's create a discussion issue for this.

@Override
public void run() {
    try {
        StreamObserver<TelemetryBatch> requestObserver = asyncService.sendTelemetryBatch(responseObserver);
Contributor:

@littleaj, please note this is not a blocking comment. I was looking deeper into gRPC and how the channel could behave in high-scale applications. One problem that can become pronounced is backpressure. Generally, when an observable is a kind of "hot observable" and keeps emitting events at its own pace, there can be situations where your observer cannot consume them at the pace at which the observable emits them. There are ways to handle this, batching being one.

I didn't find a good built-in implementation for the native gRPC we are using here; however, Salesforce's RxJava-based gRPC wrapper provides good built-in ways to handle backpressure. Maybe you could take a look later!
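
For what it's worth, native gRPC does expose manual flow control on the request stream via ClientResponseObserver / ClientCallStreamObserver. Below is a hedged sketch of that pattern, not the PR's code; TelemetryBatch and AiResponse stand in for the generated protobuf types, and the buffer capacity is an arbitrary assumption.

// Hedged sketch of client-side flow control with native gRPC; not the PR's
// implementation. TelemetryBatch and AiResponse are assumed generated types.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

import io.grpc.stub.ClientCallStreamObserver;
import io.grpc.stub.ClientResponseObserver;

class FlowControlledBatchSender implements ClientResponseObserver<TelemetryBatch, AiResponse> {
    private final BlockingQueue<TelemetryBatch> pending = new ArrayBlockingQueue<>(500); // assumed capacity
    private ClientCallStreamObserver<TelemetryBatch> requestStream;

    @Override
    public void beforeStart(ClientCallStreamObserver<TelemetryBatch> requestStream) {
        this.requestStream = requestStream;
        // Runs each time the transport transitions to "ready"; drain our own bounded
        // buffer instead of letting gRPC accumulate an unbounded backlog.
        requestStream.setOnReadyHandler(this::drain);
    }

    boolean enqueue(TelemetryBatch batch) {
        boolean accepted = pending.offer(batch); // returns false when the buffer is full (drop or count)
        if (accepted && requestStream != null && requestStream.isReady()) {
            drain();
        }
        return accepted;
    }

    private synchronized void drain() {
        TelemetryBatch next;
        while (requestStream.isReady() && (next = pending.poll()) != null) {
            requestStream.onNext(next);
        }
    }

    @Override public void onNext(AiResponse response) { /* ack from the forwarder */ }
    @Override public void onError(Throwable t) { /* log; schedule reconnect */ }
    @Override public void onCompleted() { /* stream closed by the server */ }
}

The instance would be passed as the response observer when opening the stream (e.g. to sendTelemetryBatch), so beforeStart receives the request stream before any messages are sent.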

public synchronized TimeUnit getPerThreadTimeUnit() {
    return perThreadTimeUnit;
}
public synchronized void setPerThreadTimeout(long timeout, TimeUnit unit) {
Contributor:

Same here: I think we don't need synchronized on these methods, since they aren't configurable at the moment. Later, when we think there is a need to make them configurable, we can bring in synchronized. I don't think it makes sense to add overhead for the future :)

@dhaval24 (Contributor) left a comment:

I am fine approving this. There are a few comments I made that are not highly important, but I definitely encourage you, @littleaj, to take a look at them sometime.

littleaj merged commit d7b87ce into master Sep 12, 2018
littleaj deleted the localServer/channel_phase1 branch September 12, 2018 21:37
littleaj restored the localServer/channel_phase1 branch December 6, 2019 21:27
littleaj added a commit that referenced this pull request Dec 7, 2019
littleaj deleted the localServer/channel_phase1 branch December 10, 2019 18:23