
cache: respond to watches in parallel v2 #85


Merged · 2 commits merged into envoyproxy:master on Dec 8, 2018

Conversation

@sschepens (Contributor) commented Nov 30, 2018

Supersedes #75

Adds the concept of an ExecutorGroup, mostly based on Netty's EventExecutorGroup, for distributing Executors to streams in round-robin fashion.
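
Roughly, the idea looks like the following sketch (the class name, the single-thread-per-child choice, and the chooser logic are illustrative assumptions, not necessarily the exact code in this PR):

```java
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

/** Hands out an Executor for each new consumer (illustrative contract). */
interface ExecutorGroup {
  Executor next();
}

/** Fixed set of single-thread executors chosen in round-robin order, similar to Netty's EventExecutorGroup. */
class RoundRobinExecutorGroup implements ExecutorGroup {
  private final ExecutorService[] children;
  private final AtomicLong index = new AtomicLong();

  RoundRobinExecutorGroup(int threadCount) {
    children = new ExecutorService[threadCount];
    for (int i = 0; i < threadCount; i++) {
      children[i] = Executors.newSingleThreadExecutor();
    }
  }

  @Override
  public Executor next() {
    // Cycle through the children; floorMod keeps the index non-negative even if the counter overflows.
    return children[(int) Math.floorMod(index.getAndIncrement(), (long) children.length)];
  }
}
```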

@sschepens force-pushed the para2 branch 6 times, most recently from 9e5a708 to e913472 on November 30, 2018, 15:02
@sschepens (Contributor, Author):

Tests are failing on CircleCI but pass on my machine; investigating.

@codecov-io commented Nov 30, 2018

Codecov Report

Merging #85 into master will increase coverage by 0.08%.
The diff coverage is 100%.


@@             Coverage Diff              @@
##             master      #85      +/-   ##
============================================
+ Coverage     94.39%   94.47%   +0.08%     
- Complexity      131      134       +3     
============================================
  Files            13       14       +1     
  Lines           517      525       +8     
  Branches         46       46              
============================================
+ Hits            488      496       +8     
  Misses           21       21              
  Partials          8        8
Impacted Files                                           Coverage Δ              Complexity Δ
...nvoyproxy/controlplane/server/DiscoveryServer.java    96.49% <100%> (+0.19%)   14 <1> (+1) ⬆️
...roxy/controlplane/server/DefaultExecutorGroup.java    100% <100%> (ø)          2 <2> (?)

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c56c848...0743dee.

@@ -52,7 +53,18 @@
* @param groups maps an envoy host to a node group
*/
public SimpleCache(NodeGroup<T> groups) {
this (groups, new DefaultExecutorGroup());
}

Contributor:

nit: extra whitespace

*
*/
public Watch(boolean ads, DiscoveryRequest request, Consumer<Response> responseConsumer) {
this(ads, request, responseConsumer, MoreExecutors.directExecutor());
Contributor:

please document the default behavior

@@ -92,12 +94,14 @@ public void invalidNamesListShouldReturnWatcherWithResponseInXdsMode() {
Collections.emptySet(),
responseTracker);

Thread.yield();
Contributor:

is this necessary?

* children them in a round robin fashion.
*
*/
public class DefaultExecutorGroup implements ExecutorGroup {
Contributor:

Could you improve the test coverage of this class?

try {
children[i] = Executors.newSingleThreadExecutor();
success = true;
} catch (Exception e) {
Contributor:

When do you expect this to throw? I'd prefer to avoid unnecessary error handling if possible

import java.util.stream.Collectors;

/**
* Default implementation of {@link ExecutorGroup} which defaults to
Contributor:

The only thing that ties this implementation to the number of processors is the default ctor, so I'd rephrase this to describe it simply as a fixed-size round-robin implementation and move the bit about the number of processors to the default ctor.

@@ -102,7 +114,7 @@ public Watch createWatch(
Snapshot snapshot = snapshots.get(group);
String version = snapshot == null ? "" : snapshot.version(request.getTypeUrl());

-    Watch watch = new Watch(ads, request, responseConsumer);
+    Watch watch = new Watch(ads, request, responseConsumer, executorGroup.next());
Contributor:

By round-robining thread assignments to individual watches like this, we're not going to get thread affinity per stream, which IMO is a minimum requirement for this to make sense for ADS. We probably want to select the executor per stream, not per watch.

If possible, I would love a test that ensures that the ordering of updates sent with ADS is correct, but that might be out of the scope of this PR.
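
A sketch of the per-stream alternative, reusing the Watch constructor shape from the diff above (an illustrative fragment, not the PR's final code):

```java
// Choose the executor once, when the stream is opened, rather than per watch:
Executor streamExecutor = executorGroup.next();

// Every watch created for this stream then shares the same executor, so all
// responses on the stream keep thread affinity and ordering:
Watch watch = new Watch(ads, request, responseConsumer, streamExecutor);
```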

Contributor (Author):

I was under the impression that we only created one watch per stream, but that's only true when not using ADS. We're probably going to need to move the round-robin to DiscoveryServer.

/**
* Constructs a simple cache.
*
* @param groups maps an envoy host to a node group
Contributor:

document parameter please

* life-cycle and allows shutting them down in a global fashion.
*
*/
public interface ExecutorGroup extends ExecutorService {
Contributor:

You could probably simplify this a lot by not extending ExecutorService. You can provide a terminate/shutdown/etc on the DefaultExecutorGroup directly if you feel like that's necessary
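
A sketch of what that simplification might look like: ExecutorGroup only exposes next(), and lifecycle methods live on the concrete class (the class name, pool size, and shutdown method here are illustrative assumptions):

```java
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// The minimal interface: no ExecutorService inheritance, just next().
interface ExecutorGroup {
  Executor next();
}

/** Illustrative implementation; lifecycle methods live here, not on the interface. */
class FixedExecutorGroup implements ExecutorGroup {
  private final ExecutorService delegate = Executors.newFixedThreadPool(4);

  @Override
  public Executor next() {
    // Always hand back the same fixed thread pool, as suggested above.
    return delegate;
  }

  /** Shutdown is provided on the concrete class rather than on ExecutorGroup. */
  public void shutdown() {
    delegate.shutdown();
  }
}
```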

*
* @param nThreads amount of children threads
*/
public DefaultExecutorGroup(int nThreads) {
Contributor:

something like threadCount would probably be more consistent with parameter naming used elsewhere in the project

@sschepens (Contributor, Author):

@snowp DefaultExecutorGroup is mostly copy-pasted from Netty's DefaultEventExecutorGroup. Wouldn't it be better to add a dependency on netty-common and use that?
We would avoid having to test and maintain this class.

@snowp (Contributor) commented Dec 3, 2018

I would actually go in the other direction: provide the minimum interface necessary here and a very simple implementation, maybe something that just returns the same fixed thread executor each time. That way you can use a netty implementation if you want without bringing all that complexity into the control plane API.

Using a netty implementation would be as simple as doing

  class NettyExecutorGroup implements ExecutorGroup {
    private final EventExecutorGroup nettyGroup;

    NettyExecutorGroup(EventExecutorGroup nettyGroup) {
      this.nettyGroup = nettyGroup;
    }

    @Override public Executor next() {
      return nettyGroup.next();
    }
  }

@sschepens force-pushed the para2 branch 2 times, most recently from 7482be5 to 622e463 on December 3, 2018, 19:08
@sschepens (Contributor, Author):
@snowp thanks for the comments, totally agree.

I committed a different, simpler approach: ExecutorGroup and DefaultExecutorGroup are moved to the server module so that Executors are round-robined per stream.
Also, DefaultExecutorGroup now defaults to the old behavior by just returning directExecutor.

This approach is much simpler and touches fewer things; please review.
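
For reference, the new default behavior described here boils down to something like the following sketch, consistent with the javadoc quoted later in the review (DefaultExecutorGroup implements the project's ExecutorGroup interface, which exposes a single next() method):

```java
import com.google.common.util.concurrent.MoreExecutors;
import java.util.concurrent.Executor;

/**
 * Default implementation of ExecutorGroup: every call returns the direct
 * executor, so responses run on the calling thread, matching the old behavior.
 */
public class DefaultExecutorGroup implements ExecutorGroup {
  @Override
  public Executor next() {
    return MoreExecutors.directExecutor();
  }
}
```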

@sschepens force-pushed the para2 branch 2 times, most recently from 4f3db1e to 2f9543a on December 3, 2018, 19:17
Signed-off-by: Sebastian Schepens <sebastian.schepens@mercadolibre.com>
@snowp (Contributor) left a review:

Looks pretty good! Can you update the PR description to reflect the new approach?

/**
* Creates a new instance.
*/
public DefaultExecutorGroup() {
Contributor:

nit: no need for this default ctor


/**
* Default implementation of {@link ExecutorGroup} which
* always returns {@link com.google.common.util.concurrent.MoreExecutors#directExecutor()}.
Contributor:

supernit: shorten this to reference MoreExecutors#directExecutor instead of the fqn

@joeyb (Contributor) left a review:

Looks really good, mostly just some annoying nitpick comments.

We don't have to solve this now, but it would be good to document this a bit more. I get the mechanics of it from a high-level, but I don't think I have a full picture of the different types of ExecutorGroup impls that people would want in the real world. Are there use-cases where you would want certain request streams to have an affinity for a particular Executor? Or is it mostly just direct executor vs. thread pool, with the pool size and pool entry chooser (e.g. netty's PowerOfTwoEventExecutorChooser) being the most interesting parts of the latter?

public DefaultExecutorGroup() {
}

@Override
Contributor:

  /**
   * Returns the next {@link Executor} to use, which in this case is always {@link MoreExecutors#directExecutor()}.
   */

* @param configWatcher source of configuration updates
* @param executorGroup executor group to use for responding stream requests
*/
public DiscoveryServer(List<DiscoveryServerCallbacks> callbacks, ConfigWatcher configWatcher,
Contributor:

nit: when wrapping parameters, prefer to have each on their own line, i.e.:

public DiscoveryServer(
    List<DiscoveryServerCallbacks> callbacks,
    ConfigWatcher configWatcher,
    ExecutorGroup executorGroup) {
  // ...
}

Subjectively, it's easier to read quickly because it's more consistent. I still need to figure out whether or not it's possible to encode that in the checkstyle rules.

*/
public interface ExecutorGroup {
/**
* Return the next {@link Executor} to use.
Contributor:

nit: the Google Java Style Guide prefers "Returns" over "Return" here, and the extra javadoc line below can be removed.

* Return the next {@link Executor} to use.
*
*/
Executor next();
Contributor:

Does it make sense to also pass some contextual information about the request stream here? Would there be use-cases where e.g. a particular type would need to have affinity to a specific Executor?

cc: @snowp

Contributor (Author):

I thought about this as well, but there's not much contextual information we can send at the moment

Contributor:

I think having each stream always use the same executor is good enough for now.


private AtomicLong streamNonce;

public DiscoveryRequestStreamObserver(String defaultTypeUrl, StreamObserver<DiscoveryResponse> responseObserver,
-      long streamId, boolean ads) {
+      long streamId, boolean ads, Executor executor) {
Contributor:

nit: ditto on the comment about parameter wrapping.

@sschepens (Contributor, Author):

> We don't have to solve this now, but it would be good to document this a bit more. I get the mechanics of it from a high-level, but I don't think I have a full picture of the different types of ExecutorGroup impls that people would want in the real world. Are there use-cases where you would want certain request streams to have an affinity for a particular Executor? Or is it mostly just direct executor vs. thread pool, with the pool size and pool entry chooser (e.g. netty's PowerOfTwoEventExecutorChooser) being the most interesting parts of the latter?

@joeyb I think it would mostly be direct executor vs. thread pool. What I can see is people trying to reuse Netty's worker groups; I would personally like that.
I see little benefit from stream affinity, and even so, right now we have little to no context for making those decisions.
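
Assuming a netty-common dependency and the NettyExecutorGroup adapter sketched earlier in this thread, reusing a Netty worker group might be wired roughly like this (an illustrative fragment; the DiscoveryServer constructor shape follows the diff shown earlier, and callbacks and configWatcher are assumed to exist):

```java
import io.netty.util.concurrent.DefaultEventExecutorGroup;

// A Netty worker group with four threads, wrapped in the ExecutorGroup adapter:
ExecutorGroup executorGroup =
    new NettyExecutorGroup(new DefaultEventExecutorGroup(4));

// Handed to the server so each stream gets an executor from the Netty group:
DiscoveryServer discoveryServer =
    new DiscoveryServer(callbacks, configWatcher, executorGroup);
```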

Signed-off-by: Sebastian Schepens <sebastian.schepens@mercadolibre.com>
@joeyb (Contributor) commented Dec 5, 2018

@sschepens - Cool, makes sense. If the need for more context comes up, then we can cross that bridge when we get there.

@snowp (Contributor) left a review:

Thanks!

@snowp merged commit 771dd57 into envoyproxy:master on Dec 8, 2018
@joeyb (Contributor) commented Dec 9, 2018

@sschepens, @snowp - FYI, I cut 0.1.12, which includes this change.

@sschepens (Contributor, Author):

@joeyb thanks!

@sschepens deleted the para2 branch on December 10, 2018, 18:30