
xds: Consistency while handling CDS updates and EDS #13009

Open

sschepens opened this issue Sep 8, 2020 · 61 comments

@sschepens
Contributor

When a cluster is created or updated in Envoy, it enters a warming phase and needs a corresponding ClusterLoadAssignment response to fully initialize.
During Envoy's startup phase, Envoy sends requests for those resources to the management server, so the management server knows it has to respond to them.
But when a Cluster is updated via CDS, no EDS re-request is sent to the management server, so the management server doesn't really know it should send a ClusterLoadAssignment for that Cluster, even if the resource hasn't actually changed.
This behavior introduces subtle bugs in management servers. In our case, our resource versioning scheme happened to include the cluster version; yesterday we introduced a change to remove that, and it unexpectedly broke our cluster updates, leaving some clusters without traffic.

This should probably be handled by Envoy. Currently, go-control-plane and java-control-plane don't really handle this, since it's quite hard to induce this behavior of always pushing an EDS update for a Cluster even when the resource hasn't actually changed; and if not using ADS, Envoy is possibly connected to multiple management servers, which increases the difficulty.

@htuch suggested that Envoy should probably unsubscribe from EDS for the updated cluster and immediately subscribe again, since this is what's actually happening inside Envoy: a new Cluster is being created that wants to subscribe to the resources, while the old one wants to unsubscribe. Some more thought should be given to this idea and to the consequences of the old cluster being unsubscribed from resources.

Some discussion already happened on Slack, opening this issue to continue discussion.

This issue also applies to LDS updates with RDS and probably SRDS with RDS.
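
For concreteness, the coupling described above forces a management server into something like the following (a minimal Go sketch with invented types, not go-control-plane's actual API): the EDS version has to be derived from the cluster version so that every CDS change triggers an EDS resend, even when the endpoints are byte-for-byte identical.

```go
package sketch

import "fmt"

// clusterState is a hypothetical per-cluster record in a management server.
type clusterState struct {
	clusterVersion   string // version of the CDS resource
	endpointsVersion string // version of the EDS resource
}

// onClusterChanged is called when the Cluster definition changes. Under
// Envoy's current warming semantics, the server must also push the
// ClusterLoadAssignment again, even if no endpoint changed, or the new
// cluster instance will sit in warming indefinitely.
func onClusterChanged(s *clusterState, newClusterVersion string) {
	s.clusterVersion = newClusterVersion
	// Tie the EDS version to the cluster version to force a resend --
	// exactly the coupling this issue is about.
	s.endpointsVersion = fmt.Sprintf("eds-for-cds-%s", newClusterVersion)
}
```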

@sschepens added the bug and triage (Issue requires triage) labels on Sep 8, 2020
@htuch
Member

htuch commented Sep 8, 2020

CC @envoyproxy/api-shepherds

@htuch added the help wanted (Needs help!) label and removed the triage (Issue requires triage) label on Sep 8, 2020
@markdroth
Contributor

But when a Cluster is updated via CDS, no EDS re-request is sent to the management server, so the management server doesn't really know it should send a ClusterLoadAssignment for that Cluster, even if the resource hasn't actually changed.

Why would the management server need to send an update for the EDS resource that has not changed, just because the CDS resource that pointed to it has changed? This seems like a pretty clear Envoy bug: the CDS and EDS resources should be tracked independently from the perspective of the xDS client code, so that when a new Cluster object is created, it can just pick up the existing data for the EDS resource it wants.

In gRPC, the pattern we use here is to separate out the transport protocol logic from the data model logic. The transport protocol logic (which includes not just the wire communication but also the caching logic, since those two things are ultimately closely tied together) is implemented in a single XdsClient object. Its API allows callers to start watches for a given resource of a given type. When the first watch for a given resource is started, the XdsClient will send a request to the xDS server subscribing to that resource, and all updates from the xDS server will be sent to that watcher. It's fine for multiple callers to be watching the same resource: if a watch is started for a resource that is already subscribed to (because some other watch was already started), then the XdsClient will immediately send the already-cached data to the new watcher. When all watchers for a given resource are cancelled, the XdsClient will send a request to the xDS server unsubscribing from that resource.
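
A minimal Go-flavored sketch of that watch pattern (gRPC's real implementation is C++ and is linked later in the thread; names here are illustrative, and locking is omitted): watches are reference-counted per resource name, the first watch subscribes on the wire, later watchers get the cached value immediately, and the last cancellation unsubscribes.

```go
type Watcher interface {
	OnUpdate(resource []byte)
}

type entry struct {
	watchers map[Watcher]struct{}
	cached   []byte // last value seen from the server; nil until first update
}

type XdsClient struct {
	resources   map[string]*entry
	subscribe   func(name string) // writes a subscription onto the ADS stream
	unsubscribe func(name string)
}

// WatchResource registers a watcher and returns a cancel function.
// Not goroutine-safe; this is an illustration, not an implementation.
func (c *XdsClient) WatchResource(name string, w Watcher) (cancel func()) {
	e, ok := c.resources[name]
	if !ok {
		e = &entry{watchers: map[Watcher]struct{}{}}
		c.resources[name] = e
		c.subscribe(name) // first watcher: subscribe on the wire
	}
	e.watchers[w] = struct{}{}
	if e.cached != nil {
		w.OnUpdate(e.cached) // replay the cache to the new watcher immediately
	}
	return func() {
		delete(e.watchers, w)
		if len(e.watchers) == 0 { // last watcher gone: unsubscribe
			delete(c.resources, name)
			c.unsubscribe(name)
		}
	}
}
```

With this shape, a replacement cluster instance that starts a watch for the same EDS name is served straight from the cache, and nothing new goes on the wire.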

One significant difference between Envoy and gRPC is that when a CDS resource is updated, gRPC can update the individual fields in the cluster instance rather than having to create a whole new cluster instance. But even if we did need to create a whole new cluster instance, it still would not pose a problem, because we'd keep the old instance around while the new instance warmed up. This would mean that when the new instance would start a watch for the EDS resource name, it would get the value that was already cached in the XdsClient, and the management server would not know or care that the cluster instance had been replaced.

@htuch suggested that Envoy should probably unsubscribe from EDS for the updated cluster and immediately subscribe again, since this is what's actually happening inside Envoy: a new Cluster is being created that wants to subscribe to the resources, while the old one wants to unsubscribe. Some more thought should be given to this idea and to the consequences of the old cluster being unsubscribed from resources.

This seems sub-optimal, both because it will cause unnecessary network traffic and because it will make warming the new cluster instance take longer. I think it should be possible to avoid all of that by separating out the xDS client code into its own layer, as I described above.

That having been said, my concern here is less about Envoy having sub-optimal behavior and more about making sure that the protocol semantics are clear. I think that the protocol itself should not require clients to re-subscribe in this case, because I'd rather not change gRPC, which already handles this case correctly, to do something less optimal. And if Envoy does do the less optimal thing, I am concerned that it may introduce situations where management servers are built with Envoy-specific assumptions that make it hard for them to support gRPC later.

@htuch
Member

htuch commented Sep 8, 2020

@markdroth I think this was a deliberate choice we made when designing CDS warming. The idea is that you might also be switching the endpoints when changing cluster config, e.g. upgrading from non-TLS to TLS cluster. You can see a pretty detailed discussion in #5168.

On clarifying protocol semantics, I think we have a PoR to move forward here with #11299. I'll be putting together a SoW draft for this during the week. This is probably the best way to get a formal and testable statement of intended semantics in these corner cases.

@markdroth
Contributor

Reading the discussion in #5168, the problem there actually seems like another symptom of the fact that Envoy's implementation does not cleanly separate the transport protocol layer (which by necessity includes caching) from the data model layer (in this case, the cluster instance). IMHO, these should be two different layers with a clear separation between them, so that implementation details in the data model layer (such as completely replacing a cluster instance whenever one field changes) do not affect the behavior of the transport protocol layer (such as caching).

To say that another way, I think that the existence of this issue is an indication that the existing solution to #5168 didn't really solve the problem. I think that if Envoy changed to cleanly separate these two layers, it would not only solve the immediate problem but also prevent future problems caused by this same underlying architectural problem.

In fact, it looks to me like you actually suggested something similar to this in #5168 (comment), and I don't see any follow-up discussion there saying why this should not work. I see that you said that it seems odd to do this in the gRPC mux impl, which I can understand, but I don't see why we can't wrap a fuller XdsClient abstraction around the gRPC mux impl.

@sschepens
Contributor Author

@htuch if one wanted to do something like changing a cluster to TLS without downtime, we now have transport_socket_matches; and also, couldn't something be done with AggregateClusters?

@markdroth
Contributor

BTW, if you're interested in what the XdsClient API looks like in gRPC, the relevant part is here:

https://github.com/grpc/grpc/blob/139895d2c8a0e52540dfba323970f905a2d29df6/src/core/ext/xds/xds_client.h#L96

In our implementation, there's a separate watch API for each type of resource, mainly because we can't depend on the protobuf library and didn't want to make upb part of this API. But in Envoy, you could easily have a single watch API used for all resource types, simply by specifying the resource type as a parameter when you start the watch and having the data reported to the watcher be an Any.
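
In Go terms, that single-API variant might look like the sketch below, with the type URL passed as a parameter and the payload delivered as an Any (a hypothetical interface, not an actual Envoy or gRPC API):

```go
import "google.golang.org/protobuf/types/known/anypb"

// ResourceWatcher receives updates for one named resource of one type.
type ResourceWatcher interface {
	OnResourceChanged(resource *anypb.Any)
	OnResourceDoesNotExist()
}

// GenericXdsClient exposes one watch entry point for every resource type;
// typeURL selects CDS, EDS, LDS, RDS, and so on.
type GenericXdsClient interface {
	WatchResource(typeURL, name string, w ResourceWatcher) (cancel func())
}
```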

@sschepens
Contributor Author

I'm with @markdroth here.

I think from a protocol standpoint it's pretty crappy for management servers to have to resend a configuration change when nothing really changed. Furthermore, in Incremental xDS, Envoy is subscribed to resource updates and should only get notified when those actually change.

In the case of updating clusters to TLS, if it's not doable with the current mechanics, maybe we need additional mechanisms for transparently doing this sort of update, but it would seem that transport_socket_matches and AggregateCluster are some of those mechanisms.

@htuch
Member

htuch commented Sep 8, 2020

As in the original CDS work, our constraints are not greenfield; to some extent we need to be backwards compatible with existing management servers. I don't think it would be particularly hard to add caching to Envoy's existing implementation; the question is really what the correct behavior on the wire is, given Envoy's existing behavior, existing xDS server implementations, and our desired goals. We've been running with "a CDS update implies warming, which implies an EDS response" for some time. Can we design something that works for folks who are assuming that and migrate to a better model? Potentially, but I don't think we can sunset Envoy's cluster warming behavior overnight.

@markdroth
Contributor

If Envoy stops requiring management servers to proactively send an EDS response after a CDS update, then it's not harmful for management servers that continue to do that (the update will effectively be a no-op on the client if nothing has changed), but it means that new management servers don't need to send that unnecessary update.
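
Continuing the earlier cache sketch (and assuming `import "bytes"`), the no-op is cheap on the client side: before fanning out to watchers, compare the incoming resource against the cached value and drop it if nothing changed.

```go
// deliver stores an incoming resource and fans it out to watchers,
// skipping the fan-out when the bytes are unchanged -- which is what makes
// a redundant EDS push from an older management server harmless.
func (c *XdsClient) deliver(name string, resource []byte) {
	e, ok := c.resources[name]
	if !ok {
		return // no longer subscribed; ignore
	}
	if bytes.Equal(e.cached, resource) {
		return // identical to the cached value: no-op
	}
	e.cached = resource
	for w := range e.watchers {
		w.OnUpdate(resource)
	}
}
```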

@htuch
Member

htuch commented Sep 8, 2020

One thing I'd like to point out, going all the way back to the top of this thread, is that what is going on in Envoy today is that a cluster is going away and a new one is arriving which happens to share the same name. When you delete a resource in CDS, you logically unsubscribe from the EDS resources. We could special-case what happens over an update, but keep in mind we'd be adding an extra corner case for implementations to deal with.

I'm not opposed to any direction we take here. IMHO, the most logical thing would be for subscriptions to actually reference the subscribing resource instance (e.g. via UUID), so that you could independently unsubscribe and resubscribe. But I don't think it's worth building that out just for this.

@htuch
Member

htuch commented Sep 8, 2020

If Envoy stops requiring management servers to proactively send an EDS response after a CDS update, then it's not harmful for management servers that continue to do that (the update will effectively be a no-op on the client if nothing has changed), but it means that new management servers don't need to send that unnecessary update.

It's not as simple as this, since some Envoy users will be expecting the warming behavior and want to defer the cluster warming until endpoint update. See the TLS upgrade example, which is not a hypothetical but a valid use of Envoy's existing mechanisms.

@htuch
Member

htuch commented Sep 8, 2020

Also worth keeping in mind: this is not some undocumented behavior that people happen to be relying on; we've been guaranteeing it for some time, see https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/cluster_manager.html?highlight=warming#cluster-warming

@markdroth
Contributor

markdroth commented Sep 8, 2020

One thing I'd like to point out, going all the way back to the top of this thread, is that what is going on in Envoy today is that a cluster is going away and a new one is arriving which happens to share the same name. When you delete a resource in CDS, you logically unsubscribe from the EDS resources. We could special-case what happens over an update, but keep in mind we'd be adding an extra corner case for implementations to deal with.

I think the right way to model this is to say that the implicit deletion of RDS and EDS resources happens at the transport protocol layer, not at the data model layer -- which makes sense if you think about it, because this deletion is a cache operation, and caching is dictated by the transport protocol, not the data model. In other words, inside of the XdsClient, when we receive a CDS update that removes a CDS resource, we remove the cached EDS resource that was referred to by the removed CDS resource, and we notify any watchers of that EDS resource that the resource no longer exists. It should not be tied to the deletion of the data model object (the cluster instance) at all.

What this means is that when the XdsClient sees an updated CDS resource that changes some field other than the EDS service name, it knows that the EDS resource is still referenced and is therefore not being deleted. The fact that Envoy will create a new cluster instance and destroy the old one at the data model layer should be irrelevant to that decision.
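
A sketch of that bookkeeping (hypothetical helper names; the extraction of references from Cluster protos is elided): on a state-of-the-world CDS update, diff the EDS names referenced before and after, and only treat an EDS resource as deleted when no cluster references it anymore.

```go
// onCdsUpdate applies a SotW CDS update inside the XdsClient. oldRefs and
// newRefs map cluster name -> referenced EDS service name.
func (c *XdsClient) onCdsUpdate(oldRefs, newRefs map[string]string) {
	stillReferenced := map[string]bool{}
	for _, eds := range newRefs {
		stillReferenced[eds] = true
	}
	for _, eds := range oldRefs {
		if !stillReferenced[eds] {
			// No cluster references this EDS resource any longer: drop the
			// cache entry and tell its watchers the resource is gone.
			c.dropAndNotifyDoesNotExist(eds) // hypothetical helper
		}
	}
	// A cluster whose other fields changed but whose EDS service name did
	// not keeps its entry in stillReferenced, so the cached EDS data
	// survives the replacement of the cluster instance.
}
```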

(Also, as we've discussed separately, I think that the implicit deletion of the RDS and EDS resources is a very ugly layering violation, specifically because it does require the transport protocol layer to know something about the relationship between the resources, which is really a data model concern. IMHO we should remove this behavior as part of moving to the incremental protocol variants, which will make the layering cleaner here.)

It's not as simple as this, since some Envoy users will be expecting the warming behavior and want to defer the cluster warming until endpoint update. See the TLS upgrade example, which is not a hypothetical but a valid use of Envoy's existing mechanisms.

Also worth keeping in mind: this is not some undocumented behavior that people happen to be relying on; we've been guaranteeing it for some time, see https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/cluster_manager.html?highlight=warming#cluster-warming

Interesting. I agree, that will definitely make it harder to get rid of this.

Hmmm... If management servers expect clients to not use the existing EDS data after a CDS update, is it going to cause problems that gRPC will use that existing data? What if a management server is intentionally sending a CDS update that requires drastically different EDS data? For example, if the CDS update switches from plaintext to TLS, and the EDS update points to a new set of endpoints that speak TLS?

This behavior seems like a clear layering violation and is extremely difficult to reason about. I really don't want to implement it in gRPC. I think this is going to cause problems if we don't clean this up.

@markdroth
Contributor

Note that if we can find a way to get rid of the existing behavior, then in cases where a management server does not want the new cluster instance to use the old EDS data, it can always use a new EDS service name in the updated CDS resource.

@htuch
Member

htuch commented Sep 8, 2020

I see this less as a layering violation than as an attempt to bolt consistency onto an eventually consistent model. What warming on EDS is currently doing is saying "we may know the old EDS, but we want another EDS that is temporally fresher than this older one" (if you ignore my cluster actor model above and think purely in transport terms). Which might not even be true in an eventually consistent environment (i.e. not ADS); your warming EDS might be some delayed EDS update with data older than the cluster.

We've discussed trying to explicitly add in a notion of version dependency in the past, this probably isn't the right time to add it, but if we did have that, then it would be a lot clearer what is going on.

I agree it's pretty broken and hard to deal with.

Maybe one path forward here is to add explicit fields to Cluster to describe what warming behavior we want to see (with the default retaining existing semantics).

@fxposter

fxposter commented Sep 8, 2020

@htuch @markdroth I am very far from what you are both discussing about separating the data model and transport model, but from what I see from an API perspective, it basically says that dependencies between resources are described in terms of names. This "warming" thing, from an API perspective, looks like an implementation detail of Envoy that simply can't be described in terms of what the API provides: the cluster has changed, its name is the same, the management server's state from the endpoints perspective has not changed, so by default no updates should happen here.
Also, going a bit further: if you get multiple updates to the same cluster in a short period of time, the "warming" would need to happen on the last update. But when you receive the EDS "warming response", is it possible for Envoy to know which exact update that EDS "warming response" relates to?

@htuch
Member

htuch commented Sep 8, 2020

@fxposter Envoy just cares that it sees an EDS update after the CDS update, not which cluster update it relates to. I don't see this as an Envoy implementation detail; this is clearly specified in the docs, which are part of what currently constitutes the API for folks building control planes.

@markdroth
Contributor

Maybe one path forward here is to add explicit fields to Cluster to describe what warming behavior we want to see (with the default retaining existing semantics).

That might very well be a useful migration path. We could do a phased migration where we initially leave the default alone but provide the field to request the new behavior, then change the default but allow it to be disabled via the field, and then finally remove the field completely.

I just want to make sure that we wind up in a place where this kind of layering violation is not supported. Otherwise, we will never have a clean separation between the transport protocol and the data model.

@htuch
Member

htuch commented Sep 8, 2020

As discussed with @markdroth offline, this is an area of xDS that is super confusing when we consider the mix of SotW, delta, ADS and non-ADS, different consistency models. From the above thread, it's also apparent that the current behavior is not well understood. I think we should put together a uRFC https://github.com/cncf/udpa/tree/master/proposals to clarify what the existing behavior is and what we want the future behavior to be, without breaking backwards compatibility for existing use cases where the current behavior makes sense.

@fxposter

fxposter commented Sep 8, 2020

@htuch I mean a slightly different thing. If this feature is intended to keep Envoy from using "partially constructed" clusters in this particular case, then if there are multiple updates to the same cluster, when you receive an update to the corresponding endpoints you still don't know which cluster state that endpoint update relates to. And yes, "warming" will succeed, but it may succeed with the wrong state.

i.e.: if the cluster states (for a single particular cluster) are CS1, CS2, CS3... and the corresponding "good" endpoint states are ES1, ES2, ES3..., then if you receive almost simultaneous updates "-> CS2" and "-> CS3", CDS will require warmup for the CS3 state. And if Envoy afterwards receives state "ES2" and later "ES3", I would guess the first "fully constructed state" will be (CS3 + ES2), which might be incorrect.

@htuch
Member

htuch commented Sep 8, 2020

@fxposter yes, unless you linearize on ADS you can get various kinds of races happening. This is what I was referring to above when I said the ADS vs. non-ADS mix is also pretty confusing.

@htuch
Member

htuch commented Sep 8, 2020

Here's my concrete proposal (which should be turned into a uRFC):

  1. We add a field to Cluster that says dont_warm_on_eds_for_updates. When this is set, Envoy inherits the existing cluster's hosts (I think @snowp mentioned in the original issue that this is plausible).
  2. We clarify for future xDS that resource subscriptions for resource Y from resource X remain active across updates to X.

None of this will change the existing behaviors of LDS, CDS, etc., since changing them would potentially break folks who are relying on the existing warming behaviors.
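
If such a field existed, the client-side handling of item 1 might be as simple as the sketch below (dont_warm_on_eds_for_updates is the proposed name, not an existing field; the Cluster type and its methods are invented for illustration):

```go
// applyClusterUpdate decides how a freshly built cluster instance
// initializes under the proposed flag.
func applyClusterUpdate(oldCluster, newCluster *Cluster) {
	if newCluster.DontWarmOnEdsForUpdates && oldCluster != nil {
		// Proposed behavior: inherit the existing hosts so the update does
		// not block waiting for a fresh EDS response.
		newCluster.Hosts = oldCluster.Hosts
		newCluster.MarkInitialized()
		return
	}
	// Existing behavior: stay in warming until an EDS response arrives.
	newCluster.StartWarming()
}
```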

@markdroth
Contributor

I think that's a fine proposal, but I would also like to see a migration path for fully eliminating the problematic behavior. If we don't define such a path, we'll never be rid of it.

@sschepens
Contributor Author

@htuch shouldn't this also consider Listeners? Since they also warm up.

@mattklein123
Member

The proposal in #13009 (comment) SGTM. We can discuss the exact field name, etc. in code review.

I agree overall that this is an incredibly confusing detail that has tripped a lot of people up and we should fix this.

I think that's a fine proposal, but I would also like to see a migration path for fully eliminating the problematic behavior. If we don't define such a path, we'll never be rid of it.

On the Envoy side we can talk about potentially flipping the default for the field, but we would have to give a lot of warning, runtime revert support, etc. as we have a large number of management servers in the wild that might break.

@ramaraochavali
Contributor

FWIW, #6021 implemented item 1 of the proposal above.

@markdroth
Contributor

FWIW, I agree that the semantics of ACK vs. request in the transport protocol could be made clearer, and I think that's something we should work on. However, as @htuch said, I think that's a separate problem from the caching functionality that is the subject of this issue. In this case, there should not be a subsequent request in the first place, because the client should not be re-requesting a resource that it is already subscribed to. I think we should add caching to solve that problem rather than trying to hack in some way for the client to re-request something that it should already have.

@hzxuzhonghu
Contributor

@stevenzzzz
Contributor

cc @stevenzzzz

@markdroth
Contributor

Work on the xDS transport protocol caching layer is being tracked in #24773.

hashi-derek added a commit to hashicorp/consul that referenced this issue Dec 8, 2023
This fixes the following race condition:
- Send update endpoints
- Send update cluster
- Recv ACK endpoints
- Recv ACK cluster

Prior to this fix, it would have resulted in the endpoints NOT existing in
Envoy. This occurred because the cluster update implicitly clears the endpoints
in Envoy, but we would never re-send the endpoint data to compensate for the
loss, because we would incorrectly ACK the invalid old endpoint hash. Since the
endpoint's hash did not actually change, they would not be resent.

The fix for this is to effectively clear out the invalid pending ACKs for child
resources whenever the parent changes. This ensures that we do not store the
child's hash as accepted when the race occurs.

An escape-hatch environment variable `XDS_PROTOCOL_LEGACY_CHILD_RESEND` was
added so that users can revert back to the old legacy behavior in the event
that this produces unknown side-effects. Visit the following thread for some
extra context on why certainty around these race conditions is difficult:
envoyproxy/envoy#13009

This bug report and fix was mostly implemented by @ksmiley with some minor
tweaks.

Co-authored-by: Keith Smiley <ksmiley@salesforce.com>
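
The fix described above amounts to bookkeeping along these lines (a Go sketch with invented names; see the actual hashicorp/consul change for the real implementation): when a parent resource is resent, the pending ACK recorded for its child is discarded, so a late ACK for the old child hash can no longer be mistaken for acceptance.

```go
const (
	clusterTypeURL  = "type.googleapis.com/envoy.config.cluster.v3.Cluster"
	endpointTypeURL = "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment"
)

// ackState tracks, per type and resource name, the hash most recently sent
// and not yet acknowledged.
type ackState struct {
	pending map[string]map[string]string // typeURL -> name -> hash
}

// sendCluster records that a cluster was (re)sent and clears the pending
// ACK for its child endpoints, which Envoy implicitly dropped on the
// cluster update. A stale endpoint ACK still in flight is then ignored,
// and the unchanged endpoints get resent instead of being lost.
func (s *ackState) sendCluster(name, hash string) {
	s.pending[clusterTypeURL][name] = hash
	delete(s.pending[endpointTypeURL], name)
}
```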
jmurret added a commit to hashicorp/consul that referenced this issue Dec 21, 2023
…config.cluster.v3.Cluster.http_protocol_options. into release/1.16.x (#20032)

* Add grpc keepalive configuration. (#19339)

Prior to the introduction of this configuration, grpc keepalive messages were
sent after 2 hours of inactivity on the stream. This posed issues in various
scenarios where the server-side xds connection balancing was unaware that envoy
instances were uncleanly killed / force-closed, since the connections would
only be cleaned up after ~5 minutes of TCP timeouts occurred. Setting this
config to a 30 second interval with a 20 second timeout ensures that at most,
it should take up to 50 seconds for a dead xds connection to be closed.

* bump raft-wal version to 0.4.1 (#19314)

* bump raft-wal version to 0.4.1

* changelog

* go mod tidy integration tests

* go mod tidy test-integ

* NET-5397 - wire up destination golden tests from sidecar-proxy controller for xds controller and xdsv2 (#19167)

* NET-5397 - wire up golden tests from sidecar-proxy controller for xds controller and xdsv2

* WIP

* WIP

* everything matching except leafCerts.  need to mock those

* single port destinations working except mixed destinations

* golden test input to xds controller tests for destinations

* proposed fix for failover group naming errors

* clean up test to use helper.

* clean up test to use helper.

* fix test file

* add docstring for test function.

* add docstring for test function.

* fix linting error

* fixing test after route fix merged into main

* gvk partial inference (#19058)

* gvk partial inference

* NET-6079 - wire up sidecarproxy golden file inputs into xds controller - sources (#19241)

* NET-5397 - wire up golden tests from sidecar-proxy controller for xds controller and xdsv2

* WIP

* WIP

* everything matching except leafCerts.  need to mock those

* single port destinations working except mixed destinations

* golden test input to xds controller tests for destinations

* proposed fix for failover group naming errors

* clean up test to use helper.

* clean up test to use helper.

* fix test file

* add docstring for test function.

* add docstring for test function.

* fix linting error

* fixing test after route fix merged into main

* first source test works

* WIP

* modify all source files

* source tests pass

* fixing tests after bug fix in main

* NET-6080 - xds controller golden file inputs into xds resources - destinations (#19244)

* NET-5397 - wire up golden tests from sidecar-proxy controller for xds controller and xdsv2

* WIP

* WIP

* everything matching except leafCerts.  need to mock those

* single port destinations working except mixed destinations

* golden test input to xds controller tests for destinations

* proposed fix for failover group naming errors

* clean up test to use helper.

* clean up test to use helper.

* fix test file

* add docstring for test function.

* add docstring for test function.

* fix linting error

* fixing test after route fix merged into main

* first source test works

* WIP

* modify all source files

* source tests pass

* fixing tests after bug fix in main

* got first destination working.

* adding destinations

* fix docstring for test

* fixing tests after bug fix in main

* NET-6081 - xds controller golden file inputs into xds resources - sources (#19250)

* NET-5397 - wire up golden tests from sidecar-proxy controller for xds controller and xdsv2

* WIP

* WIP

* everything matching except leafCerts.  need to mock those

* single port destinations working except mixed destinations

* golden test input to xds controller tests for destinations

* proposed fix for failover group naming errors

* clean up test to use helper.

* clean up test to use helper.

* fix test file

* add docstring for test function.

* add docstring for test function.

* fix linting error

* fixing test after route fix merged into main

* first source test works

* WIP

* modify all source files

* source tests pass

* fixing tests after bug fix in main

* got first destination working.

* adding destinations

* fix docstring for test

* fixing tests after bug fix in main

* adding source proxies

* fixing tests after bug fix in main

* got first destination working.

* adding destinations

* fix docstring for test

* fixing tests after bug fix in main

* got first destination working.

* adding destinations

* fix docstring for test

* fixing tests after bug fix in main

* Use strict DNS for mesh gateways with hostnames (#19268)

* Use strict DNS for mesh gateways with hostnames

* Add changelog

* [NET-6305] xds: Ensure v2 route match and protocol are populated for gRPC (#19343)

* xds: Ensure v2 route match is populated for gRPC

Similar to HTTP, ensure that route match config (which is required by
Envoy) is populated when default values are used.

Because the default matches generated for gRPC contain a single empty
`GRPCRouteMatch`, and that proto does not directly support prefix-based
config, an interpretation of the empty struct is needed to generate the
same output that the `HTTPRouteMatch` is explicitly configured to
provide in internal/mesh/internal/controllers/routes/generate.go.

* xds: Ensure protocol set for gRPC resources

Add explicit protocol in `ProxyStateTemplate` builders and validate it
is always set on clusters. This ensures that HTTP filters and
`http2_protocol_options` are populated in all the necessary places for
gRPC traffic and prevents future unintended omissions of non-TCP
protocols.

Co-authored-by: John Murret <john.murret@hashicorp.com>

---------

Co-authored-by: John Murret <john.murret@hashicorp.com>

* Add clarification for route match behavior (#19363)

* Add clarification for route match behavior

* Update website/content/docs/connect/config-entries/service-defaults.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

---------

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Fix 1.17.x release notes and added templated policies (#19370)

* docs - release notes (add enterprise label and example of non compatible service name) (#19377)

* Update v1_17_x.mdx

* Update v1_17_x.mdx

* feat: read resource namespace (#19320)

* test: add missing tests for read resource namespace

* refactor: remove redundant test

* refactor: rename import aliases

* fix: typo var name

* refctor: remove another redundant test

* Net 5875 - Create the Exported Services Resources (#19117)

* init

* computed exported service

* make proto

* exported services resource

* exported services test

* added some tests and namespace exported service

* partition exported services

* computed service

* computed services tests

* register types

* fix comment

* make proto lint

* fix proto format make proto

* make codegen

* Update proto-public/pbmulticluster/v1alpha1/computed_exported_services.proto

Co-authored-by: Eric Haberkorn <erichaberkorn@gmail.com>

* Update internal/multicluster/internal/types/computed_exported_services.go

Co-authored-by: Eric Haberkorn <erichaberkorn@gmail.com>

* using different way of resource creation in tests

* make proto

* fix computed exported services test

* fix tests

* differnet validation for computed services for ent and ce

* Acls for exported services

* added validations for enterprise features in ce

* fix error

* fix acls test

* Update internal/multicluster/internal/types/validation_exported_services_ee.go

Co-authored-by: Eric Haberkorn <erichaberkorn@gmail.com>

* removed the create method

* update proto

* removed namespace

* created seperate function for ce and ent

* test files updated and validations fixed

* added nil checks

* fix tests

* added comments

* removed tenancy check

* added mutation function

* fix mutation method

* fix list permissions in test

* fix pr comments

* fix tests

* lisence

* busl license

* Update internal/multicluster/internal/types/helpers_ce.go

Co-authored-by: Eric Haberkorn <erichaberkorn@gmail.com>

* Update internal/multicluster/internal/types/helpers_ce.go

Co-authored-by: Eric Haberkorn <erichaberkorn@gmail.com>

* Update internal/multicluster/internal/types/helpers_ce.go

Co-authored-by: Eric Haberkorn <erichaberkorn@gmail.com>

* make proto

* some pr comments addressed

* some pr comments addressed

* acls helper

* some comment changes

* removed unused files

* fixes

* fix function in file

* caps

* some positioing

* added test for validation error

* fix names

* made valid a function

* remvoed patch

* removed mutations

* v2 beta1

* v2beta1

* rmeoved v1alpha1

* validate error

* merge ent

* some nits

* removed dup func

* removed nil check

---------

Co-authored-by: Eric Haberkorn <erichaberkorn@gmail.com>

* test: add missing tests for list endpoint (#19364)

* Add enterprise label for rate limiting (#19384)

* test deployer: fix a bug when deploying cluster with various ent images (#19381)

* Stop use of templated-policy and templated-policy-file simultaneously (#19389)

* Resource Hook Pre-Decode Utilities (#18548)

Add some generic type hook wrappers to first decode the data

There seems to be a pattern for Validation, Mutation and Write Authorization hooks where they first need to decode the Any data before doing the domain specific work.

This PR introduces 3 new functions to generate wrappers around the other hooks to pre-decode the data into a DecodedResource and pass that in instead of the original pbresource.Resource.

This PR also updates the various catalog data types to use the new hook generators.

* resource: resource service now checks for `v2tenancy` feature flag (#19400)

* Fix casing in example yaml config (#19369)

* Delete unused files (#19402)

* NET-6294 - v1 Agentless proxycfg datasource errors after v2 changes (#19365)

* increasing unit tests timeout from 10m to 30m (#19423)

* [NET-6138] security: Bump `google.golang.org/grpc` to 1.56.3 (CVE-2023-44487) (#19414)

Bump google.golang.org/grpc to 1.56.3

This resolves [CVE-2023-44487](https://nvd.nist.gov/vuln/detail/CVE-2023-44487).

Co-authored-by: Chris Thain <chris.m.thain@gmail.com>

* Update multi-port examples to remove spec.template.metadata.name (#19430)

* integ test: snapshot mesh frozen bug test (#19435)

* integ test: snapshot mesh frozen bug test

* docs - Update k8s compat matrix  (#19378)

* Update compatibility.mdx

* Update README.md (#19413)

* Fix typo in kind for JWT config on API Gateway (#19441)

* NET-5186 Add NET_BIND_SERVICE capability to consul-dataplane requirements (#18512)

* Add NET_BIND_SERVICE capability to list of consul-dataplane requirements

* Add link to related Kubernetes documentation

Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>

---------

Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>

* added redirect for conf entries 1.8.x (#19460)

* Update docs for service splitter example typo (#19469)

* Regen expired test certs (#19476)

* build: ensure we pull through the hashicorp proxy instead of going directly to the docker hub (#19482)

* temporarily disallow L7 traffic permissions (#19322)

* Source / local_app golden tests to include all protocols. (#19436)

* cover all protocols in local_app golden tests

* fix xds tests

* updating latest

* fix broken test

* add sorting of routers to TestBuildLocalApp to get rid of the flaking

* [NET-5916] Fix locality-aware routing config and tests (CE) (#19483)

Fix locality-aware routing config and tests

* testing/deployer: update deployer to use v2 catalog constructs when requested (#19046)

This updates the testing/deployer (aka "topology test") framework to conditionally 
configure and launch catalog constructs using v2 resources. This is controlled via a 
Version field on the Node construct in a topology.Config. This only functions for a 
dataplane type and has other restrictions that match the rest of v2 (no peering, no 
wanfed, no mesh gateways).

Like config entries, you can statically provide a set of initial resources to be synced 
when bringing up the cluster (beyond those that are generated for you such as 
workloads, services, etc).

If you want to author a test that can be freely converted between v1 and v2 then that 
is possible. If you switch to the multi-port definition on a topology.Service (aka 
"workload/instance") then that makes v1 ineligible.

This also adds a starter set of "on every PR" integration tests for single and multiport 
under test-integ/catalogv2

* resource: misc finalizer apis (#19474)

* testing/deployer: support tproxy in v2 for dataplane (#19094)

This updates the testing/deployer (aka "topology test") framework to allow for a 
v2-oriented topology to opt services into enabling TransparentProxy. The restrictions 
are similar to that of #19046

The multiport Ports map that was added in #19046 was changed to allow for the 
protocol to be specified at this time, but for now the only supported protocol is TCP 
as only L4 functions currently on main.

As part of making transparent proxy work, the DNS server needed a new zonefile 
for responding to virtual.consul requests, since there is no Kubernetes DNS and 
the Consul DNS work for v2 has not happened yet. Once Consul DNS supports v2 we should switch over. For now the format of queries is:

<service>--<namespace>--<partition>.virtual.consul

Additionally:

- All transparent proxy enabled services are assigned a virtual ip in the 10.244.0/24
  range. This is something Consul will do in v2 at a later date, likely during 1.18.
- All services with exposed ports (non-mesh) are assigned a virtual port number for use
  with tproxy
- The consul-dataplane image has been made un-distroless, and gotten the necessary
  tools to execute consul connect redirect-traffic before running dataplane, thus simulating
  a kubernetes init container in plain docker.

* update v2 changelog (#19446)

* NET-6319 - L7 routes have statePrefix of upstream. and should have a full path (#19473)

* resource: finalizer aware delete endpoint (2 of 5) (#19493)

resource: make delete endpoint finalizer aware

* build: dependency updates for 1.17.0 (#19453)

* chore: apply enterprise changes that were missed to some testing files (#19504)

This should align between CE ef35525 and ENT 7f95226dbe40151c8f17dd4464784b60cf358dc1 in:

- testing/integration/consul-container
- test-integ
- testing/deployer

* Net-6291/fix/watch resources (#19467)

* fix: update watch endpoint to default based on scope

* test: additional test

* refactor: rename list validate function

* refactor: rename validate<Op>Request() -> ensure<Op>RequestValid() for consistency

* XDS V1 should not make runs for TCP Disco Chains. (#19496)

* XDS V1 should not make runs for TCP Disco Chains.

* update TestEnvoyExtenderWithSnapshot

* testing: disable v2 linkage to nodes in integration tests (#19509)

* Shuffle CICD tests to spread worker load. (#19501)

* [NET-6459] Fix issue with wanfed lan ip conflicts. (#19503)

Fix issue with wanfed lan ip conflicts.

Prior to this commit, the connection pools were unaware which datacenter the
connection was associated with. This meant that any time servers with
overlapping LAN IP addresses and node shortnames existed, they would be
incorrectly co-located in the same pool. Whenever this occurred, the servers
would get stuck in an infinite loop of forwarding RPCs to themselves (rather
than the intended remote DC) until they eventually run out of memory.

Most notably, this issue can occur whenever wan federation through mesh
gateways is enabled.

This fix adds extra metadata to specify which DC the connection is associated
with in the pool.

* CC-5545: Side Nav (#19342)

* Initial work for sidenav

* Use HDS::Text

* Add resolution for ember-element-helper

* WIP dc selector

* Update HCP Home link

* DC selector

* Hook up remaining selectors

* Fix settings and tutorial links

* Remove comments

* Remove skip-links

* Replace auth with new dropdown

* Use href-to helper for sidenav links

* Changelog

* Add description to NavSelector

* Wrap version in footer and role

* Fix login tests

* Add data-test selectors for namespaces

* Fix datacenter disclosure menu test

* Stop rendering auth dialog if acls are disabled

* Update disabled selector state and token selector

* Fix logic in ACL selector

* Fix HCP Home integration test

* Remove toggling the sidenav in tests

* Add sidenav to eng docs

* Re-add debug navigation for eng docs

* Remove ember-in-viewport

* Remove unused styles

* Upgrade @hashicorp/design-system-componentseee

* Add translations for side-nav

* Only show back to hcp link if url is present

* Disable responsive due to a11y-dialog issue

* fixed typos in redirect for api gateways (#19526)

* fixed typos in redirect for api gateways

* one more typo

* final typo

* [NET-5916] Update locality-aware routing docs (#19529)

* docs: Update locality-aware routing docs

- Align locality-aware routing documentation to the recommended use of the
feature and incorporate engineer feedback.
- Remove docs for unreleased multi-cluster failover feature.
- Fix minor typos and formatting in examples.

* docs: Fix rate limit docs typo

* [NET-5916] docs: Remove locality proxy startup section (#19534)

docs: Remove locality proxy startup section

This section is not necessary as it is not unique to the feature. The
instructions for starting proxies are available in other pages.

* Ci upgrade test 1 17 (#19536)

CI: upgrade test from 1.17

* added 1.17 features to enterprise overview (#19514)

* added 1.17 features to enterprise overview

* added features to runtime tables

* Apply suggestions from code review

Co-authored-by: David Yu <dyu@hashicorp.com>

* add ecs release notes

* add draft of 1.3.x consul-k8s release notes

* update nav with new release notes

* Apply suggestions from code review

Co-authored-by: Jeff Apple <79924108+Jeff-Apple@users.noreply.github.com>

---------

Co-authored-by: David Yu <dyu@hashicorp.com>
Co-authored-by: Tu Nguyen <im2nguyen@gmail.com>
Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com>
Co-authored-by: Jeff Apple <79924108+Jeff-Apple@users.noreply.github.com>

* Added tenancy tests for WorkloadHealth controller (#19530)

* v2tenancy: register tenancy controller deps (#19531)

* NET-6385 - Static routes that are inlined in listener filters are also created as a resource. (#19459)

* cover all protocols in local_app golden tests

* fix xds tests

* updating latest

* fix broken test

* add sorting of routers to TestBuildLocalApp to get rid of the flaking

* cover all protocols in local_app golden tests

* cover all protocols in local_app golden tests

* cover all protocols in local_app golden tests

* process envoy resource by walking the map.  use a map rather than array for envoy resource to prevent duplication.

* cleanup.  doc strings.

* update to latest

* fix broken test

* update tests after adding sorting of routers in local_app builder tests

* do not make endpoints for local_app

* fix catalog destinations only by creating clusters for any cluster not already created by walking the graph.

* Configure TestAllResourcesFromSnapshot to run V2 tests

* wip

* fix processing of failover groups

* add endpoints and clusters for any clusters that were not created from walking the listener -> path

* fix xds v2 golden files for clusters to include failover group clusters

* Add kubebuilder annotations to enums (#19454)

* test: fix some of the peering topology tests to safely run without tenancy in CE (#19540)

* Fix more test flakes (#19533)

Fix flaky snapshot and metrics tests.

* NET-6390 Initialize MeshGateway proto (#19548)

* ui: clear peer on home link (#19549)

Clear peer on home link

* test: update makefile to include ways to trigger deployer integration tests (#19553)

* test: update deployer default images (#19554)

* test: update certs for 10 year expiry (#19481)

* Update enterprise features table with 1.17 features (#19558)

update enterprise features table with 1.17 features

* Fix typo in GatewayClassConfig docs (#19563)

* Fix typo in GatewayClassConfig docs

* Fix broken links

* docs: Multi-port support for v1.17 GA (#19401)

* Catalog concept pages creation

* Multi-port services overview - initial update

* CLI command page creation

* File location adjustment

* nav

* New resource pages - creation

* nav fix

* resource info

* specs start

* GRPCRoute specs and structure

* GRPCRoute configuration model

* gRPCRoute models and examples

* HTTP copy

* Resource configuration alignment

* Catalogs

* Deployment error fix

* HTTPRoute specs

* TCP Route specifications

* proxy configuration model

* ProxyConfiguration specifications

* Example

* basic traffic permissions info

* complete config structure

* tab spacing

* Traffic permission specifications

* Proxy config example description

* Apply suggestions from code review

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
Co-authored-by: Nitya Dhanushkodi <nitya@hashicorp.com>

* v1 concept review updates

* v2 catalog code review updates

* V2 catalog contraints + guidance

* Proxyconfiguration code review changes

* Apply suggestions from code review

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Nitya Dhanushkodi <nitya@hashicorp.com>
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Cross-apply suggestions to reference pages

* More code review suggestions

* comment fix

* Apply suggestions from code review

* Index + usage updates

* TCP clarification

* Minor fixes

* remove references to unsupported features

* comment fix

* Resource command section removed

* Tested instructions

* More updates based on testing

* Apply suggestions from code review

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Multi-port traffic permissions example

* parent/child alignment

* Dataplanes requirement

* Update version-specific CLI install

* Method 2

* Tab fix

* Consul resource updates

* nav fix

* Catalog groups

* Catalog `group` info

* time formatting

---------

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
Co-authored-by: Nitya Dhanushkodi <nitya@hashicorp.com>

* test/deployer: add the method of deregistering services (#19525)

* add DeliverLatest as common function for use by Manager and ProxyTracker Open (#19564)

Open
add DeliverLatest as common function for use by Manager and ProxyTracker

* Migrate individual resource tests for Discovery Chains to TestAllResourcesFromSnapshot (#19508)

migrate disco chain tests to resources_test.go

* NET 6354 - Add tenancy in Node Health Controller (#19457)

* node health controller tenancy

* some prog

* some fixes

* revert

* pr comment resolved

* removed name

* cleanup nodes

* some fixes

* merge main

* Migrate individual resource tests for expose paths and checks to TestAllResourcesFromSnapshot (#19513)

* migrate expose checks and paths  tests to resources_test.go

* fix failing expose paths tests

* Introduce randomized timings and reproducible randomization into controller integration tests. (#19393)

As the V2 architecture hinges on eventual consistency and controllers reconciling the existing state in response to writes, there are potential issues we could run into regarding ordering and timing of operations. We want to be able to guarantee that given a set of resources the system will always eventually get to the desired correct state. The order of resource writes and delays in performing those writes should not alter the final outcome of reaching the desired state.

To that end, this commit introduces arbitrary randomized delays before performing resources writes into the `resourcetest.Client`. Its `PublishResources` method was already randomizing the order of resource writes. By default, no delay is added to normal writes and deletes but tests can opt-in via either passing hard coded options when creating the `resourcetest.Client` or using the `resourcetest.ConfigureTestCLIFlags` function to allow processing of CLI parameters.

In addition to allowing configurability of the request delay min and max, the client also has a configurable random number generator seed. When Using the CLI parameter helpers, a test log will be written noting the currently used settings. If the test fails then you can reproduce the same delays and order randomizations by providing the seed during the previous test failure.

* Migrate individual resource tests for custom configuration to TestAllResourcesFromSnapshot (#19512)

* Configure TestAllResourcesFromSnapshot to run V2 tests

* migrate custom configuration tests to resources_test.go

* Update Helm docs for consul-k8s 1.3.0 (#19581)

* Migrate individual resource tests for JWT Provider to TestAllResourcesFromSnapshot (#19511)

migrate jwt provider tests to resources_test.go

* Fix xds v2 from creating envoy endpoint resources when already inlined in the cluster (#19580)

* migrate expose checks and paths  tests to resources_test.go

* fix failing expose paths tests

* fix the way endpoint resources get created to make expose tests pass.

* wip

* remove endpoint resources that are already inlined on local_app clusters

* renaiming and comments

* test: add a v2 container integration test of xRoute splits (#19570)

This adds a deployer-based integration test verifying that a 90/10 traffic
split works for: HTTPRoute, GRPCRoute, and TCPRoute.

* docs: spike of info about how to use deployer topology tests (#19576)

Initial developer facing docs on how to write deployer (topology) integration tests.

* Net 6439 (#19517)

* node health controller tenancy

* some prog

* some fixes

* revert

* pr comment resolved

* removed name

* Add namespace and tenancy in sidecar proxy controller test

* revert node health controller

* clean up data

* fix local

* copy from ENT

* removed dup code

* removed tenancy

* add test tenancies

* Migrate individual resource tests for Terminating Gateway to TestAllResourcesFromSnapshot (#19505)

migrate terminating-gateway tests to resources_test.go

* Migrate individual resource tests for Ingress Gateway to TestAllResourcesFromSnapshot (#19506)

migrate ingress-gateway tests to resources_test.go

* Update links and fix route kind for APIGW JWT Docs (#19585)

Update links and fix route kind

* Migrate individual resource tests for Mesh Gateway to TestAllResourcesFromSnapshot (#19502)

migrate mesh-gateway tests to resources_test.go

* Migrate individual resource tests for API Gateway to TestAllResourcesFromSnapshot (#19584)

migrate individual api gateway tests to resources_test.go

* toil: use pre-commit to maintain properly formatted imports (#17940)

pro-tip: you can skip this check with `git commit --no-verify` to override

* [NET-6356] Add tenancy to Failover Tests (#19547)

* [NET-6356] Add tenancy to Failover Tests

* [NET-6438] Add tenancy to xDS Tests
- Added cleanup post test run

* [NET-6356] Add tenancy to failover Tests
- using t.Cleanup instead of defer delete

* [NET-6444] Add tenancy to Reaper Tests (#19550)

* Migrate remaining individual resource tests for service mesh to TestAllResourcesFromSnapshot (#19583)

* migrate expose checks and paths tests to resources_test.go

* fix failing expose paths tests

* fix the way endpoint resources get created to make expose tests pass.

* remove endpoint resources that are already inlined on local_app clusters

* renaming and comments

* migrate remaining service mesh tests to resources_test.go

* cleanup

* update proxystateconverter to skip adding alpn to clusters and listener filter to match v1 behavior

* NET-6391 Initialize controller for MeshGateway resource (#19552)

* Generate resource_types for MeshGateway by specifying spec option

* Register MeshGateway type w/ TODOs for hooks

* Initialize controller for MeshGateway resources

* Add meshgateway to list of v2 resource dependencies for golden test

* Scope MeshGateway resource to partition

* REPLAT-962 Update LICENSE text (#19574)

* NET-5414: sameness group service show (#19586)

Fix viewing peered services on different namespaces

* Integ test - use asserter (#19597)

* integ-test: use topoutil.asserter to replace custom code

* ui: move queries for selectors within the dropdowns (#19594)

Move queries for selectors within the dropdowns

* added exported svc controller (#19589)

* added exported svc controller

* added license headers

* [NET-6438] Add tenancy to xDS Tests (#19551)

* [NET-6438] Add tenancy to xDS Tests

* [NET-6438] Add tenancy to xDS Tests
- Fixing imports

* [NET-6438] Add tenancy to xDS Tests
- Added cleanup post test run

* [NET-6356] Add tenancy to xDS Tests
- Added cleanup post test run

* [NET-6438] Add tenancy to xDS Tests
- using t.Cleanup instead of defer delete

* [NET-6438] Add tenancy to xDS Tests
- rebased

* [NET-6438] Add tenancy to xDS Tests
- rebased

* testing/deployer: rename various terms to better align with v2 and avoid confusion (#19600)

Conceptually renaming the following topology terms to avoid confusion with v2 and to better align with it:

- ServiceID -> ID
- Service -> Workload
- Upstream -> Destination

* docs: Fix nav link for L7 traffic (#19606)

* fix

* Sameness merge error

* test: add test helper to detect if the current build is enterprise (#19201)

This can help conditionally change test behavior if the test is executing in enterprise without the need to split the test with build tags.

* unhack: fix broken `make lint` on macbooks (#19611)

Fix broken `make lint` on macbooks

* DNS token doc updates (#19592)

* DNS token doc updates

Review feedback

Update website/content/commands/acl/set-agent-token.mdx

Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com>

Update dns token doc to link to policy templates

* PR feedback updates

* [NET-6232] docs: Update consul-k8s Helm chart docs (#19577)

docs: Update consul-k8s Helm chart docs

Sync docs for several recent changes to the Helm chart from
`consul-k8s`.

* Fix parts of admin-partitions guide (#19621)

* fix runner count logic in set_test_package_matrix.sh so it stops adding an additional runner (#19620)

* fix runner count logic in set_test_package_matrix.sh so it stops adding an additional runner

* use ceil instead of floor

* [Docs] Update admin-partitions.mdx (#18430)

* [Docs] Update admin-partitions.mdx

Adding a note on DNS queries requiring the presence of a Consul Client in the Admin partition

The consul-dns endpoints are the consul clients and servers, as seen in the Helm chart consul/templates/dns-service.yaml:
  selector:
    app: {{ template "consul.name" . }}
    release: "{{ .Release.Name }}"
    hasDNS: "true"

All components have the first two labels, app and release, but only consul clients and servers have the last one, hasDNS, so the selector will only match clients and servers:
grep hasDNS ./* 2> /dev/null
./client-daemonset.yaml:      hasDNS: "true"
./dns-service.yaml:    hasDNS: "true"
./server-statefulset.yaml:        hasDNS: "true"

* Update website/content/docs/enterprise/admin-partitions.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

---------

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com>

* Fix ACL permissions for ECS controller (#19636)

* unhack: remove consulprem build tag (#19633)

* Adds proto for the GatewayClass based on the GAMMA Kubernetes Sig (#19615)

* notify on go-tests failure on main and release branches. (#19640)

notify on failures in go-tests on main and release branches.

* NET 6442 - Add tenancy to explicit destinations controller (#19644)

Add tenancy to explicit destinations controller

* NET 6525 (#19645)

Removed resourcetest func

* NET 6539 - Add tenancy tests for folder - internal/mesh/internal/controllers/sidecarproxy (#19646)

* Add tenancy tests for folder - internal/mesh/internal/controllers/sidecarproxy

* removed rej files

* added missed out file

* upgrade test: remove duplicate test case (#19643)

* Updates GatewayClass protobuf to set optional fields to optional (#19648)

* Added tenancy tests for endpoints controller (#19650)

* Add tenancy tests for proxy cfg controller (#19649)

* test: fix some multiport deployer bugs and remove a container test already handled by deployer tests (#19614)

* unhack: add pre-commit guidelines (#19617)

* resource: freeze resources after marked for deletion (4 of 5) (#19603)

* NET-6550 generate stubs for GatewayClassConfig (#19602)

* NET-6550 generate stubs

* gatewayclassconfig

* fix minor spacing issue

* fix minor spacing issue

* convert to snake case

* add ports

* snakecase

* Timeout Docs Update (#19601)

* Update routetimeoutfilter.mdx

* Update website/content/docs/connect/gateways/api-gateway/configuration/routetimeoutfilter.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/connect/gateways/api-gateway/configuration/routetimeoutfilter.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/connect/gateways/api-gateway/configuration/routetimeoutfilter.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

---------

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Skip tests with p95 greater than 30 seconds outside of main and release branches. (#19628)

Skip tests with p95 greater than 30 seconds.

* Integ test (test/deployer): upgrade test with service mesh (#19658)

* Integ test (test/deployer): upgrade test with service mesh

* license

* Added Gatewayclassconfig resource type to proto package (#19664)

resource type + regen

* Add stub for MeshConfiguration proto (#19642)

* Add mesh_configuration.proto

* Run make proto

* Add cluster scope to MeshConfiguration

* Run make proto

* [NET-6103] Enable query tokens by service name using templated policy (#19666)

* Integ test: enable upgrade test deployer 1.17 (#19669)

* integ test: add deployer upgrade test to 1.17.x nightly integ test

* checkout 1.17.x branch

* Add tests for traffic permissions controller (#19672)

* resource: preserve deferred deletion metadata on non-CAS writes (#19674)

* integ-test: fix upgrade test for CE (#19673)

* integ-test: fix upgrade test for CE

* Add engineering docs for controllers and v2 architecture (#19671)

* add controller docs

* add v2 service mesh docs

* fix: temporary remove token policy test (#19683)

* Update ECS compat matrix (#19675)

* fix: remove 2 tests to unblock consul-enterprise merges (#19687)

* [NET-6640] Adds "Policy" BindType to BindingRule (#19499)

feat: add bind type of policy

Co-authored-by: Ronald Ekambi <ronekambi@gmail.com>

* [NET-6640] Add docs for binding type policy (#19677)

* fix a panic in the CLI when deleting an acl policy with an unknown name (#19679)

* fix a panic in the CLI when deleting an acl policy with an unknown name

* add changelog

* Switch to github-actions format (#19667)

* Switch to github-actions format for integration tests (#19693)

* do not auto merge backports (#19694)

do not auto-merge backports, as there is a bug in backport assistant that could merge the entirety of main into release branches.

* Default "stats_flush_interval" to 1 minute for Consul Telemetry Collector (#19663)

* Set default of 1m for StatsFlushInterval when the collector is setup

* Add documentation on the stats_flush_interval value

* Do not apply the default in two conditions: 1) preconfigured sinks exist, 2) a preconfigured flush interval exists

* Fix wording of docs

* Add changelog

* Fix docs

* [SECVULN-1532] chore: Remove TODO comments for OIDC/JWT auth (#19700)

chore: Remove TODO comments for OIDC/JWT auth

Remove old TODO comments and update remaining comments for clarity.

* optimized fetching services in exported service controller (#19695)

* optimized fetching services in exported service controller

* added aliases for some complex types

* [SECVULN-1533] chore: Clarify iptables Provider interface docs (#19704)

chore: Clarify iptables Provider interface docs

Add docs clarifying constraints on use and return values.

* cli: add a string method to gvk struct (#19696)

* chore: add suffix to consul version in sidenav (#19660)

* v2tenancy: namespace deletion using finalizers (#19714)

* Add tenancy tests for routes controller (#19706)

* Fix failing test in command/resource/read (#19722)

* Add docs for identity acl rules (#19713)

* feat: create a default namespace (#19681)

* feat: create a default namespace on leader

* refactor: add comment and move inittenancy to leader file

* refactor: rephrase comment

* NET-6251 API gateway templated policy (#19728)

* [NET-6249] Add templated policies description (#19735)

* [NET-6617] security: Bump github.com/golang-jwt/jwt/v4 to 4.5.0 (#19705)

security: Bump github.com/golang-jwt/jwt/v4 to 4.5.0

This version is accepted by Prisma/Twistlock, resolving scan results for
issue PRISMA-2022-0270. Chosen over later versions to avoid a major
version with breaking changes that is otherwise unnecessary.

Note that in practice this is a false positive (see
https://github.com/golang-jwt/jwt/issues/258), but we should update the
version to aid customers relying on scanners that flag it.

* Adds GatewayClassName field to MeshGateway Proto (#19738)

* resource: ListByOwner returns empty list on non-existent tenancy (#19742)

* add nightly integ tests for peering_commontopo [NET-6628] (#19724)

* ci: Run `go mod tidy` check on submodules (#19744)

Today, we do not enforce a clean `go mod tidy` on submodules. This
allows for drift and can eventually lead to `golangci-lint` failures,
along with the obvious disadvantage of not having an up-to-date
`go.mod`.

Enforce clean `go mod tidy` on all `go.mod` by using our make target
rather than the direct root-level command.

* added ent to ce downgrade doc (#19590)

* added ent to ce downgrade doc

* minor fix

* formatting fixes

* fixed doc path

* reformat doc

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>

* added reason for panic in doc

* fixed linking page

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* reformatted nav data

* updated the downgrade steps

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/data/docs-nav-data.json

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* fixed review comments

* fixed typo

* minor fix

* minor fix

* some rewording in downgrade details

* fixed minor fmt issues

* minor fmt

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* changed ```shell to ```shell-session

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* fixed some fmt issues

* fmt doc

* minor text fix

* fmt doc

* fix fmt

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/ent-to-ce-downgrades.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* added prompt to all shell commands

* fix fmt

* fixed indentation

* Reformatted for consistency with our writing styles

* Apply suggestions from code review

Few more tweaks

---------

Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>
Co-authored-by: David Yu <dyu@hashicorp.com>
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
Co-authored-by: trujillo-adam <ajosetru@gmail.com>

* Move test setup out of subtest (#19753)

* [NET-6420] Add MeshConfiguration Controller stub (#19745)

* Add meshconfiguration/controller

* Add MeshConfiguration Registration function

* Fix the TODOs on the RegisterMeshGateway function

* Call RegisterMeshConfiguration

* Add comment to MeshConfigurationRegistration

* Add a test for Reconcile and some comments

* grpc client default in plaintext mode (#19412)

* grpc client default in plaintext mode

* renaming and fix linter

* update the description and remove the context

* trim tests

* [NET-5916] docs: Add locality examples and troubleshooting (#19605)

docs: Add locality examples and troubleshooting

Add further examples and tips for locality-aware routing configuration,
observability, and troubleshooting.

* license file updates (#19750)

* Add Kubebuilder tags to Gatewayclassconfig proto messages (#19725)

* add build tags/import k8s specific proto packages

* fix generated import paths

* fix gomod linting issue

* mod tidy every go mod file

* revert protobuff version, take care of in different pr

* cleaned up new lines

* added newline to end of file

* [NET-5688] APIGateway UI Topology Fixes (#19657)

* Update catalog and ui endpoints to show APIGateway in gateway service
topology view

* Added initial implementation for service view

* updated ui

* Fix topology view for gateways

* Adding tests for gw controller

* remove unused args

* Undo formatting changes

* Fix call sites for upstream/downstream gw changes

* Add config entry tests

* Fix function calls again

* Move from ServiceKey to ServiceName, cleanup from PR review

* Add additional check for length of services in bound apigateway for
IsSame comparison

* fix formatting for proto

* gofmt

* Add DeepCopy for retrieved BoundAPIGateway

* gofmt

* gofmt

* Rename function to be more consistent

* [NET-6725] test: Address occasional flakes in sidecarproxy/controller_test.go (#19760)

test: Address occasional flakes in sidecarproxy/controller_test.go

We've observed an occasional flake in this test where some state check
fails. Adding in some wait wrappers to these state checks will hopefully
address the issue, assuming it is a simple flake.

* docs: Rename locality docs observe section to verification (#19769)

* docs: Rename locality docs observe section to verification

Follow-up to #19605 review.

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* [V2] Move resource field on gateway class config from repeated map to single map (#19773)

Move resource field on gateway class config from repeated map to single
map

* Docs: FIPS - add cluster peering info (#19768)

* Docs: FIPS - add cluster peering info

* Update website/content/docs/enterprise/fips.mdx

Co-authored-by: David Yu <dyu@hashicorp.com>

* Update website/content/docs/enterprise/fips.mdx

Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>

* Update website/content/docs/enterprise/fips.mdx

Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>

---------

Co-authored-by: David Yu <dyu@hashicorp.com>
Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>

* Pin lint-consul-retry to v1.3.0 (#19781)

Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com>

* resource: add v2tenancy feature flag to deployer tests (#19774)

* Remove Duplicate UBI Tags (#19737)

- Amalgamate UBI with Dockerhub and Redhat tags into one step
- Avoids a production incident that errors on duplicate tags:
https://github.com/hashicorp/releng-support/issues/123

* NET-6692: Ensure 'upload test results' step is always run (#19783)

* made node partition scoped (#19794)

* made node partition scoped

* removed namespace from node testdata

* Net 6585 (#19797)

Add multi tenancy to sidecar proxy controller

* docs: improvements to v2 catalog explanation (#19678)

* commit

* Addresses comments from review

* added node health resource (#19803)

* [Cloud][CC-6925] Updates to pushing server state (#19682)

* Upgrade hcp-sdk-go to latest version v0.73

Changes:
- go get github.com/hashicorp/hcp-sdk-go
- go mod tidy

* From upgrade: regenerate protobufs for upgrade from 1.30 to 1.31

Ran: `make proto`

Slack: https://hashicorp.slack.com/archives/C0253EQ5B40/p1701105418579429

* From upgrade: fix mock interface implementation

After upgrading, there is the following compile error:

cannot use &mockHCPCfg{} (value of type *mockHCPCfg) as "github.com/hashicorp/hcp-sdk-go/config".HCPConfig value in return statement: *mockHCPCfg does not implement "github.com/hashicorp/hcp-sdk-go/config".HCPConfig (missing method Logout)

Solution: update the mock to have the missing Logout method

* From upgrade: Lint: remove usage of deprecated req.ServerState.TLS

Due to upgrade, linting is erroring due to usage of a newly deprecated field

22:47:56 [consul]: make lint
--> Running golangci-lint (.)
agent/hcp/testing.go:157:24: SA1019: req.ServerState.TLS is deprecated: use server_tls.internal_rpc instead. (staticcheck)
                time.Until(time.Time(req.ServerState.TLS.CertExpiry)).Hours()/24,
                                     ^

* From upgrade: adjust oidc error message

From the upgrade, this test started failing:

=== FAIL: internal/go-sso/oidcauth TestOIDC_ClaimsFromAuthCode/failed_code_exchange (re-run 2) (0.01s)
    oidc_test.go:393: unexpected error: Provider login failed: Error exchanging oidc code: oauth2: "invalid_grant" "unexpected auth code"

Prior to the upgrade, the error returned was:
```
Provider login failed: Error exchanging oidc code: oauth2: cannot fetch token: 401 Unauthorized\nResponse: {\"error\":\"invalid_grant\",\"error_description\":\"unexpected auth code\"}\n
```

Now the error returned is as below and does not contain "cannot fetch token"
```
Provider login failed: Error exchanging oidc code: oauth2: "invalid_grant" "unexpected auth code"

```

* Update AgentPushServerState structs with new fields

HCP-side changes for the new fields are in:
https://github.com/hashicorp/cloud-global-network-manager-service/pull/1195/files

* Minor refactor for hcpServerStatus to abstract tlsInfo into struct

This will make it easier to set the same tls-info information to both
 - status.TLS (deprecated field)
 - status.ServerTLSMetadata (new field to use instead)

* Update hcpServerStatus to parse out information for new fields

Changes:
 - Improve error message and handling (encountered some issues and was confused)
 - Set new field TLSInfo.CertIssuer
 - Collect certificate authority metadata and set on TLSInfo.CertificateAuthorities
 - Set TLSInfo on both server.TLS and server.ServerTLSMetadata.InternalRPC

* Update serverStatusToHCP to convert new fields to GNM rpc

* Add changelog

* Feedback: connect.ParseCert, caCerts

* Feedback: refactor and unit test server status

* Feedback: test to use expected struct

* Feedback: certificate with intermediate

* Feedback: catch no leaf, remove expectedErr

* Feedback: update todos with jira ticket

* Feedback: mock tlsConfigurator

* skip TestCatalogUpgrade for consul versions < 1.18.0 (#19811)

skip TestCatalogUpgrade for consul versions < 1.18.0

* ci: fix test failure Slack notifications (#19766)

- Skip notifications for cancelled workflows. Cancellation can be
manual or caused by branch concurrency limits.
- Fix multi-line JSON parsing error by only printing the summary line
of the commit message. We do not need more than this in Slack.
- Update Slack webhook name to match purpose.

* resource: block default namespace deletion + test refactorings (#19822)

* doc: clarify the portNames used in trafficpermission V2 (#19807)

* doc: clarify the portNames used in trafficpermission V2 and fix broken links and examples

* NET-3860 - [Supportability] consul troubleshoot CLI for verifying ports (#18329)

* init

* udp

* added support for custom port

* removed grpc

* rename constants

* removed udp

* added change log

* fix synopsis

* pr comment changes

* make private

* added tests

* added one more test case

* defer close results channel

* removed unwanted comment

* license update

* updated docs

* fix indent

* fix path

* example update

* Update website/content/commands/troubleshoot/ports.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/commands/troubleshoot/ports.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update command/troubleshoot/ports/troubleshoot_ports.go

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/commands/troubleshoot/ports.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/commands/troubleshoot/index.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update command/troubleshoot/ports/troubleshoot_ports.go

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update command/troubleshoot/ports/troubleshoot_ports.go

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/commands/troubleshoot/ports.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/commands/troubleshoot/ports.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update website/content/commands/troubleshoot/ports.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* pr comment resolved

---------

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* update l7 explicit dest test to test cross tenancy (#19834)

* [NET-6251] Nomad client templated policy (#19827)

* Retry lint fixes (#19151)

* Add a make target to run lint-consul-retry on all the modules
* Cleanup sdk/testutil/retry
* Fix a bunch of retry.Run* usage to not use the outer testing.T (see the sketch after this list)
* Fix some more recent retry lint issues and pin to v1.4.0 of lint-consul-retry
* Fix codegen copywrite lint issues
* Don’t perform cleanup after each retry attempt by default.
* Use the common testutil.TestingTB interface in test-integ/tenancy
* Fix retry tests
* Update otel access logging extension test to perform requests within the retry block
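
A sketch of the corrected retry pattern, assuming the `sdk/testutil/retry` API named in these commits (`lookupServiceCount` is a hypothetical helper): assertions inside the retry block must fail via the inner `*retry.R`, not the outer `*testing.T`, otherwise the first transient failure aborts the test instead of being retried.

```go
package example

import (
	"testing"

	"github.com/hashicorp/consul/sdk/testutil/retry"
)

// lookupServiceCount is a stand-in for whatever state the test polls.
func lookupServiceCount() (int, error) { return 3, nil }

func TestServicesEventuallyRegistered(t *testing.T) {
	retry.Run(t, func(r *retry.R) {
		n, err := lookupServiceCount()
		if err != nil {
			r.Fatalf("lookup failed: %v", err) // retried, not fatal to t
		}
		if n != 3 {
			r.Fatalf("want 3 registered services, got %d", n)
		}
	})
}
```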

* improve client RPC metrics consistency (#19721)

The client.rpc metric now excludes internal retries for consistency
with client.rpc.exceeded and client.rpc.failed. All of these metrics
now increment at most once per RPC method call, allowing accurate
calculation of how often failures and rate limiting occur.

Additionally, if an RPC fails because no servers are present,
client.rpc.failed is now incremented.
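
A sketch of the counting rule, not the actual Consul client code (`doRPC` and `retryable` are stand-ins for the real client plumbing, and the go-metrics import path is an assumption):

```go
package example

import metrics "github.com/hashicorp/go-metrics"

// clientRPC increments client.rpc exactly once per method call, before
// the internal retry loop, so it stays consistent with client.rpc.failed.
func clientRPC(doRPC func() error, retryable func(error) bool) error {
	metrics.IncrCounter([]string{"client", "rpc"}, 1) // once per call, not per attempt
	for {
		err := doRPC()
		if err == nil {
			return nil
		}
		if !retryable(err) {
			metrics.IncrCounter([]string{"client", "rpc", "failed"}, 1) // also at most once per call
			return err
		}
		// Internal retry against another server: deliberately not
		// re-counted in client.rpc.
	}
}
```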

* [NET-6650] Bump go version to 1.20.12 (#19840)

* NET-6643: upgrade test from 1.10 to 1.15 (lts) of a single cluster (#19847)

* NET-6643: upgrade test from 1.10 to 1.15 (lts) of a single cluster

* license header

* ci: fix escaping for Slack failure notifications (#19838)

Allow '()', '#', and other bash-interpretable special characters by
properly quoting the commit message when shortening.

* NET-6784: Adding cli command to list exported services to a peer (#19821)

* Adding cli command to list exported services to a peer

* Changelog added

* Addressing docs comments

* Adding test case for no exported services scenario

* chore: update changelog for patch releases (#19855)

* 1.16.3
* 1.15.7
* 1.14.11

* Net-6730/namespace intg test (#19798)

test: add intg test for namespace lifecycle

* Ensure that the default namespace always exists even prior to resource creation (#19852)

* parse config protocol on write to optimize disco-chain compilation (#19829)

* parse config protocol on write to optimize disco-chain compilation

* add changelog

* Add CE version of Gateway Upstream Disambiguation (#19860)

* Add CE version of gateway-upstream-disambiguation

* Use NamespaceOrDefault and PartitionOrDefault

* Add Changelog entry

* Remove the unneeded reassignment

* Use c.ID()

* Fix a test flake where a retry timer was being reused causing tests after the first to exit early (#19864)

Fix a test flake where a retry timer was being reused causing tests after the first to exit too early.

* upgrade test(LTS): add segments to version 1.10 (#19861)

* Fix xDS missing endpoint race condition. (#19866)

This fixes the following race condition:
- Send update endpoints
- Send update cluster
- Recv ACK endpoints
- Recv ACK cluster

Prior to this fix, it would have resulted in the endpoints NOT existing in
Envoy. This occurred because the cluster update implicitly clears the endpoints
in Envoy, but we would never re-send the endpoint data to compensate for the
loss, since we would incorrectly ACK the stale endpoint hash. Because the
endpoints' hash did not actually change, they would not be resent.

The fix for this is to effectively clear out the invalid pending ACKs for child
resources whenever the parent changes. This ensures that we do not store the
child's hash as accepted when the race occurs.
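
A hedged sketch of that idea (the types below are illustrative, not Consul's actual xDS server state): when the parent type is re-sent, both the pending and accepted hashes for the child type are dropped, so the child data is re-sent even though its own hash never changed.

```go
package example

type ackState struct {
	pendingHash map[string]string // type URL -> hash awaiting ACK
	ackedHash   map[string]string // type URL -> last hash the proxy ACKed
}

func newAckState() *ackState {
	return &ackState{pendingHash: map[string]string{}, ackedHash: map[string]string{}}
}

// sendParentUpdate records the parent's new hash and invalidates the
// child's state: the parent update implicitly wipes the child's data in
// Envoy, so any in-flight or stored child ACK no longer describes what
// Envoy actually holds.
func (s *ackState) sendParentUpdate(parentType, childType, parentHash string) {
	s.pendingHash[parentType] = parentHash
	delete(s.pendingHash, childType)
	delete(s.ackedHash, childType) // forces the child resources to be re-sent
}
```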

An escape-hatch environment variable `XDS_PROTOCOL_LEGACY_CHILD_RESEND` was
added so that users can revert back to the old legacy behavior in the event
that this produces unknown side-effects. Visit the following thread for some
extra context on why certainty around these race conditions is difficult:
https://github.com/envoyproxy/envoy/issues/13009

This bug report and fix was mostly implemented by @ksmiley with some minor
tweaks.

Co-authored-by: Keith Smiley <ksmiley@salesforce.com>

* ci: sanitize commit message for Slack failure alerts (#19876)

To ensure that shell code cannot be injected, capture the commit message
in an env var, then format it as needed.

Also fix several other issues with formatting and JSON escaping by
wrapping the entire message in a `toJSON` expression.

* security: update supported envoy version 1.28.0 in addition to 1.25.11, 1.26.6, 1.27.2, 1.28.0 to address CVE-2023-44487 (#19879)

* update too support envoy 1.28.0

* add changelog

* update docs

* Fix ClusterLoadAssignment timeouts dropping endpoints. (#19871)

When a large number of upstreams are configured on a single Envoy
proxy, there was a chance that it would time out when waiting for
ClusterLoadAssignments. While this doesn't always immediately cause
issues, consul-dataplane instances appear to consistently drop
endpoints from their configurations after an xDS connection is
re-established (the server dies, a random disconnect, etc.).

This commit adds an `xds_fetch_timeout_ms` config to service registrations
so that users can set the value higher for large instances that have
many upstreams. The timeout can be disabled by setting a value of `0`.

This configuration was introduced to reduce the risk of causing a
breaking change for users if there is ever a scenario where endpoints
would never be received. Rather than just always blocking indefinitely
or for a significantly longer period of time, this config will affect
only the service instance associated with it.
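
A sketch of opting in via the standard `api` package (the registration shape uses real `api` types; that `xds_fetch_timeout_ms` lives in the sidecar proxy's config map is inferred from this commit and the proxy-config-map docs below):

```go
package example

import "github.com/hashicorp/consul/api"

func registerWithLongerFetchTimeout() error {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		return err
	}
	reg := &api.AgentServiceRegistration{
		Name: "api",
		Port: 9090,
		Connect: &api.AgentServiceConnect{
			SidecarService: &api.AgentServiceRegistration{
				Proxy: &api.AgentServiceConnectProxyConfig{
					Config: map[string]interface{}{
						// Wait up to 2 minutes for xDS resources; 0 disables the timeout.
						"xds_fetch_timeout_ms": 120000,
					},
				},
			},
		},
	}
	return client.Agent().ServiceRegister(reg)
}
```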

* [NET-6842] splitting go version on different lines (#19887)

* Add documentation for proxy-config-map and xds_fetch_timeout_ms. (#19893)

Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>

* docs: Updates to required ports (#19755)

* improvements

* Anchor link fixes

* Apply suggestions from code review

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Explicit list of six ports

* Apply suggestions from code review

---------

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Remove warning for consul 1.17 deprecation (#19897)

* fix: remove test to unblock CI (#19908)

* NET 6761 (#19837)

NET-6761 explicit destinations tests updated

* NET-6771 - Adding sameness group protobuf in consul CE (#19883)

Adding sameness group protobuf in consul CE

* Refactor exported services controller tests (#19906)

* Move enterprise multicluster types to Register function  (#19913)

* Move enterprise types to Register function

* Fix function name

* Address comments

* Hash based config entry replication (#19795)

* add a hash to config entries when normalizing (see the sketch after this list)

* add GetHash and implement comparing hashes

* only update if the Hash is different

* only update if the Hash is different and not 0

* fix proto to include the Hash

* fix proto gen

* buf format

* add SetHash and fix tests

* fix config load tests

* fix state test and config test

* recalculate hash when restoring config entries

* fix snapshot restore test

* add changelog

* fix missing normalize, fix proto indexes and add normalize test
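
A sketch of the comparison rule these commits describe (the `ConfigEntry` interface here is a stand-in exposing only the `GetHash`/`SetHash` pair the commits mention, and the `uint64` hash type is an assumption):

```go
package example

type ConfigEntry interface {
	GetHash() uint64
	SetHash(h uint64)
}

// shouldReplicate skips the write when hashes match, but falls back to
// replicating when the hash is 0 (e.g. an entry written before hashing
// existed, which is why hashes are recalculated on snapshot restore).
func shouldReplicate(local, remote ConfigEntry) bool {
	if local == nil || remote.GetHash() == 0 {
		return true
	}
	return local.GetHash() != remote.GetHash()
}
```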

* NET-6900: stop reconciling services when peering is enabled (#19907)

stop reconciling services when peering is enabled

* fix: token list in Role details page is updated with tokens linked to… (#19912)

* fix actions to no longer use envoy 1.24.x to match supported versions. (#19918)

* resource: add partition resource to proto-public to keep ENT and CE in sync (#19920)

* docs: service rate limiting examples (#19925)

* Include examples on usage page.

* Description/example alignment

* Add Common Controller Caching Infrastructure (#19767)

* Add Common Controller Caching Infrastructure

* NET 6762 (#19931)

NET-6762

* Upgrade test(LTS): use network area to federate cluster (#19934)

- Join areas
- Wait for members alive and validate cross-area service discovery

* added tenancy to TestBuildL4TrafficPermissions (#19932)

* NET-6785: updating peering docs to include stream status and remote data (#19929)

Updating peering docs to include stream status and remote data

* Update telemetry.mdx RPC Metrics (#19593)

* Update telemetry.mdx RPC Metrics

Update the Server Workload telemetry section to demonstrate explicitly enabling metric emission, as these metrics are [disabled by default](https://github.com/hashicorp/consul/blob/f5bf256425e33c0da805eda6a2fc5ea05100d491/agent/config/builder.go#L2763C1-L2763C1).

* Update telemetry.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

* Update telemetry.mdx

Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>

--------…
dboslee added a commit to gravitational/gateway that referenced this issue Mar 19, 2024
It appears that when a cluster is updated without a subsequent endpoint
update, the cluster can lose its endpoint configuration.

I attempted to update endpoints on each cluster change to prevent
this, but it was still possible for the cluster update to happen after
the endpoint update.

This appears related to envoyproxy/envoy#13009

By using static clusters, the endpoints are included in the cluster update.
In testing, this appears to resolve the issue where endpoints disappear,
since we no longer rely on EDS.
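
For context, a sketch of that workaround with go-control-plane v3 types (the import aliases and helper function are illustrative): a `STATIC` cluster inlines its endpoints via `LoadAssignment`, so there is no separate EDS resource and no CDS/EDS ordering to race.

```go
package example

import (
	clusterv3 "github.com/envoyproxy/go-control-plane/envoy/config/cluster/v3"
	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	endpointv3 "github.com/envoyproxy/go-control-plane/envoy/config/endpoint/v3"
)

// staticCluster builds a cluster whose single endpoint ships inside the
// cluster itself, sidestepping EDS entirely.
func staticCluster(name, addr string, port uint32) *clusterv3.Cluster {
	return &clusterv3.Cluster{
		Name:                 name,
		ClusterDiscoveryType: &clusterv3.Cluster_Type{Type: clusterv3.Cluster_STATIC},
		LoadAssignment: &endpointv3.ClusterLoadAssignment{
			ClusterName: name,
			Endpoints: []*endpointv3.LocalityLbEndpoints{{
				LbEndpoints: []*endpointv3.LbEndpoint{{
					HostIdentifier: &endpointv3.LbEndpoint_Endpoint{
						Endpoint: &endpointv3.Endpoint{
							Address: &corev3.Address{
								Address: &corev3.Address_SocketAddress{
									SocketAddress: &corev3.SocketAddress{
										Address:       addr,
										PortSpecifier: &corev3.SocketAddress_PortValue{PortValue: port},
									},
								},
							},
						},
					},
				}},
			}},
		},
	}
}
```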