The defence-in-depth principle requires a protection layer between containers running separate microservices, even when they are deployed in the same security perimeter and on the same hardware server in a private cloud.
Need to implement a functional test and run it in CI for 3 containers running microservices which communicate via HTTP. Tempesta FW must run in the host system and, in collaboration with nft, enforce the following HTTP communication rules for the containers:
container 3 has no access to any port of container 1;
container 2 has access to container 1 only on port 80, and only for GET requests with URI prefix `/foo` or PUT requests with URI prefix `/bar` and a header `X-Bar: bar`;
Tempesta FW distributes traffic between containers 2 and 3 based on the Host header value;
Tempesta FW distributes traffic between containers 2 and 3 based on the URI path;
at least one microservice should be cached by Tempesta FW (see Where's my cache).
If HTTP tables do not allow the 2nd rule, then an enhancement issue must be created.
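A sketch of what the host-side Tempesta FW configuration for these rules could look like is below. Everything in it is an assumption for illustration: the container addresses (172.17.0.2–4 on a Docker bridge), the `srv_group`/`vhost` names, and the exact `uri`/`host` wildcard predicates should all be checked against the current HTTP tables grammar. The method + header condition of the 2nd rule is deliberately left as a comment, since HTTP tables may not be able to express it (see the enhancement note above).

```
# Assumed container addresses on the Docker bridge:
#   container 1 (protected backend)  172.17.0.2
#   container 2                      172.17.0.3
#   container 3                      172.17.0.4
listen 80;

srv_group c1 { server 172.17.0.2:80; }
srv_group c2 { server 172.17.0.3:80; }
srv_group c3 { server 172.17.0.4:80; }

vhost c1 { proxy_pass c1; }
vhost c2 { proxy_pass c2; }
vhost c3 { proxy_pass c3; }

# Cache at least one microservice's responses (sketch).
cache 2;
cache_fulfill * *;

http_chain {
    # Access to container 1: only the /foo prefix (GET) and the /bar
    # prefix (PUT with "X-Bar: bar"). If HTTP tables cannot combine
    # method and header conditions here, this is the enhancement to file.
    uri == "/foo*" -> c1;
    uri == "/bar*" -> c1;

    # Balance the remaining traffic between containers 2 and 3 by the
    # Host header value (alternatively, by URI path).
    host == "c2.example.com" -> c2;
    host == "c3.example.com" -> c3;

    -> block;
}
```

Direct inter-container traffic that bypasses the proxy would then be cut with nft on the host, e.g. `nft add rule ip filter forward ip saddr 172.17.0.4 ip daddr 172.17.0.2 drop` (using the same assumed addresses, and assuming a `filter` table with a `forward` hook chain exists), so container 3 can never reach container 1 directly.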
Please create a new Wiki page for microservices, including the protection, communications optimization scenarios, microservices caching, and load balancing with appropriate configuration examples.
In a typical service mesh there is huge communication overhead: microservice <-> Envoy <-> Envoy <-> microservice. All the communications are done over TLS, and there are 6 context switches and full TCP/IP processing. Envoy supports UNIX sockets, but communications are still very expensive. Given the trend to replace monolithic applications with microservices, this performance penalty is unacceptable for many applications, so the microservice architecture could be even more performance-sensitive than the typical edge case for Tempesta FW.
The recent questionnaire revealed that most companies, even those running a service mesh, don't experience issues with network I/O. Most of the respondents balance between microservices and a monolith, with about 5 services involved in processing a request. On the other hand, Kubernetes delegates network I/O to plugins and provides poor HTTP/2 load balancing. Several respondents mentioned network I/O issues in Kubernetes, but all of those cases were about the K8s infrastructure, not about Linux network I/O, data copies, TLS, or data serialization.