
Microservices/container security #1084

Open
krizhanovsky opened this issue Oct 24, 2018 · 1 comment

Comments

@krizhanovsky
Contributor

krizhanovsky commented Oct 24, 2018

The defence-in-depth principle requires a protection layer between containers running separate microservices, even when they are deployed inside the same security perimeter and on the same hardware server in a private cloud.

We need to implement a functional test, and run it in CI, for 3 containers running microservices that communicate via HTTP. Tempesta FW must run in the host system and, in collaboration with nft, enforce the following HTTP communication rules for the containers:

  1. container 3 has no access to any port of container 1;
  2. container 2 has access to container 1 only on port 80, and only for GET requests with URI prefix /foo, or PUT requests with URI prefix /bar and a header X-Bar: bar;
  3. Tempesta FW distributes traffic between containers 2 and 3 based on the Host header value;
  4. Tempesta FW distributes traffic between containers 2 and 3 based on the URI path;
  5. at least one microservice should be cached by Tempesta FW (see Where's my cache).
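Rule 1 is a plain L3/L4 restriction, so it can live in nftables rather than HTTP tables. A minimal sketch, assuming hypothetical container addresses (container 1 at 172.16.0.1, container 3 at 172.16.0.3) on a bridged container network that the host forwards:

```
table inet tempesta_test {
    chain forward {
        type filter hook forward priority 0; policy accept;
        # Rule 1: container 3 must not reach any port of container 1.
        ip saddr 172.16.0.3 ip daddr 172.16.0.1 drop
    }
}
```

The ruleset can be loaded with `nft -f` in the test setup and verified by attempting a connection from container 3 to container 1.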

If HTTP tables don't allow the 2nd rule, then an enhancement issue must be created.

Please create a new Wiki page for microservices covering protection, communication optimization scenarios, microservices caching, and load balancing, with appropriate configuration examples.
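As a starting point, the HTTP-layer rules 3-5 might look like the following Tempesta FW configuration fragment. All names and addresses are hypothetical, and the exact directive semantics (in particular per-vhost caching and the HTTP tables match syntax) should be checked against the current documentation before using this in the test:

```
# Hypothetical backends: container 2 at 172.16.0.2, container 3 at 172.16.0.3.
srv_group svc2 { server 172.16.0.2:8080; }
srv_group svc3 { server 172.16.0.3:8080; }

vhost svc2 { proxy_pass svc2; }
vhost svc3 {
    proxy_pass svc3;
    # Rule 5: cache responses of at least one microservice.
    cache_fulfill * *;
}

http_chain {
    # Rule 3: route by Host header value.
    host == "svc2.test.local" -> svc2;
    # Rule 4: route by URI path.
    uri == "/svc3/*" -> svc3;
    -> block;
}
```

Rule 2 (the conjunction of method, URI prefix, and header for access to container 1) is deliberately omitted here; whether HTTP tables can express it is exactly the open question above.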

@krizhanovsky
Contributor Author

krizhanovsky commented Mar 11, 2019

In a typical service mesh there are huge communication overheads: microservice <-> Envoy <-> Envoy <-> microservice. All communications are done over TLS, with 6 context switches and full TCP/IP processing. Envoy supports UNIX sockets, but communication is still very expensive. Given the trend of replacing monolithic applications with microservices, this performance penalty is unacceptable for many applications, so the microservice architecture could be even more performance-sensitive than the typical edge case for Tempesta FW.

A recent questionnaire revealed that most companies, even those running a service mesh, don't experience issues with network I/O. Most of the respondents balance between microservices and a monolith, with about 5 services involved in processing a request. On the other hand, Kubernetes provides network I/O plugins and poor HTTP/2 load balancing, and several respondents mentioned network I/O issues in Kubernetes. But all of these cases were about the K8S infrastructure, not about Linux network I/O, data copies, TLS, or data serialization.
