add networking slis for services #8142

Open
wants to merge 1 commit into master
3 changes: 3 additions & 0 deletions sig-scalability/slos/slos.md
@@ -122,6 +122,9 @@ __TODO: Cluster churn should be moved to scalability thresholds.__
| __WIP__ | Latency of programming dns instance, measured from when service spec or list of its `Ready` pods change to when it is reflected in that dns instance, measured as 99th percentile over last 5 minutes aggregated across all dns instances | In default Kubernetes installation, 99th percentile per cluster-day<sup>[1](#footnote1)</sup> <= X | [Details](./dns_programming_latency.md) |
| __WIP__ | In-cluster network latency from a single prober pod, measured as latency of per second ping from that pod to "null service", measured as 99th percentile over last 5 minutes. | In default Kubernetes installation with RTT between nodes <= Y, 99th percentile of (99th percentile over all prober pods) per cluster-day<sup>[1](#footnote1)</sup> <= X | [Details](./network_latency.md) |
| __WIP__ | In-cluster dns latency from a single prober pod, measured as latency of per second DNS lookup for "null service" from that pod, measured as 99th percentile over last 5 minutes. | In default Kubernetes installation with RTT between nodes <= Y, 99th percentile of (99th percentile over all prober pods) per cluster-day<sup>[1](#footnote1)</sup> <= X | [Details](./dns_latency.md) |
| __WIP__ | Time to First Packet (TTFP) latency in milliseconds (ms) from the client initiating the TCP connection to a Service (sending the SYN packet) to the client receiving the first packet from the Service backend (typically the SYN-ACK packet in the three-way handshake), measured as 99th percentile over last 5 minutes aggregated across all node instances. | In default Kubernetes installation with RTT between nodes <= Y, 99th percentile of (99th percentile over all nodes) per cluster-day <= X | [Details](./time_to_first_packet.md) |
| __WIP__ | The time elapsed in seconds (s) or minutes (min) from the successful establishment of a TCP connection to a Kubernetes service to the connection being closed, measured as 99th percentile over last 5 minutes aggregated across all node instances. | In default Kubernetes installation with RTT between nodes <= Y, 99th percentile of (99th percentile over all nodes) per cluster-day<sup>[1](#footnote1)</sup> <= X | [Details](./time_to_last_packet.md) |
| __WIP__ | The rate of successful data transfer over a TCP connection to services, measured in bits per second (bps), kilobits per second (kbps), megabits per second (Mbps), or gigabits per second (Gbps), measured as 99th percentile over last 5 minutes aggregated across all connections to services in a node. | In default Kubernetes installation with RTT between nodes <= Y, 99th percentile of (99th percentile over all nodes) per cluster-day<sup>[1](#footnote1)</sup> <= X | [Details](./throughput.md) |

<a name="footnote1">\[1\]</a> For the purpose of visualization it will be a
sliding window. However, for the purpose of SLO itself, it basically means
24 changes: 24 additions & 0 deletions sig-scalability/slos/throughput.md
@@ -0,0 +1,24 @@
## Throughput

### Definition

| Status | SLI |
| --- | --- |
| WIP | The rate of successful data transfer over a TCP connection to services, measured in bits per second (bps), kilobits per second (kbps), megabits per second (Mbps), or gigabits per second (Gbps), measured as 99th percentile over last 5 minutes aggregated across all connections to services in a node. |

### User stories

- As a user of vanilla Kubernetes, I want some visibility to ensure my applications meet my performance requirements when connecting to services
- As a user of vanilla Kubernetes, I want to understand whether my applications meet my performance requirements when connecting to services

### Other notes

The aggregated throughput helps to understand whether the cluster network and applications can handle the required data transfer rates and to identify any bottlenecks limiting throughput.

### How to measure the SLI.

Requires collecting both the duration of the connection and the amount of data transferred during that time. This can be done:

- Client-side: In the application code or using a benchmark application.
- Network devices: Packet inspection and analysis on nodes along the network datapath.
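
As an illustration of the client-side option, a minimal sketch in Go is shown below. The Service address and HTTP request are placeholders (assumptions for illustration, not part of this proposal): the probe drains a response from a Service backend, times the transfer, and derives an approximate rate in bits per second. A node-level packet capture would additionally see retransmissions and per-connection detail that a client-side probe cannot.

```go
package main

import (
	"fmt"
	"io"
	"net"
	"time"
)

// measureThroughput sends a fixed request to a Service backend, drains the
// response, and returns the achieved transfer rate in bits per second.
// This is a client-side approximation of the SLI above.
func measureThroughput(addr string, request []byte) (float64, error) {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return 0, err
	}
	defer conn.Close()

	if _, err := conn.Write(request); err != nil {
		return 0, err
	}

	start := time.Now()
	n, err := io.Copy(io.Discard, conn) // count bytes until the server closes the connection
	if err != nil {
		return 0, err
	}
	elapsed := time.Since(start).Seconds()
	return float64(n*8) / elapsed, nil // bits transferred per second
}

func main() {
	// Placeholder Service address and HTTP/1.0 request (the server closes
	// the connection after responding); adjust for the workload under test.
	req := []byte("GET /large-object HTTP/1.0\r\nHost: example.svc\r\n\r\n")
	bps, err := measureThroughput("10.96.0.20:80", req)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Printf("throughput: %.0f bps (%.2f Mbps)\n", bps, bps/1e6)
}
```

Per-connection samples like this, aggregated into a 5-minute 99th percentile per node, would line up with the SLI definition above.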

32 changes: 32 additions & 0 deletions sig-scalability/slos/time_to_first_packet.md
@@ -0,0 +1,32 @@
## Time To First Packet SLI details

### Definition

| Status | SLI |
| --- | --- |
| WIP | Time to First Packet (TTFP) latency in milliseconds (ms) from the client initiating the TCP connection to a Service (sending the SYN packet) to the client receiving the first packet from the Service backend (typically the SYN-ACK packet in the three-way handshake), measured as 99th percentile over last 5 minutes aggregated across all node instances. |

### User stories

- As a user of vanilla Kubernetes, I want some guarantees on how quickly my pods can connect
to the service backends

### Other notes

TTFP is a more user-centric metric than the full connection establishment time: it reflects the initial perceived delay. A fast TTFP makes an application feel responsive, even if the rest of the connection setup takes longer.

### How to measure the SLI.

Requires precise timestamps for when the client sends the SYN packet and when it receives the first packet from the server. This can be done:

- Client-side: In the application code or using a benchmark application.
- Network devices: Packet inspection and analysis on nodes along the network datapath.
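
A minimal client-side sketch of the first option, in Go: `net.DialTimeout` returns once the TCP three-way handshake completes, so the elapsed time approximates TTFP as seen by the client. The Service address is a placeholder assumption; a packet capture on the node would give more precise SYN/SYN-ACK timestamps.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// measureTTFP approximates Time to First Packet from the client side:
// DialTimeout returns once the TCP handshake completes, i.e. after the
// SYN-ACK from the Service backend has been received.
func measureTTFP(addr string) (time.Duration, error) {
	start := time.Now()
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return 0, err
	}
	defer conn.Close()
	return time.Since(start), nil
}

func main() {
	// Placeholder Service address (cluster IP and port of the Service being probed).
	ttfp, err := measureTTFP("10.96.0.10:443")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Printf("TTFP: %v\n", ttfp)
}
```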

### Caveats

Important considerations:

- Network latency: geographic distance, routing, and network congestion all affect TTFP.
- Server responsiveness: how quickly the server can process the SYN packet and send the SYN-ACK also contributes to TTFP.
- Cross traffic: other traffic on the network can delay the SYN-ACK, even if the server responds quickly.
- Client-side overhead: processing and network conditions on the client side can also introduce minor delays.
32 changes: 32 additions & 0 deletions sig-scalability/slos/time_to_last_packet.md
@@ -0,0 +1,32 @@
## Time To Last Packet SLI details

### Definition

| Status | SLI |
| --- | --- |
| WIP | The time elapsed in seconds (s) or minutes (min) from the successful establishment of a TCP connection to a Kubernetes service to the connection being closed, measured as 99th percentile over last 5 minutes aggregated across all node instances. |

### User stories

- As a user of vanilla Kubernetes, I want some visibility on how long my pods are connected
to the services

### Other notes

The total connection duration can help to understand how clients interact with services, optimize resource usage, and identify potential issues like connection leaks or excessive short-lived connections.

### How to measure the SLI.

Requires precise timestamps for when the TCP connection is successfully established and when it is closed (the last packet exchanged with the server). This can be done:

- Client-side: In the application code or using a benchmark application.
- Network devices: Packet inspection and analysis on nodes along the network datapath.
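
A client-side sketch in Go (with a placeholder Service address and request as illustrative assumptions): it records the time from successful connection establishment to the server closing the connection, where reading until EOF stands in for "last packet received". A packet capture on the node would observe the FIN/RST exchange directly.

```go
package main

import (
	"fmt"
	"io"
	"net"
	"time"
)

// measureConnectionDuration returns the time between successful TCP
// connection establishment and the connection being closed by the server,
// as observed from the client side.
func measureConnectionDuration(addr string, request []byte) (time.Duration, error) {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return 0, err
	}
	established := time.Now() // connection successfully established
	defer conn.Close()

	if _, err := conn.Write(request); err != nil {
		return 0, err
	}
	// Drain the response; io.Copy returns when the server closes the
	// connection (EOF), which stands in for the last packet.
	if _, err := io.Copy(io.Discard, conn); err != nil {
		return 0, err
	}
	return time.Since(established), nil
}

func main() {
	// Placeholder Service address and HTTP/1.0 request; replace with the
	// Service under observation.
	req := []byte("GET / HTTP/1.0\r\nHost: example.svc\r\n\r\n")
	d, err := measureConnectionDuration("10.96.0.20:80", req)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Printf("connection duration: %v\n", d)
}
```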

### Caveats

Important considerations:

- Network latency: geographic distance, routing, and network congestion.
- Connection duration is driven largely by application behavior (for example keep-alive and idle-timeout settings), not only by the network.
- Other traffic on the network can delay the final packets and the connection teardown, even if the server has finished responding.
- Client-side processing and network conditions on the client side can also influence when the connection is closed.