dedicated event etcd draft PoC for ShiftWeek#1505
tjungblu wants to merge 1 commit into openshift:main
Conversation
Skipping CI for Draft Pull Request.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: tjungblu. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Force-pushed from 5920b8e to 689045c.
Some quick benchmark results, using:

This is running against the normal etcd:

This is running against localhost in-memory:

Now this is pretty crappy; let's try some tuning:

seems to have no effect

also no real effect, leading me to believe this benchmark is not really disk-bound in the first place. Checking the allocation route with:

This performs only slightly better. After CPU profiling, some interesting findings:

Unfortunately the gRPC options for the buffer sizes (R+W are at 32K each) require recompilation, so I won't be able to max this out any further today.
--- OR via SVC, as done below ---
- "/events#https://events-etcd.openshift-etcd.svc:20379"
this requires dnsPolicy=ClusterFirstWithHostNet on the kube-apiserver static pods
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
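A rough sketch of what that could look like on the static pod spec (names and image are illustrative; the point is only the DNS policy). With `hostNetwork`, the default policy resolves through the host's resolver, so a cluster service name like `events-etcd.openshift-etcd.svc` would not resolve without `ClusterFirstWithHostNet`:

```
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver        # illustrative
  namespace: openshift-kube-apiserver
spec:
  hostNetwork: true
  # Routes DNS lookups through cluster DNS despite host networking,
  # so *.svc names resolve.
  dnsPolicy: ClusterFirstWithHostNet
  containers:
  - name: kube-apiserver
    image: ...                # elided
```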
This PR contains a dedicated in-memory etcd deployment that will run on one control plane host and configures the kube-apiserver to send events to it.

Signed-off-by: Thomas Jungblut <tjungblu@redhat.com>
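Routing only events to a separate etcd is what the kube-apiserver's `--etcd-servers-overrides` flag does (per-resource overrides in `group/resource#servers` form). A sketch of the relevant arguments — the events endpoint matches the diff above, the main endpoint is illustrative:

```
# everything goes to the main etcd cluster...
--etcd-servers=https://etcd.openshift-etcd.svc:2379
# ...except core-group events, which go to the dedicated in-memory etcd
--etcd-servers-overrides=/events#https://events-etcd.openshift-etcd.svc:20379
```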
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting. If this issue is safe to close now, please do so.

/lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting. If this issue is safe to close now, please do so.

/lifecycle rotten
/hold
just here for CI runs and cluster bot builds