
High load due to ksoftirqd, growing iptables rules #3117

Closed
wursterje opened this issue Mar 25, 2021 · 83 comments
Labels
kind/bug Something isn't working kind/upstream-issue This issue appears to be caused by an upstream bug
@wursterje

wursterje commented Mar 25, 2021

Environmental Info:
K3s Version: k3s version v1.20.4+k3s1 (838a906)
go version go1.15.8

Node(s) CPU architecture, OS, and Version: Linux 4.19.0-14-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64 GNU/Linux

Cluster Configuration: 1 master, 2 workers

Describe the bug: After some time we get high load on the machines due to high soft IRQs:

Screenshot from 2021-03-25 09 11 41

Output of perf report:

Screenshot from 2021-03-25 09 14 33

Something goes wrong with the iptables rules:

iptables -L produces 7.0 MB of rules (increasing more and more over time):

Screenshot from 2021-03-25 09 18 01

Steps To Reproduce:

  • Installed K3s: We are using the embedded etcd.
@brandond
Member

Can you attach an actual listing of the iptables rules? It's hard to troubleshoot via a screenshot. Since it's 70+mb, compressing the file before attaching it may be useful.

Are you running anything else on this node that manages iptables rules? kube-proxy and flannel should be the only thing touching the rules; I suspect something is interfering with their ability to sync rules so they keep creating new ones.
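Something like this (assuming iptables-save is available on the node) will produce a complete dump that can be compressed and attached:

sudo iptables-save > iptables-rules.txt
gzip iptables-rules.txt

iptables-save output includes the exact rule specs, which is easier to work with than iptables -L output.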

@wursterje
Author

@brandond

Can you attach an actual listing of the iptables rules? It's hard to troubleshoot via a screenshot. Since it's 70+mb, compressing the file before attaching it may be useful.

Here is the file: iptables.log.gz

Are you running anything else on this node that manages iptables rules? kube-proxy and flannel should be the only thing touching the rules; I suspect something is interfering with their ability to sync rules so they keep creating new ones.

fail2ban is installed.

@brandond
Member

Hmm, this appears to be 7 MB, not 70 MB, but still - there are a lot of duplicates in the KUBE-ROUTER-INPUT chain. This comes from the network policy controller, but I can't see anything on the code side that would cause this to occur.

Can you try disabling fail2ban (ensuring that it does not start again on startup) and restart the node? If the duplicate entries don't come back without fail2ban running then I am guessing that it is doing something to the ruleset that's causing duplicate rules to be created.
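In the meantime, a rough way to see which rules are duplicating (and how many copies there are) is something like:

sudo iptables -S KUBE-ROUTER-INPUT | sort | uniq -cd | sort -rn | head

which prints each repeated rule spec in that chain together with its count.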

@wursterje
Author

wursterje commented Mar 26, 2021

@brandond Hmm, in my initial comment I mentioned 7.0 MB. Sorry for the misunderstanding...

I've disabled fail2ban, but the duplicate rules are still increasing over time. We have this issue on all machines running k3s v1.20.4+k3s1 but not on v1.19.8+k3s1. All machines are configured identically.

Here is a "iptables -L | wc -l" stat:

#1 Cluster
563 v1.20.4+k3s1
561 v1.20.4+k3s1
620 v1.20.4+k3s1
18 Pods

#2 Cluster
74 v1.19.8+k3s1
85 v1.19.8+k3s1
4526 v1.20.4+k3s1
59 Pods

#3 Cluster
1235 v1.20.4+k3s1
1235 v1.20.4+k3s1
1252 v1.20.4+k3s1
87 Pods

#4 Cluster
2617 v1.20.4+k3s1
67 v1.19.8+k3s1
2613 v1.20.4+k3s1
58 Pods

@brandond
Member

The code ensures that the cluster IP and node port rules are the first three in that chain; I'm not really sure how that could go awry unless something else is manipulating the rules. What Debian release are you running on these nodes? What does iptables --version show?

whitelistServiceVips := []string{"-m", "comment", "--comment", "allow traffic to cluster IP", "-d", npc.serviceClusterIPRange.String(), "-j", "RETURN"}
uuid, err := addUUIDForRuleSpec(kubeInputChainName, &whitelistServiceVips)
if err != nil {
	glog.Fatalf("Failed to get uuid for rule: %s", err.Error())
}
ensureRuleAtPosition(kubeInputChainName, whitelistServiceVips, uuid, 1)

whitelistTCPNodeports := []string{"-p", "tcp", "-m", "comment", "--comment", "allow LOCAL TCP traffic to node ports", "-m", "addrtype", "--dst-type", "LOCAL",
	"-m", "multiport", "--dports", npc.serviceNodePortRange, "-j", "RETURN"}
uuid, err = addUUIDForRuleSpec(kubeInputChainName, &whitelistTCPNodeports)
if err != nil {
	glog.Fatalf("Failed to get uuid for rule: %s", err.Error())
}
ensureRuleAtPosition(kubeInputChainName, whitelistTCPNodeports, uuid, 2)

whitelistUDPNodeports := []string{"-p", "udp", "-m", "comment", "--comment", "allow LOCAL UDP traffic to node ports", "-m", "addrtype", "--dst-type", "LOCAL",
	"-m", "multiport", "--dports", npc.serviceNodePortRange, "-j", "RETURN"}
uuid, err = addUUIDForRuleSpec(kubeInputChainName, &whitelistUDPNodeports)
if err != nil {
	glog.Fatalf("Failed to get uuid for rule: %s", err.Error())
}
ensureRuleAtPosition(kubeInputChainName, whitelistUDPNodeports, uuid, 3)
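ensureRuleAtPosition itself isn't shown above; a rough sketch of its logic (simplified, not the verbatim kube-router source) is below. The important part is that it relies on Exists, so a false negative there means a fresh copy of the rule is inserted on every network policy sync:

func ensureRuleAtPosition(chain string, ruleSpec []string, uuid string, position int) {
	// The uuid appears to already be embedded in the rule comment by
	// addUUIDForRuleSpec (it takes the spec by pointer), so this sketch
	// only needs the rule spec itself.
	exists, err := iptablesCmdHandler.Exists("filter", chain, ruleSpec...)
	if err != nil {
		glog.Fatalf("Failed to check rule existence: %s", err.Error())
	}
	if !exists {
		// Not found (or a false negative from the nft backend, as seen with
		// Debian's iptables v1.8.2): insert another copy at the requested
		// position. A false negative here duplicates the rule on every sync.
		if err := iptablesCmdHandler.Insert("filter", chain, position, ruleSpec...); err != nil {
			glog.Fatalf("Failed to insert rule: %s", err.Error())
		}
		return
	}
	// If the rule already exists, the real code also has to verify that it
	// sits at the expected position and move it there if necessary (omitted here).
}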

@wursterje
Author

wursterje commented Mar 27, 2021

Debian Buster
iptables v1.8.2 (nf_tables)

I've used the code snippet above for a little test program. The problem is that

iptablesCmdHandler.Exists("filter", chain, ruleSpec...)

always returns false.
Maybe related to this issue: coreos/go-iptables#79

The workaround for us is to periodically flush the iptables rules.

ensureRuleAtPosition_test.go.gz

@clrxbl

clrxbl commented Mar 27, 2021

I've been having the same issue where tons of duplicate iptables rules are being created. I've had servers with up to 40,000 iptables rules. Disabling the network policy controller (since I use Cilium as CNI, it isn't necessary for me) fixes it.

All of my nodes are running v1.20.4+k3s1

@brandond
Member

@clrxbl what os distribution and iptables version?

@clrxbl

clrxbl commented Mar 27, 2021

@clrxbl what os distribution and iptables version?

This node has 13549 iptables rules, the majority of them in the KUBE-ROUTER-INPUT chain.

iptables -V
iptables v1.8.2 (nf_tables)

uname -r
4.19.0-13-amd64

cat /etc/debian_version
10.7

All of my nodes run the same software versions.

@clrxbl

clrxbl commented Mar 27, 2021

Would also like to say that I'm getting the exact same duplicate iptables rules created as well.
It's all just the following rules repeated over and over again:

RETURN     udp  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL multiport dports 30000:32767 /* allow LOCAL UDP traffic to node ports - 76UCBPIZNGJNWNUZ */
RETURN     tcp  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL multiport dports 30000:32767 /* allow LOCAL TCP traffic to node ports - LR7XO7NXDBGQJD2M */

@brandond
Member

brandond commented Mar 27, 2021

Interesting, Debian nftables seems to be the commonality then. I think that go-iptables issue is probably what we're running into.

Disabling the network policy controller should be an acceptable workaround, assuming you don't need policy enforcement.
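For reference, the controller can be turned off with the --disable-network-policy server flag, e.g. something along the lines of:

k3s server --disable-network-policy

(or the equivalent entry in the systemd unit or config file). Note that this disables NetworkPolicy enforcement entirely, so it's only appropriate if you don't use network policies or your CNI enforces them itself, as with Cilium.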

@brandond brandond added this to the v1.20.6+k3s1 milestone Mar 30, 2021
@brandond brandond added kind/bug Something isn't working kind/upstream-issue This issue appears to be caused by an upstream bug labels Mar 30, 2021
@brandond brandond self-assigned this Mar 30, 2021
@brandond
Member

brandond commented Mar 30, 2021

I have been able to duplicate this on Debian Buster. There appears to be a bug in Debian's nftables package that prevents it from properly checking iptables rules; it seems to reorder the modules so that they cannot be checked for in the order originally input:

root@debian10:~# /usr/sbin/iptables -t filter -I KUBE-ROUTER-INPUT 2 -p tcp -m addrtype --dst-type LOCAL -m comment --comment "allow LOCAL TCP traffic to node ports" -m multiport --dports 30000:32767 -j RETURN
root@debian10:~# /usr/sbin/iptables -t filter -C KUBE-ROUTER-INPUT   -p tcp -m addrtype --dst-type LOCAL -m comment --comment "allow LOCAL TCP traffic to node ports" -m multiport --dports 30000:32767 -j RETURN
iptables: Bad rule (does a matching rule exist in that chain?).
root@debian10:~# /usr/sbin/iptables -t filter -C KUBE-ROUTER-INPUT   -p tcp -m addrtype --dst-type LOCAL -m multiport --dports 30000:32767 -m comment --comment "allow LOCAL TCP traffic to node ports" -j RETURN

This works properly after running update-alternatives --set iptables /usr/sbin/iptables-legacy:

root@debian10:~# /usr/sbin/iptables -t filter -I KUBE-ROUTER-INPUT 2 -p tcp -m addrtype --dst-type LOCAL -m comment --comment "allow LOCAL TCP traffic to node ports" -m multiport --dports 30000:32767 -j RETURN
root@debian10:~# /usr/sbin/iptables -t filter -C KUBE-ROUTER-INPUT   -p tcp -m addrtype --dst-type LOCAL -m comment --comment "allow LOCAL TCP traffic to node ports" -m multiport --dports 30000:32767 -j RETURN
root@debian10:~# /usr/sbin/iptables -t filter -C KUBE-ROUTER-INPUT   -p tcp -m addrtype --dst-type LOCAL -m multiport --dports 30000:32767 -m comment --comment "allow LOCAL TCP traffic to node ports" -j RETURN
iptables: Bad rule (does a matching rule exist in that chain?).

Since this appears to be a bug in the kernel iptables-nft code, I don't think either K3s or go-iptables can fix this - iptables on Debian should be put in legacy mode until this is resolved upstream.
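For anyone following along, switching a Debian Buster node to the legacy backend looks roughly like this (ip6tables has its own alternative and should be switched as well), followed by a K3s restart:

update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
systemctl restart k3s

Rules already programmed through the nft backend stay in the kernel until they are flushed or the node is rebooted.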

@clrxbl

clrxbl commented Mar 30, 2021

In that case I do think there should be some sort of warning placed during K3s installation when iptables is pointing to the Debian nftables backend until it's resolved.

@brandond
Member

brandond commented Mar 30, 2021

Just validated that this works properly on Ubuntu 20.10:

root@seago:~# /usr/sbin/iptables -t filter -N KUBE-ROUTER-INPUT
root@seago:~# /usr/sbin/iptables -t filter -A KUBE-ROUTER-INPUT -p tcp -m addrtype --dst-type LOCAL -m comment --comment "allow LOCAL TCP traffic to node ports" -m multiport --dports 30000:32767 -j RETURN
root@seago:~# /usr/sbin/iptables -t filter -C KUBE-ROUTER-INPUT -p tcp -m addrtype --dst-type LOCAL -m comment --comment "allow LOCAL TCP traffic to node ports" -m multiport --dports 30000:32767 -j RETURN
RETURN  tcp opt -- in * out *  0.0.0.0/0  -> 0.0.0.0/0   ADDRTYPE match dst-type LOCAL /* allow LOCAL TCP traffic to node ports */ multiport dports 30000:32767
root@seago:~# /usr/sbin/iptables -t filter -C KUBE-ROUTER-INPUT -p tcp -m addrtype --dst-type LOCAL -m multiport --dports 30000:32767 -m comment --comment "allow LOCAL TCP traffic to node ports" -j RETURN
iptables: Bad rule (does a matching rule exist in that chain?).
root@seago:~# iptables -V
iptables v1.8.5 (nf_tables)
root@seago:~# uname -a
Linux seago.lan.khaus 5.11.8-051108-generic #202103200636 SMP Sat Mar 20 11:17:32 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
root@seago:~# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.10
DISTRIB_CODENAME=groovy
DISTRIB_DESCRIPTION="Ubuntu 20.10"

@brandond
Member

brandond commented Mar 31, 2021

@clrxbl actually it looks like it's not even a kernel thing - it's just a bug in the version of the nftables package that Debian is shipping. If you apt remove iptables nftables -y and reboot the node, K3s will use its packaged version of the iptables/nftables tools, which work properly:

root@debian10:~# export PATH="/var/lib/rancher/k3s/data/current/bin/:/var/lib/rancher/k3s/data/current/bin/aux:$PATH"
root@debian10:~# which iptables
/var/lib/rancher/k3s/data/current/bin/aux/iptables
root@debian10:~# iptables -V
iptables v1.8.5 (nf_tables)
root@debian10:~# iptables -vnL KUBE-ROUTER-INPUT
# Warning: iptables-legacy tables present, use iptables-legacy to see them
Chain KUBE-ROUTER-INPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            10.43.0.0/16         /* allow traffic to cluster IP - M66LPN4N3KB5HTJR */
    0     0 RETURN     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* allow LOCAL TCP traffic to node ports - LR7XO7NXDBGQJD2M */ ADDRTYPE match dst-type LOCAL multiport dports 30000:32767
    0     0 RETURN     udp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* allow LOCAL UDP traffic to node ports - 76UCBPIZNGJNWNUZ */ ADDRTYPE match dst-type LOCAL multiport dports 30000:32767
root@debian10:~# uname -a
Linux debian10 4.19.0-16-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64 GNU/Linux
root@debian10:~#

@wursterje
Author

Putting iptables in legacy mode does not resolve the underlying issue with nftables for us.

Rules are apparently not duplicated...

iptables -L | wc -l
59

... but the output of nft tells us something different:

nft list table ip filter | wc -l
5858
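(Presumably the legacy and nft backends keep completely separate rule sets, so switching the userspace tool doesn't remove what was already programmed via nf_tables. If those 5858 lines are just the stale duplicated rules left behind by the nft backend, they can be cleared with nft directly, roughly - careful, this drops every rule in that nf_tables table:

nft flush table ip filter

followed by a k3s restart so the active backend repopulates its rules.)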

@dyipon

dyipon commented Apr 14, 2021

Had the same issue under Debian 10. Switched to legacy iptables, but it did not help.
I had to reinstall the k3s cluster with Calico, and now it works fine.

@brandond
Member

You might try uninstalling the Debian iptables/nftables packages, rather than just switching to legacy mode.

@dodwyer

dodwyer commented Apr 27, 2021

@brandond thanks for investigating the issue. Do you have a link to more information on the nftables bug? Ideally we can push for it to be patched so this workaround is not needed.

@brandond
Member

brandond commented Apr 27, 2021

I haven't gotten as far as tracking it down to the specific commit in the upstream packages that fixed it; I just know that iptables v1.8.2 (nf_tables) from Debian has the incorrect behavior, while iptables v1.8.5 (nf_tables), which we ship and which is currently available on Ubuntu >= 20.04, behaves correctly.

@ffly90

ffly90 commented Aug 30, 2022

@kannanvr thank you for your tip, but we are not using kube-router. However, we think we have found the cause of our ongoing problems. After some additional digging we realized that the duplicate rules are not the result of a call to the iptables command itself.

We tried to find out what else is manipulating the iptables rules. Besides the iptables command itself, there are several other iptables-related commands, and those were still pointing to the 1.8.4 version of xtables-nft-multi. We repointed their alternatives to the bundled binaries as well; the output of update-alternatives --list now looks like this:

[...]
iptables                        auto    /var/lib/rancher/k3s/data/current/bin/aux/iptables-nft
ip6tables                       auto    /var/lib/rancher/k3s/data/current/bin/aux/ip6tables-nft
xtables-nft-multi               auto    /var/lib/rancher/k3s/data/current/bin/aux/xtables-nft-multi
iptables-apply                  auto    /var/lib/rancher/k3s/data/current/bin/aux/iptables-apply
iptables-restore                auto    /var/lib/rancher/k3s/data/current/bin/aux/iptables-restore
iptables-restore-translate      auto    /var/lib/rancher/k3s/data/current/bin/aux/iptables-restore-translate
iptables-translate              auto    /var/lib/rancher/k3s/data/current/bin/aux/iptables-translate
iptables-save                   auto    /var/lib/rancher/k3s/data/current/bin/aux/iptables-save

After that change, we did not get any new duplicates. This implies that one of the other commands is causing the bug on EL 8.
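For others who want to try the same thing, the entries above can be created with update-alternatives --install, roughly like this (the link paths and priority here are assumptions for an EL 8 system; adjust them to wherever your distro's symlinks live, and repeat for the remaining commands in the list):

update-alternatives --install /usr/sbin/iptables iptables /var/lib/rancher/k3s/data/current/bin/aux/iptables-nft 100
update-alternatives --install /usr/sbin/ip6tables ip6tables /var/lib/rancher/k3s/data/current/bin/aux/ip6tables-nft 100
update-alternatives --install /usr/sbin/iptables-save iptables-save /var/lib/rancher/k3s/data/current/bin/aux/iptables-save 100
update-alternatives --install /usr/sbin/iptables-restore iptables-restore /var/lib/rancher/k3s/data/current/bin/aux/iptables-restore 100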

@knweiss

knweiss commented Aug 30, 2022

Side note: According to "Additional iptables-nft 1.8.0-1.8.3 compatibility problems", iptables versions 1.8.0 to 1.8.3 have known problems and 1.8.4 should be fine:

  • In some cases it was possible to add a rule with iptables -A but then have iptables -C claim that the rule did not exist. (This led to kubelet repeatedly creating more and more copies of the same rule, thinking it had not been created yet.)

iptables 1.8.3 fixed these compatibility problems, but had a slightly different problem, which is that iptables-nft would get stuck in an infinite loop if it couldn't load the kernel nf_tables module.

iptables 1.8.4 and later have no known problems that affect Kubernetes.

However, our tests on Rocky Linux 8.6 indicate that 1.8.4 still has (another) issue in one of its commands.

@cwayne18 cwayne18 modified the milestones: v1.21 - Backlog, Backlog Aug 31, 2022
@zhaileilei123
Contributor

I had the same problem.
Environmental Information:
iptables --version
iptables v1.8.2 (nf_tables)
uname -a
Linux debian 4.19.0-8-cloud-amd64 #1 SMP Debian 4.19.98-1 (2020-01-26) x86_64 GNU/Linux

I tried to switch the version of iptables, because I have two local versions of iptables:

debian:~# update-alternatives --list iptables
/usr/sbin/iptables-legacy
/usr/sbin/iptables-nft
debian:~# update-alternatives --config iptables
There are 2 choices for the alternative iptables (providing /usr/sbin/iptables).

  Selection    Path                         Priority   Status
  0            /usr/sbin/iptables-nft       20         auto mode
* 1            /usr/sbin/iptables-legacy    10         manual mode
  2            /usr/sbin/iptables-nft       20         manual mode

systemctl restart k3s
iptables -L -n |grep RETURN

@mogoman

mogoman commented Sep 26, 2022

@firefly-serenity we've been struggling with this problem for a while now; I'll try setting more alternatives to see if this helps, thanks. Just one thing I did want to mention, in case it sheds some light: we are running about 8 separate k3s clusters, all on CentOS 8 Stream. All clusters except one are virtual machines, the last cluster having physical worker nodes (with a virtual master).

It is the physical cluster that has this issue (k3s 1.24.4, but also earlier versions): every once in a while high load and iptables creating duplicate rules, forcing us to reboot a node. We have never seen this problem on the VM-based clusters. All nodes are installed/patched the same way regardless of physical or virtual.

@knweiss

knweiss commented Sep 27, 2022

@mogoman FWIW: The k3s system where @firefly-serenity and I see/saw this issue is running on six virtual nodes (Rocky Linux 8.6).

(Since the last (extended) update-alternatives change it is working fine - so far.)

@RudiMT

RudiMT commented Sep 29, 2022

Regular outages due to unresponsive servers seem pretty severe to me. The docs contain instructions on how to remove iptables or switch into legacy mode for Debian, but not for RHEL 8.

Since iptables is a core component of RHEL 8, it is in many cases not possible to just remove the OS package from the installation. I have not yet been able to find out whether there is an option to use iptables-legacy on RHEL 8.

What could be done in my opinion:

  1. An official suggestion for a workaround that does not involve removing the iptables provided by the distro, and an update to the documentation accordingly.
  2. Tracking down the bug in iptables 1.8.4 so we can open a bug report with Red Hat. It is not unlikely that a patch would be provided shortly after that.

Yes, official advice on whether and how the bundled and distro iptables can coexist should be part of the first suggestion. It would also help if the iptables (distro or bundled) used during the first k3s run were given precedence in the PATH for later runs.

Prepending the bundled iptables to the PATH in k3s.service has been working for us on RHEL 8 for some time now. The number of other bundled binaries under that PATH that could interfere with what the distro provides is really quite limited.
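For completeness, one way to do that is a systemd drop-in (a sketch; the data/current symlink only exists after k3s has run at least once, and the trailing PATH entries should match your distro defaults):

# /etc/systemd/system/k3s.service.d/10-bundled-iptables.conf
[Service]
Environment="PATH=/var/lib/rancher/k3s/data/current/bin/aux:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

followed by systemctl daemon-reload and systemctl restart k3s.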

@mogoman

mogoman commented Oct 17, 2022

@knweiss thanks. In the meantime I've seen the problem on virtual nodes too. The swapped-out iptables solution is still holding, and I've now rolled it out to all clusters (all running CentOS Stream 8).

@caroline-suse-rancher
Contributor

@dereknola does your new flag resolve this issue completely?

@brandond
Member

Yes, the new flag should allow users who are stuck with buggy versions of iptables to work around the issue. We still need some docs for this though.
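For anyone finding this later: the flag is --prefer-bundled-bin on the k3s server/agent command line, or equivalently in the config file, e.g.:

# /etc/rancher/k3s/config.yaml
prefer-bundled-bin: true

With it set, the bundled userspace iptables/nftables binaries are placed ahead of the OS ones in the PATH of the k3s process, as shown in the validation below.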

@ShylajaDevadiga
Contributor

Validated on k3s version v1.26.0-rc2+k3s1
Passing prefer-bundled-bin, the PATH used by the k3s process is updated to use the k3s bundle first, before the OS binaries:

sudo cat /proc/36818/environ | xargs -0 echo | grep PATH 

PATH=/var/lib/rancher/k3s/data/e936377912d1958c29d8e5cf1cbd92a26b9de2520a152d852aec8c3685fdfbd2/bin:/var/lib/rancher/k3s/data/e936377912d1958c29d8e5cf1cbd92a26b9de2520a152d852aec8c3685fdfbd2/bin/aux:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin NOTIFY_SOCKET=/run/systemd/notify INVOCATION_ID=bba6f6aed8f84fa0949734676a4e26d4 JOURNAL_STREAM=9:527273 RES_OPTIONS=  K3S_DATA_DIR=/var/lib/rancher/k3s/data/e936377912d1958c29d8e5cf1cbd92a26b9de2520a152d852aec8c3685fdfbd2              

Without prefer-bundled-bin, the PATH has the OS paths first, followed by the k3s bundle:

sudo cat /proc/4278/environ | xargs -0 echo | grep PATH

PATH=/var/lib/rancher/k3s/data/e936377912d1958c29d8e5cf1cbd92a26b9de2520a152d852aec8c3685fdfbd2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/var/lib/rancher/k3s/data/e936377912d1958c29d8e5cf1cbd92a26b9de2520a152d852aec8c3685fdfbd2/bin/aux NOTIFY_SOCKET=/run/systemd/notify INVOCATION_ID=d894b3f9e9e54955a6cf040c139a3b7d JOURNAL_STREAM=9:48105 RES_OPTIONS=  K3S_DATA_DIR=/var/lib/rancher/k3s/data/e936377912d1958c29d8e5cf1cbd92a26b9de2520a152d852aec8c3685fdfbd2

@ShylajaDevadiga
Contributor

On k3s version v1.25.5-rc3+k3s1

sudo cat /proc/43290/environ | xargs -0 echo | grep PATH
                 PATH=/var/lib/rancher/k3s/data/60cc79886e7804a321fd2134fb53cb9c83ad389ac68d28ff7e4b388902505e02/bin:/var/lib/rancher/k3s/data/60cc79886e7804a321fd2134fb53cb9c83ad389ac68d28ff7e4b388902505e02/bin/aux:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin NOTIFY_SOCKET=/run/systemd/notify INVOCATION_ID=585661567bf14a67ad3a95df54f98c36 JOURNAL_STREAM=9:583097 RES_OPTIONS=  K3S_DATA_DIR=/var/lib/rancher/k3s/data/60cc79886e7804a321fd2134fb53cb9c83ad389ac68d28ff7e4b388902505e02

@ShylajaDevadiga
Contributor

Replicated the issue on Debian 10 and validated that with the new flag no new duplicates were added.

admin@ip-172-31-19-136:~$ sudo iptables -L |wc -l
# Warning: iptables-legacy tables present, use iptables-legacy to see them
209
admin@ip-172-31-19-136:~$ sudo iptables -L |wc -l
# Warning: iptables-legacy tables present, use iptables-legacy to see them
223
admin@ip-172-31-19-136:~$ sudo iptables -L |wc -l
# Warning: iptables-legacy tables present, use iptables-legacy to see them
248
admin@ip-172-31-19-136:~$ sudo iptables -L |sort |grep 'KUBE-POD-FW-V6LPER2Y23JR2PTH'
# Warning: iptables-legacy tables present, use iptables-legacy to see them
Chain KUBE-POD-FW-V6LPER2Y23JR2PTH (7 references)
KUBE-POD-FW-V6LPER2Y23JR2PTH  all  --  anywhere             ip-10-42-0-6.us-east-2.compute.internal  /* rule to jump traffic destined to POD name:local-path-provisioner-79f67d76f8-kz6vd namespace: kube-system to chain KUBE-POD-FW-V6LPER2Y23JR2PTH */
KUBE-POD-FW-V6LPER2Y23JR2PTH  all  --  anywhere             ip-10-42-0-6.us-east-2.compute.internal  /* rule to jump traffic destined to POD name:local-path-provisioner-79f67d76f8-kz6vd namespace: kube-system to chain KUBE-POD-FW-V6LPER2Y23JR2PTH */
KUBE-POD-FW-V6LPER2Y23JR2PTH  all  --  anywhere             ip-10-42-0-6.us-east-2.compute.internal  PHYSDEV match --physdev-is-bridged /* rule to jump traffic destined to POD name:local-path-provisioner-79f67d76f8-kz6vd namespace: kube-system to chain KUBE-POD-FW-V6LPER2Y23JR2PTH */
KUBE-POD-FW-V6LPER2Y23JR2PTH  all  --  ip-10-42-0-6.us-east-2.compute.internal  anywhere             /* rule to jump traffic from POD name:local-path-provisioner-79f67d76f8-kz6vd namespace: kube-system to chain KUBE-POD-FW-V6LPER2Y23JR2PTH */
KUBE-POD-FW-V6LPER2Y23JR2PTH  all  --  ip-10-42-0-6.us-east-2.compute.internal  anywhere             /* rule to jump traffic from POD name:local-path-provisioner-79f67d76f8-kz6vd namespace: kube-system to chain KUBE-POD-FW-V6LPER2Y23JR2PTH */
KUBE-POD-FW-V6LPER2Y23JR2PTH  all  --  ip-10-42-0-6.us-east-2.compute.internal  anywhere             /* rule to jump traffic from POD name:local-path-provisioner-79f67d76f8-kz6vd namespace: kube-system to chain KUBE-POD-FW-V6LPER2Y23JR2PTH */
KUBE-POD-FW-V6LPER2Y23JR2PTH  all  --  ip-10-42-0-6.us-east-2.compute.internal  anywhere             PHYSDEV match --physdev-is-bridged /* rule to jump traffic from POD name:local-path-provisioner-79f67d76f8-kz6vd namespace: kube-system to chain KUBE-POD-FW-V6LPER2Y23JR2PTH */
admin@ip-172-31-19-136:~$ sudo iptables -L |sort |uniq -d |wc -l
# Warning: iptables-legacy tables present, use iptables-legacy to see them
23
admin@ip-172-31-19-136:~$ sudo iptables -V
iptables v1.8.2 (nf_tables)
admin@ip-172-31-19-136:~$

With the flag, using the k3s bundled iptables:

$ sudo /var/lib/rancher/k3s/data/60cc79886e7804a321fd2134fb53cb9c83ad389ac68d28ff7e4b388902505e02/bin/aux/iptables -L |wc -l
171

Repository owner moved this from Documentation to Closed in K3s Backlog Dec 15, 2022
@k3s-io k3s-io locked as resolved and limited conversation to collaborators Dec 15, 2022