Description
We've observed a problem with clustered etcd: after recreating one of the instances, it enters a crash loop and panics with:
```
2017-11-29 05:47:07.905742 I | etcdserver: advertise client URLs = http://127.0.0.1:2379
2017-11-29 05:47:08.076040 I | etcdserver: restarting member 9af8ed310dba8214 in cluster 488ce3f49fab5e77 at commit index 20707
2017-11-29 05:47:08.077815 C | raft: 9af8ed310dba8214 state.commit 20707 is out of range [10001, 11535]
panic: 9af8ed310dba8214 state.commit 20707 is out of range [10001, 11535]
```
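For context, the panic seems to come from a consistency check in etcd's raft code: the commit index recorded in the persisted hard state has to fall inside the range covered by the raft log that is rebuilt from the snapshot and WAL at startup. Below is a simplified, standalone sketch of that kind of check (not the actual etcd source), using the numbers from our log:

```go
package main

import "fmt"

// checkCommit mirrors the kind of sanity check that produces the panic above:
// hardStateCommit is the commit index recorded in the persisted hard state,
// while committed and lastIndex describe the raft log actually rebuilt from
// the snapshot plus WAL entries on disk.
func checkCommit(hardStateCommit, committed, lastIndex uint64) error {
	if hardStateCommit < committed || hardStateCommit > lastIndex {
		// etcd panics at this point rather than returning an error; the
		// message format matches the "state.commit ... is out of range"
		// line in the log.
		return fmt.Errorf("state.commit %d is out of range [%d, %d]",
			hardStateCommit, committed, lastIndex)
	}
	return nil
}

func main() {
	// Numbers taken from the log above.
	if err := checkCommit(20707, 10001, 11535); err != nil {
		fmt.Println(err)
	}
}
```

In our case the persisted hard state claims commit 20707 while the rebuilt log only covers [10001, 11535], which is why the member refuses to start.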
What is actually happening is that we recreate the VM on which etcd runs (without calling member remove/add), preserving all of the addresses, the etcd data, etc. After recreation, etcd starts up as the old member, much as if it had simply been restarted, only with a somewhat longer downtime.
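For reference, the member remove/add flow that we are currently skipping would look roughly like the sketch below, using the Go clientv3 API; the import path, the client endpoint, and the peer URL http://10.0.0.2:2380 are placeholders for our setup, not values taken from the cluster:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3" // 3.0/3.1-era import path (assumption)
)

func main() {
	// Connect to any healthy member of the remaining cluster.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// 1. Remove the member that is about to be recreated
	//    (ID taken from the panic log above).
	const oldID uint64 = 0x9af8ed310dba8214
	if _, err := cli.MemberRemove(ctx, oldID); err != nil {
		log.Fatal(err)
	}

	// 2. Re-add it under its peer URL (placeholder value) before the new VM
	//    starts etcd with an empty data dir and --initial-cluster-state=existing.
	if _, err := cli.MemberAdd(ctx, []string{"http://10.0.0.2:2380"}); err != nil {
		log.Fatal(err)
	}
}
```

As far as we understand, the recreated instance would then need to start with an empty data directory and --initial-cluster-state=existing, instead of reusing the preserved data as we do now.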
We are running version 3.0.17.
I found #5664, which seems to be a similar (possibly the same?) problem. Has this been fixed? If so, in which release?