
Commit 9309992

Known issues change (#147)
* added known issues for sharding
* added the known issue change
1 parent 2479f3c commit 9309992

File tree

1 file changed (+8, -0 lines)


docs/sharding/README.md (+8)
@@ -33,6 +33,7 @@ Following sections provide the details for deploying Oracle Globally Distributed
* [Provisioning Oracle Globally Distributed Database System-Managed Sharding with Raft replication enabled in a Cloud-Based Kubernetes Cluster](#provisioning-oracle-globally-distributed-database-topology-with-system-managed-sharding-and-raft-replication-enabled-in-a-cloud-based-kubernetes-cluster)
* [Connecting to Shard Databases](#connecting-to-shard-databases)
* [Debugging and Troubleshooting](#debugging-and-troubleshooting)
+ * [Known Issues](#known-issues)

**Note** Before proceeding to the next section, you must complete the instructions given in each section, based on your environment.

@@ -187,3 +188,10 @@ After the Oracle Globally Distributed Database Topology has been provisioned usi
## Debugging and Troubleshooting

To debug the Oracle Globally Distributed Database Topology provisioned using the Sharding Controller of Oracle Database Kubernetes Operator, follow this document: [Debugging and troubleshooting](./provisioning/debugging.md)
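As a starting point before consulting that document, the commands below are a minimal sketch of how the operator and pod state might be gathered. The namespace `shns`, the pod name `catalog-0`, and the operator deployment/namespace names are illustrative assumptions, not taken from this commit; adjust them to your installation.

```bash
# List the pods of the sharding topology (namespace "shns" is an example)
kubectl get pods -n shns

# Inspect events and restart reasons for a specific pod (pod name is an example)
kubectl describe pod catalog-0 -n shns

# Operator logs (deployment and namespace names assume a default install; adjust as needed)
kubectl logs -n oracle-database-operator-system deployment/oracle-database-operator-controller-manager
```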
+ ## Known Issues
+
+ * For both ENTERPRISE and FREE Images, if the GSM Pod is stopped at the worker node level using `crictl stopp`, GSM is left in a failed state and the `gdsctl` commands fail with the error **GSM-45034: Connection to GDS catalog is not established**. This is because the network namespace is lost after this change when checked from the GSM Pod. (See the status-check sketch after this list.)
+ * For both ENTERPRISE and FREE Images, rebooting the node running the CATALOG using `/sbin/reboot -f` results in **GSM-45076: GSM IS NOT RUNNING**. After hitting this issue and waiting for some time, the `gdsctl` commands start working again once the database connection is restored. After the stack comes back up following the node reboot, an unexpected restart of the GSM Pod may also be observed after some time.
+ * For both ENTERPRISE and FREE Images, rebooting the node running the SHARD Pod using `/sbin/reboot -f`, or stopping the Shard Database Pod from the worker node using the `crictl stopp` command, leaves the shard in an error state.
+ * For both ENTERPRISE and FREE Images, the GSM Pod restarts multiple times after a forced reboot of the node running it. This is because when the worker node comes back up, the GSM Pod is recreated but cannot get a database connection to the Catalog; in the meantime, the liveness probe fails, which restarts the Pod.
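If one of these states is hit, the following minimal sketch shows how the GSM and shard status might be checked. The pod name `gsm1-0`, the namespace `shns`, and the assumption that `gdsctl` is on the PATH inside the GSM container are illustrative only and not part of this commit.

```bash
# Check GSM status from inside the GSM pod (pod name and namespace are examples)
kubectl exec -it gsm1-0 -n shns -- bash -c "gdsctl status gsm"

# List the shard configuration as seen by GSM
kubectl exec -it gsm1-0 -n shns -- bash -c "gdsctl config shard"

# Watch for unexpected GSM pod restarts after a node reboot
kubectl get pods -n shns -w
kubectl get events -n shns --sort-by=.lastTimestamp
```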
