Clarify lifecycle of monitoring exporter integration #2366


Merged
rjeberhard merged 3 commits into main from exporter-integration-clarification on May 19, 2021

Conversation

@rjeberhard (Member)

No description provided.

@rosemarymarano (Member) left a comment

Minor nits, otherwise fine.

@@ -37,7 +37,7 @@ The specification of the operation of the WebLogic domain. Required.
| `managedServers` | array of [Managed Server](#managed-server) | Lifecycle options for individual Managed Servers, including Java options, environment variables, additional Pod content, and the ability to explicitly start, stop, or restart a named server instance. The `serverName` field of each entry must match a Managed Server that already exists in the WebLogic domain configuration or that matches a dynamic cluster member based on the server template. |
| `maxClusterConcurrentShutdown` | number | The default maximum number of WebLogic Server instances that a cluster will shut down in parallel when it is being partially shut down by lowering its replica count. You can override this default on a per cluster basis by setting the cluster's `maxConcurrentShutdown` field. A value of 0 means there is no limit. Defaults to 1. |
| `maxClusterConcurrentStartup` | number | The maximum number of cluster member Managed Server instances that the operator will start in parallel for a given cluster, if `maxConcurrentStartup` is not specified for a specific cluster under the `clusters` field. A value of 0 means there is no configured limit. Defaults to 0. |
-| `monitoringExporter` | [Monitoring Exporter Specification](#monitoring-exporter-specification) | Configuration for the use of the WebLogic Monitoring Exporter as part of this domain. |
+| `monitoringExporter` | [Monitoring Exporter Specification](#monitoring-exporter-specification) | Automatic deployment and configuration of the WebLogic Monitoring Exporter. If specified, the operator will deploy a sidecar container alongside each WebLogic server instance that runs the exporter. WebLogic Server instances that are already running when the `monitoringExporter` field is created or deleted will not be restarted simply to provision or remove the exporter's sidecar container. When any given server is restarted for another reason, such as a change to the `restartVersion`, then the newly created pod will have the exporter sidecar or not, as appropriate. See https://github.com/oracle/weblogic-monitoring-exporter. |

WebLogic server -> WebLogic Server
created or deleted -> created or deleted, (comma)

@tbarnes-us

Related note:

IMO, it'd be helpful if the https://github.com/oracle/weblogic-monitoring-exporter documentation directly cross-referenced this new related information in the WKO doc. There are two places an x-ref would help: the introduction bullet, and the 'Sidecar' section's intro (which should be expanded to indicate that the sidecar feature is primarily intended to help with WKO integration).

@rjeberhard (Member, Author)

@tbarnes-us, I'm going to work on some simple monitoring exporter integration documentation next (for WKO) and then I'll have a more appropriate page to link from the monitoring exporter site.

@@ -37,7 +37,7 @@ The specification of the operation of the WebLogic domain. Required.
| `managedServers` | array of [Managed Server](#managed-server) | Lifecycle options for individual Managed Servers, including Java options, environment variables, additional Pod content, and the ability to explicitly start, stop, or restart a named server instance. The `serverName` field of each entry must match a Managed Server that already exists in the WebLogic domain configuration or that matches a dynamic cluster member based on the server template. |
| `maxClusterConcurrentShutdown` | number | The default maximum number of WebLogic Server instances that a cluster will shut down in parallel when it is being partially shut down by lowering its replica count. You can override this default on a per cluster basis by setting the cluster's `maxConcurrentShutdown` field. A value of 0 means there is no limit. Defaults to 1. |
| `maxClusterConcurrentStartup` | number | The maximum number of cluster member Managed Server instances that the operator will start in parallel for a given cluster, if `maxConcurrentStartup` is not specified for a specific cluster under the `clusters` field. A value of 0 means there is no configured limit. Defaults to 0. |
-| `monitoringExporter` | [Monitoring Exporter Specification](#monitoring-exporter-specification) | Configuration for the use of the WebLogic Monitoring Exporter as part of this domain. |
+| `monitoringExporter` | [Monitoring Exporter Specification](#monitoring-exporter-specification) | Automatic deployment and configuration of the WebLogic Monitoring Exporter. If specified, the operator will deploy a sidecar container alongside each WebLogic Server instance that runs the exporter. WebLogic Server instances that are already running when the `monitoringExporter` field is created or deleted, will not be restarted simply to provision or remove the exporter's sidecar container. When any given server is restarted for another reason, such as a change to the `restartVersion`, then the newly created pod will have the exporter sidecar or not, as appropriate. See https://github.com/oracle/weblogic-monitoring-exporter. |

It's not 100% clear whether the monitoring exporter sidecar will or will not be added to existing pods that are already running a Managed Server. I think you are saying that it will NOT be added; but you do say that those pods will not be restarted (which is also good to know).
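
For context, here is a minimal sketch of how the field discussed above might be set in a Domain resource. The subfields under `monitoringExporter` (`image`, `configuration`), along with the sample names, versions, and query shown, are assumptions for illustration only and are not part of this PR; the authoritative field list is the Monitoring Exporter Specification section that the table row links to.

```yaml
# Hypothetical Domain fragment; everything marked "assumed" is illustrative only.
apiVersion: "weblogic.oracle/v8"        # assumed schema version
kind: Domain
metadata:
  name: sample-domain1                  # assumed sample name
  namespace: sample-domain1-ns          # assumed sample namespace
spec:
  restartVersion: "1"                   # bumping this restarts servers; per the new text,
                                        # that is when a recreated pod gains or loses the sidecar
  monitoringExporter:                   # presence of this field enables the exporter sidecar
    image: "ghcr.io/oracle/weblogic-monitoring-exporter:2.0"   # assumed sidecar image and tag
    configuration:                      # assumed: exporter configuration in its native YAML form
      metricsNameSnakeCase: true
      queries:
      - applicationRuntimes:
          key: name
          componentRuntimes:
            type: WebAppComponentRuntime
            prefix: webapp_config_
            key: name
            values: [openSessionsCurrentCount, openSessionsHighCount]
```

Under the semantics described in the added row, adding or removing this block would not by itself restart running WebLogic Server instances; the sidecar is only added or removed when each server's pod is recreated for some other reason, such as a `restartVersion` change.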

rjeberhard merged commit 499304b into main on May 19, 2021
rjeberhard deleted the exporter-integration-clarification branch on January 31, 2022