Merged
4 changes: 4 additions & 0 deletions src/azure-cli/HISTORY.rst
@@ -2,6 +2,10 @@

Release History
===============
**Azure Red Hat OpenShift**

* Add `--workspace-resource-id` flag to allow creation of an Azure Red Hat OpenShift cluster with monitoring
* Add `monitor_profile` to create Azure Red Hat OpenShift cluster with monitoring

**AKS**

5 changes: 5 additions & 0 deletions src/azure-cli/azure/cli/command_modules/acs/_help.py
@@ -896,6 +896,9 @@
- name: --customer-admin-group-id
type: string
short-summary: The Object ID of an Azure Active Directory Group that memberships will get synced into the OpenShift group "osa-customer-admins". If not specified, no cluster admin access will be granted.
- name: --workspace-resource-id
type: string
short-summary: The resource ID of an existing Log Analytics Workspace to use for storing monitoring data.

Member: Would something like --loganalytics-workspace-resource-id be better here? I know it's a lot to type. I don't mind whether it's "loganalytics", "monitoring", or some other name.

Contributor (author): To reduce complexity, I'm okay with the long --loganalytics-workspace-resource-id name. I can change it to that to keep the naming consistent.


examples:
@@ -907,6 +910,8 @@
text: az openshift create -g MyResourceGroup -n MyManagedCluster --aad-client-app-id {APP_ID} --aad-client-app-secret {APP_SECRET} --aad-tenant-id {TENANT_ID} --compute-count 5
- name: Create an OpenShift cluster using a custom vnet
text: az openshift create -g MyResourceGroup -n MyManagedCluster --vnet-peer "/subscriptions/0000000-0000-0000-0000-000000000000/resourceGroups/openshift-vnet/providers/Microsoft.Network/virtualNetworks/test"
- name: Create an OpenShift cluster with Log Analytics monitoring enabled
text: az openshift create -g MyResourceGroup -n MyManagedCluster --workspace-resource-id {WORKSPACE_RESOURCE_ID}
"""
Contributor: From reviewing the code, I understand that a customer can only add monitoring as part of cluster creation. What about the scenario of adding monitoring to an existing ARO cluster?

Member: I understand the RP will support changing the value; @troy0820, the az client will need to as well.

Contributor (author): That is correct. I think the scenario of adding monitoring to an existing cluster is valid, and I am open to feedback on how AKS does this.

Contributor: I have tested this on update: if you add the monitor profile, or have one and enable it, the agent is added to the cluster and logs flow. Similarly, you can change the workspace, and the agents will be updated and logs will flow to the new workspace.


helps['openshift delete'] = """
1 change: 1 addition & 0 deletions src/azure-cli/azure/cli/command_modules/acs/_params.py
@@ -321,6 +321,7 @@ def load_arguments(self, _):
c.argument('name', validator=validate_linux_host_name)
c.argument('compute_vm_size', options_list=['--compute-vm-size', '-s'])
c.argument('customer_admin_group_id', options_list=['--customer-admin-group-id'])
c.argument('workspace_resource_id')


def _get_default_install_location(exe_name):
39 changes: 31 additions & 8 deletions src/azure-cli/azure/cli/command_modules/acs/custom.py
@@ -81,6 +81,8 @@
from azure.mgmt.containerservice.v2019_04_30.models import OpenShiftRouterProfile
from azure.mgmt.containerservice.v2019_04_30.models import OpenShiftManagedClusterAuthProfile
from azure.mgmt.containerservice.v2019_04_30.models import NetworkProfile
from azure.mgmt.containerservice.v2019_09_30_preview.models import OpenShiftManagedCluster as OpenShiftManagedClusterMonitor # pylint: disable=line-too-long
from azure.mgmt.containerservice.v2019_09_30_preview.models import OpenShiftManagedClusterMonitorProfile

from ._client_factory import cf_container_services
from ._client_factory import cf_resource_groups
@@ -3124,6 +3126,7 @@ def openshift_create(cmd, client, resource_group_name, name,  # pylint: disable=
vnet_peer=None,
tags=None,
no_wait=False,
workspace_resource_id=None,
customer_admin_group_id=None):

if location is None:
@@ -3194,17 +3197,37 @@ def openshift_create(cmd, client, resource_group_name, name,  # pylint: disable=
namespace='Microsoft.Network', type='virtualNetwork',
name=vnet_peer
)
    if workspace_resource_id is not None:
        workspace_resource_id = workspace_resource_id.strip()
        if not workspace_resource_id.startswith('/'):
            workspace_resource_id = '/' + workspace_resource_id
        if workspace_resource_id.endswith('/'):
            workspace_resource_id = workspace_resource_id.rstrip('/')
        monitor_profile = OpenShiftManagedClusterMonitorProfile(enabled=True, workspace_resource_id=workspace_resource_id)  # pylint: disable=line-too-long
    else:
        monitor_profile = None

Contributor: In AKS, we create a default Azure Log Analytics workspace if the customer enabled the "monitoring" addon without a workspace resource ID. It looks like in the ARO case there is no monitoring addon concept, and enabling monitoring is governed by the --workspace-resource-id parameter. Is that right? Or will we go with the same behavior as AKS, i.e. creating a default Log Analytics workspace when the customer provides no --workspace-resource-id?

Contributor (author): ARO doesn't have a concept of add-ons at the moment. I do not believe we want to create a default Log Analytics workspace if they do not pass in --workspace-resource-id; we will not enable monitoring if it is not passed in.

Contributor: @troy0820, ok, thanks.
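The normalization in this hunk (trim whitespace, ensure a leading `/`, drop a trailing `/`) can be captured in a small standalone helper. This is a sketch for illustration only; `normalize_resource_id` is a hypothetical name and is not part of the PR:

```python
def normalize_resource_id(resource_id):
    """Normalize an ARM resource ID the way the diff above does:
    strip surrounding whitespace, ensure a leading '/', and
    remove any trailing '/'."""
    resource_id = resource_id.strip()
    if not resource_id.startswith('/'):
        resource_id = '/' + resource_id
    if resource_id.endswith('/'):
        resource_id = resource_id.rstrip('/')
    return resource_id
```

Note that `rstrip('/')` removes all trailing slashes, not just one, which is fine for this purpose since a valid ARM resource ID never ends in a slash.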

network_profile = NetworkProfile(vnet_cidr=vnet_prefix, peer_vnet_id=vnet_peer)

    if monitor_profile is not None:
        osamc = OpenShiftManagedClusterMonitor(
            location=location, tags=tags,
            open_shift_version="v3.11",
            network_profile=network_profile,
            auth_profile=auth_profile,
            agent_pool_profiles=agent_pool_profiles,
            master_pool_profile=agent_master_pool_profile,
            router_profiles=[default_router_profile],
            monitor_profile=monitor_profile)
    else:
        osamc = OpenShiftManagedCluster(
            location=location, tags=tags,
            open_shift_version="v3.11",
            network_profile=network_profile,
            auth_profile=auth_profile,
            agent_pool_profiles=agent_pool_profiles,
            master_pool_profile=agent_master_pool_profile,
            router_profiles=[default_router_profile])
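The branching above (pick the preview model only when a monitor profile exists) can be sketched with stand-in dataclasses. The real models come from `azure.mgmt.containerservice`; the classes and `build_cluster` helper below are hypothetical simplifications for illustration:

```python
from dataclasses import dataclass
from typing import Optional

# Stand-ins for the SDK models (hypothetical simplifications).
@dataclass
class MonitorProfile:
    enabled: bool
    workspace_resource_id: str

@dataclass
class Cluster:
    location: str
    monitor_profile: Optional[MonitorProfile] = None

def build_cluster(location, workspace_resource_id=None):
    """Attach a monitor profile only when a workspace ID is supplied,
    mirroring the if/else in the diff above."""
    if workspace_resource_id is not None:
        profile = MonitorProfile(enabled=True,
                                 workspace_resource_id=workspace_resource_id)
        return Cluster(location=location, monitor_profile=profile)
    return Cluster(location=location)
```

With this shape, omitting `--workspace-resource-id` leaves `monitor_profile` as `None`, so no monitoring agent is deployed, which matches the behavior the author describes in the thread above.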

try:
# long_running_operation_timeout=300