
Commit d64a14d

Release 2.7.5. See Changelog for more details
1 parent e1617b1 commit d64a14d

107 files changed: +8595 −5253 lines changed


CHANGELOG.md

Lines changed: 83 additions & 0 deletions
@@ -4,6 +4,89 @@ All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [2.7.5] - 2024-04-10
### Features

- Support for [Amazon Linux 2023](https://aws.amazon.com/linux/amazon-linux-2023/) as a BaseOS for compute nodes
  - eVDI on Amazon Linux 2023 is not currently supported
- Support for 5 new AWS Regions: `ap-northeast-3`, `ap-southeast-4`, `eu-central-2`, `eu-south-2`, and `il-central-1`.
  - Note that not all Base OSes are available in all regions
- Support for `RHEL8`, `RHEL9`, `Rocky8`, and `Rocky9` operating systems for both DCV and compute nodes
- Support for newer AWS instance types/families. This includes `hpc7a`, `hpc7g`, `r7iz`, `g6`, `gr6`, `g5`, `g5g`, `c7i`, `p5`, and many more (where supported in the region)
- Support for [AWS GovCloud](https://aws.amazon.com/govcloud-us/) Partition installation by default
  - Includes AMIs for regions `us-gov-west-1` and `us-gov-east-1`
  - Note that not all Base OSes are available in all regions within GovCloud
  - Set the environment variable `AWS_DEFAULT_REGION` to a GovCloud region prior to invoking `soca_installer.sh`
- Improved compatibility and support for SOCA deployments on `AWS Outposts` (compute, eVDI)
  - The default `VolumeType` in Secrets Manager needs to be configured to reflect the AWS Outposts `gp2` support
- Support has been added for multi-interface EFA instances such as the `p5.48xlarge`. For compute instances that support multiple EFA interfaces, all EFA interfaces will be created during provisioning.
- The SOCA Administrator can now define the list of approved eVDI instances via new configuration parameters:
  - `DCVAllowedInstances` - A list of patterns for allowed instance names. For example `["m7i-flex.*", "m7i.*", "m6i.*", "m5.*", "g6.*", "gr6.*", "g5.*", "g5g.*", "g4dn.*", "g4ad.*"]`
  - (Optional) `DCVAllowBareMetal` (defaults to `False`) - Allow listing of Bare Metal instances for eVDI
  - (Optional) `DCVAllowPreviousGenerations` (defaults to `False`) - Allow listing of previous generation(s) of instances for eVDI
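The `DCVAllowedInstances` entries read like wildcard patterns against instance-type names. As an illustration only — the function name is invented and SOCA's actual matching code may interpret the patterns differently (e.g. as regular expressions rather than shell globs) — such a filter could be sketched as:

```python
from fnmatch import fnmatch

# Pattern list mirroring the DCVAllowedInstances example above.
DCV_ALLOWED_INSTANCES = ["m7i-flex.*", "m7i.*", "m6i.*", "m5.*", "g6.*",
                         "gr6.*", "g5.*", "g5g.*", "g4dn.*", "g4ad.*"]

def filter_evdi_instances(instance_types, patterns=DCV_ALLOWED_INSTANCES):
    """Hypothetical helper: keep instance types matching any allowed pattern,
    treating the patterns as shell-style wildcards."""
    return [i for i in instance_types if any(fnmatch(i, p) for p in patterns)]
```

Note that with shell-style matching, `m5.*` matches `m5.large` but not `m5a.large`, since the dot is literal.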
### Changed

- Improved user experience when using `soca_installer.sh` in high-density VPC/subnet environments
- Improved the log message for an invalid `subnet_id` during job submission to include the specific `subnet_id` that triggered the error
- Updated Python from `3.9.16` to `3.9.19`
- Updated AWS Boto3/botocore from `1.26.91` to `1.34.71`
- Updated OpenMPI from `4.1.5` to `5.0.2`
- Updated OpenPBS from `22.05.11` to `23.06.06`
- Updated Monaco-Editor from `0.36.1` to `0.46.0`
- Updated AWS EFA installer from `1.22.1` to `1.31.0`
- Updated NICE DCV from `2023.0-14852` to `2023.1-16388`
- Updated NVM from `0.39.3` to `0.39.7`
- Updated Node from `16.15.0` to `16.20.2`
- Updated Lambda runtimes to Python `3.11` where applicable
- Misc. Python third-party module version updates
- Refactored installation items for newer AWS CDK methods
- Updated the default `OpenSearch` engine version to `2.11` when creating an OpenSearch deployment
- The use of `add_nodes.py` to add `AlwaysOn` nodes now allows the parameter `--instance_ami` to be optional; it defaults to the `CustomAMI` in the cluster configuration
- Download/install/configure `Redis` version `7.2.4` for the new SOCA cache backend
- The SOCA ELB/ALB is now created with the option `drop_invalid_headers` set to `True` by default
- Several UWSGI application server adjustments
  - Activate the UWSGI `stats` server on `127.0.0.1:9191`
  - Activate UWSGI `offload-threads`
  - Activate UWSGI `threaded-logger`
  - Activate UWSGI `memory-report`
  - Activate UWSGI microsecond logging
  - Activate UWSGI logging of the `X-Forwarded-For` header so that the client IP address is captured instead of the ELB IP address
  - Added `uwsgitop` to assist in UWSGI performance investigations. This can be accessed via the command `uwsgitop localhost:9191` from the scheduler.
- Adjusted the Flask session backend from `SQLite` to `redis`. This results in much faster WebUI/session handling.
  - **NOTE** - Upgrade scenarios should take the UWSGI changes into account and manually perform Redis installation/configuration and session migration.
- `Launch Tenancy` and `Launch Host` have been added as options when registering an AMI in SOCA. These will be used during DCV session creation.
  - For more information on launch tenancy, see the [documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-instance.html).
- Updated the default OpenSearch instance from `m5.large.search` to `m6g.large.search`
- Updated the default VDI choices from the `m5` to the `m6i` instance family
- `instance_ami` is no longer mandatory when specifying a custom `base_os`. SOCA will determine which default AMI to use automatically via the `CustomAMIMap` configuration stored in Secrets Manager.
- Changed the default `instance_type` for all base HPC queues from the `c5` to the `c6i` instance family
- Updated the DCV session default `Storage Size` to `40GB` to accommodate additional locally installed software such as GPU drivers, libs, etc.
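The UWSGI adjustments above correspond to standard uWSGI options; a hedged sketch of how they might appear in an ini file (the file location, surrounding options, and thread count are assumptions, not SOCA's actual configuration):

```ini
; Hypothetical excerpt of a uwsgi.ini reflecting the adjustments above
[uwsgi]
stats = 127.0.0.1:9191        ; stats server, readable with uwsgitop
offload-threads = 2           ; offload blocking I/O to dedicated threads
threaded-logger = true        ; move log writes to a separate thread
memory-report = true          ; include memory usage in request logs
log-micros = true             ; report request times in microseconds
; log X-Forwarded-For so the client IP is captured rather than the ELB's
log-x-forwarded-for = true
```

With the stats server active, `uwsgitop localhost:9191` from the scheduler shows per-worker request rates and memory.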
### Fixes

- `DryRun` job submission was not taking into account the `IMDS` settings for the cluster. This could cause job submission to fail `DryRun` and not be submitted.
- Installation using an existing `OpenSearch` / `ElasticSearch` domain was not working as expected. This has been fixed.
- Avoid sending `CpuOptions` with `hpc7a`, `hpc7g`, `g5`, and `g5g` instances. This fixes launching on these instance families.
- Properly detect newer AWS metal instances when determining if `CpuOptions` is supported during instance launch. This allows launching `c7i.metal-24xl`, `c7i.metal-48xl` (and others) to function properly.
- On the `scheduler` post-install, extract/compile `OpenMPI` on a local EBS volume instead of EFS (can reduce compile time by `50%+`)
- During HPC job submission within the WebUI, the multi-select UI element `Checkbox Group` was not passed correctly to the underlying job scripting.
  - `Checkbox Group` element values are delimited by comma by default (e.g. `option1,option2`).
  - Care should be taken to not have option values contain the delimiter character. This can be updated in `submit_job.py` as needed. (Option name fields can contain the delimiter character.)
- During DCV session creation, the user was allowed to enter a session name that exceeded the allowable length for a CloudFormation stack name. This has been adjusted to trim the session name to the appropriate length (32 characters).
- During DCV session creation, if the session name contained an underscore (`_`) the session would produce an error and not be created.
- During DCV session creation, the `Storage Size` was allowed to be lower than that of a stored AMI. This now defaults / auto-sizes to the AMI specification.
- Bootstrap tooltips are now displayed using the correct CSS on the Remote Desktop pages
- Previously, during invocation of `soca_installer.sh` with existing resources, only VPCs and subnets with AWS `Name` tags would be selectable. This restriction has been eased to allow resources without `Name` tags to be selectable.
- Under certain conditions in an Active Directory (AD) environment, the `scheduler` computer object could be mistakenly replaced in AD by an incoming compute or VDI node. This was due to NetBIOS name length restrictions causing name conflicts. This has been corrected.
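The two DCV session-name fixes above amount to making the name safe for use in a CloudFormation stack name (no underscores, at most 32 characters per the changelog). A minimal sketch — the function name and exact replacement rules are illustrative assumptions, not SOCA's code:

```python
import re

# Trim length mirroring the 32-character limit mentioned above.
MAX_SESSION_NAME_LEN = 32

def sanitize_session_name(name: str) -> str:
    """Hypothetical cleanup: replace characters invalid in a CloudFormation
    stack name (e.g. underscores) with hyphens, then trim to the limit."""
    cleaned = re.sub(r"[^A-Za-z0-9-]", "-", name)
    return cleaned[:MAX_SESSION_NAME_LEN]
```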
### Known Caveats

- Web sessions can be stored in the back-end (redis) that relate to API calls or other situations where return of the session is not expected. These sessions will be cleaned up automatically by Redis when the TTL expires (24 hours).
- On the Remote Desktop selection for instance types, the sorting, grouping, and custom names of the AWS instances are not configurable by the SOCA Administrator for instances allowed via wildcard patterns (e.g. `g5.*`).
  - This can cause 'selection fatigue' for end-users when a large number of instance types are allowed.
  - The SOCA Administrator can configure the static list at the top before the generated list appears. See the `cluster_web_ui/templates/remote_desktop.html` (Linux) and `cluster_web_ui/templates/remote_desktop_windows.html` (Windows) files for examples/defaults.
  - The SOCA Administrator can reduce the default instances allowed by editing the AWS Secrets Manager configuration entry for the cluster and refreshing the configuration on the cluster.
## [2.7.4] - 2023-05-08

### Features

docs/tutorials/install-soca-cluster.md

Lines changed: 1 addition & 1 deletion

@@ -47,7 +47,7 @@ Execute `soca_install.sh` script located in the `installer` folder:

 ~~~bash
 # Assuming your current working directory is the root level of SOCA
-./installer/soca_installer.sh
+./installer/soca_install.sh
 ~~~

 You will then be prompted for your cluster parameters. Follow the instructions and choose a S3 bucket you own, the name of your cluster, the SSH keypair to use and other cluster parameters.

docs/tutorials/integration-ec2-job-parameters.md

Lines changed: 7 additions & 4 deletions

@@ -239,12 +239,15 @@ Below is a list of parameters you can specify when you request your simulation t

 #### fsx_lustre_deployment_type

 - Description: Choose what type of FSx for Lustre you want to deploy
-- Allowed Valuess: `SCRATCH_1` `SCRATCH_2` `PERSISTENT_1` (case insensitive)
+- Allowed Values: `SCRATCH_1` `SCRATCH_2` `PERSISTENT_1` `PERSISTENT_2` (case insensitive)
 - Default Value: `SCRATCH_2`
-- Example: `-l fsx_lustre_deployment_type=scratch_2`: Provision a FSx for Lustre with SCRATCH_2 type
+- Example: `-l fsx_lustre_deployment_type=scratch_2`: Provision a FSx for Lustre with `SCRATCH_2` type

 !!!note
-    If `fsx_lustre_size` is not specified, default to 1200 GB (smallest size supported)
+    If `fsx_lustre_size` is not specified, default to 1200 GB (the smallest size supported)
+
+!!!note
+    Confirm supported region deployment types in the FSx/Lustre User Guide - https://docs.aws.amazon.com/fsx/latest/LustreGuide/using-fsx-lustre.html

 !!!warning "Pre-Requisite"
     This parameter is ignored unless you have specified `fsx_lustre=True`

@@ -257,7 +260,7 @@ Below is a list of parameters you can specify when you request your simulation t

 - Example: `-l fsx_lustre_per_unit_throughput=250`:

 !!!note
-    Per Unit Throughput is only avaible when using `PERSISTENT_1` FSx for Lustre
+    Per Unit Throughput is only available when using `PERSISTENT_1` or `PERSISTENT_2` deployment types.

 !!!warning "Pre-Requisite"
     This parameter is ignored unless you have specified `fsx_lustre=True`
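Since `fsx_lustre_deployment_type` is case-insensitive with a documented default, a submission-side check could normalize the value before use. A sketch under stated assumptions — the function name is illustrative, not SOCA's actual `submit_job.py` code:

```python
# Allowed deployment types, per the parameter description above.
ALLOWED_DEPLOYMENT_TYPES = {"SCRATCH_1", "SCRATCH_2", "PERSISTENT_1", "PERSISTENT_2"}

def normalize_deployment_type(value: str, default: str = "SCRATCH_2") -> str:
    """Upper-case the user-supplied value and validate it; fall back to the
    documented default (SCRATCH_2) when nothing was supplied."""
    if not value:
        return default
    value = value.upper()
    if value not in ALLOWED_DEPLOYMENT_TYPES:
        raise ValueError(f"Invalid fsx_lustre_deployment_type: {value}")
    return value
```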

docs/workshops/Synopsys-Physical-Verification/modules/01-web-login.md

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@ The goal of this module is to login to SOCA web interface and start a remote des

 ## Step 1: Login to SOCA Web UI

-1. Click one of the links below depending on the session you're attending to login to corresponding SOCA web interface.
+1. Click one of the links below depending on the session you're attending to log in to corresponding SOCA web interface.

 Workshop sessions are not active at this time!!
docs/workshops/Synopsys-Physical-Verification/modules/02-login-copy.md

Lines changed: 2 additions & 2 deletions

@@ -1,6 +1,6 @@

 # Lab 2: Login to Remote Desktop and Copy Lab Data

-The goal with this lab is to login to the remote cloud desktop visualization and experience using it. You'll also copy the data required for the subsequent labs.
+The goal with this lab is to log in to the remote cloud desktop visualization and experience using it. You'll also copy the data required for the subsequent labs.

 ## Step 1: Log into your session

@@ -28,6 +28,6 @@ By now your remote desktop session should be ready and you should see the follow

 1. Source environment settings by typing `source setup.csh` and hit enter

-In this lab you learned how to login to desktop cloud visualiztion instance, and copied the lab data.
+In this lab you learned how to log in to desktop cloud visualiztion instance, and copied the lab data.

 You've completed this lab. Click **Next** to move to the next lab.

docs/workshops/Synopsys-Physical-Verification/modules/03-submit-elasti.md

Lines changed: 2 additions & 2 deletions

@@ -14,7 +14,7 @@ Synopsys IC Validator (ICV) has the ability to request CPUs when it needs additi

 1. Run the `qstat` command to view the status of the jobs.

-1. You can also view job status by clicking on **My Job Queue** in the left side navigation bar in SOCA portal under **PROFILE** section as shown in the screen shot below:
+1. You can also view job status by clicking on **My Job Queue** on the left side navigation bar in SOCA portal under **PROFILE** section as shown in the screen shot below:

 ![](../imgs/my-job-queue.png)

@@ -41,7 +41,7 @@ Synopsys IC Validator (ICV) has the ability to request CPUs when it needs additi

 1. Monitor the progress of the ELASTI test case by typing this command: `icv_dashboard -keys hSdSVaCfhpv elastic_run/run_details/saed32nm_1p9m_drc_rules.dp.log &`

-1. As the job progresses, ICV will request more CPU resources or release idle resources. In this example, it will submit a new job to the cluster so it can obtain additional resources dynamically. You an monitor the status of the jobs by typing `` watch -n 10 "qstat -u `whoami`" `` command in the terminal to keep monitoring the status of jobs every 10 seconds.
+1. As the job progresses, ICV will request more CPU resources or release idle resources. In this example, it will submit a new job to the cluster so it can obtain additional resources dynamically. You can monitor the status of the jobs by typing `` watch -n 10 "qstat -u `whoami`" `` command in the terminal to keep monitoring the status of jobs every 10 seconds.

 1. Depending on resource availability in the cluster, SOCA might need to create additional instances for the new job. Once the resources become available and the job status changes to running, the CPU history section in the ICV dashboard would be updated to reflect the additioanl CPUs as shown below.

docs/workshops/Synopsys-Physical-Verification/modules/04-submit-explorer.md

Lines changed: 2 additions & 2 deletions

@@ -28,7 +28,7 @@ IC Validator offers Explorer functions both on DRC and LVS. This lab only talks

 ![](../imgs/icvwb-icv-vue.jpg)

-1. Load the DRC explorer vue file by browsing to EXP_TOP.vue. Click on the browse icon then double click on explorer_run directory and select EXP_TOP.vue
+1. Load the DRC explorer vue file by browsing to EXP_TOP.vue. Click on the browse icon then double-click on explorer_run directory and select EXP_TOP.vue

 ## Step 4: Heat Map

@@ -53,7 +53,7 @@ IC Validator offers Explorer functions both on DRC and LVS. This lab only talks

 ![](../imgs/icvwb-vue-heat-map-overlay-layout.jpg)

-1. You can highlight error marker from the heat map. Select **M4.S.1** rule from the violation section. Then right click on the heat map window, then click on "Highlight top-cell error in current window". This option highlights errors for violations in the current zoomed violation heat map window.
+1. You can highlight error marker from the heat map. Select **M4.S.1** rule from the violation section. Then right-click on the heat map window, then click on "Highlight top-cell error in current window". This option highlights errors for violations in the current zoomed violation heat map window.

 ![](../imgs/icvwb-vue-heat-map-m4.jpg)

docs/workshops/Synopsys-Verification/Section-1/01-deploy-env.md

Lines changed: 2 additions & 2 deletions

@@ -24,7 +24,7 @@ This automated AWS CloudFormation template deploys a scale-out computing environ

 !!! warning
     The stack name must be less than 20 characters and must be lower-case only.

-1. Under **Parameters**, modify the the last four parameters, which are marked with **REQUIRED**. Leave all other fields with their default values. These are variables passed the CloudFormation automation that deploys the environment.
+1. Under **Parameters**, modify the last four parameters, which are marked with **REQUIRED**. Leave all other fields with their default values. These are variables passed the CloudFormation automation that deploys the environment.

 |Parameter|Default|Description
 ----------|-------|-----------

@@ -53,7 +53,7 @@ This automated AWS CloudFormation template deploys a scale-out computing environ

 You can view the status of the stack in the AWS CloudFormation console in the **Status** column. You should see a status of `CREATE_COMPLETE` in approximately 35 minutes.

-By now you've learned how to deploy Scale-Out Computing on AWS to create a compute cluster for EDA Workloads in an AWS account. For the remaining portion of the this tutorial, you'll login to a different pre-built cluster that has the following items:
+By now you've learned how to deploy Scale-Out Computing on AWS to create a compute cluster for EDA Workloads in an AWS account. For the remaining portion of this tutorial, you'll log in to a different pre-built cluster that has the following items:

 * Synopsys VCS and Verdi software pre-installed,

docs/workshops/Synopsys-Verification/Section-2/02-web-login.md

Lines changed: 6 additions & 8 deletions

@@ -1,20 +1,18 @@

 # Lab 2: Login to SOCA Web UI and Launch Remote Desktop Session

-The goal of this module is to login to SOCA web interface and start a remote desktop session from which you will run applications and submit jobs into the cluster. You will use the cluster's management portal to start and monitor the session.
+The goal of this module is to log in to SOCA web interface and start a remote desktop session from which you will run applications and submit jobs into the cluster. You will use the cluster's management portal to start and monitor the session.

 ## Step 1: Login to SOCA Web UI

-1. Click one of the links below depending on the session you're attending to login to corresponding SOCA web interface
+1. Click one of the links below depending on the session you're attending to log in to corresponding SOCA web interface

-Workshop sessions are not active at this time!!
-
-[]: # '[**Click here for North America Sessions**](https://soca-tko260-viewer-1219550143.us-west-2.elb.amazonaws.com/login){target=_blank}'
+[**Click here for North America Sessions**](https://soca-tko260-viewer-1219550143.us-west-2.elb.amazonaws.com/login){target=_blank}

 []: # '[**Click here for North America - Private Session**](https://soca-vcs-viewer-1127745173.us-east-1.elb.amazonaws.com/login){target=_blank}'

-[]: # '[**Click here for Israel/EMEA Sessions**](https://soca-261-frankfurt-viewer-601308495.eu-central-1.elb.amazonaws.com/login){target=_blank}'
+[**Click here for Israel/EMEA Sessions**](https://soca-261-frankfurt-viewer-601308495.eu-central-1.elb.amazonaws.com/login){target=_blank}

-[]: # '[**Click here for Asia Sessions**](http://soca-workshop-viewer-1241784048.ap-southeast-1.elb.amazonaws.com/login){target=_blank}'
+[**Click here for Asia Sessions**](http://soca-workshop-viewer-1241784048.ap-southeast-1.elb.amazonaws.com/login){target=_blank}

 ![SOCA Web UI](../imgs/soca-console-login.png)

@@ -46,7 +44,7 @@ Under **Linux Session #1** group:

 After you click **Launch my session**, the SOCA solution will create a new EC2 instance with 8 vCPUs and 32GB of memory and install all desktop required packages including Gnome.

-You will see an message asking you to wait up to 10 minutes before being able to access your remote desktop.
+You will see a message asking you to wait up to 10 minutes before being able to access your remote desktop.

 !!! warning
     Please wait till the desktop instance is ready before moving on to the next step.

docs/workshops/Synopsys-Verification/Section-2/04-submit-batch.md

Lines changed: 1 addition & 1 deletion

@@ -15,7 +15,7 @@ Next, you'll submit four jobs into the cluster, each job requests a specific ins

 1. Run the `qstat` command to view the status of the jobs.

-1. You can also view job status by clicking on **My Job Queue** in the left side navigation bar in SOCA portal under **PROFILE** section as shown in the screen shot below:
+1. You can also view job status by clicking on **My Job Queue** on the left side navigation bar in SOCA portal under **PROFILE** section as shown in the screen shot below:

 ![](../imgs/my-job-queue.png)
