Merge main into staging #637

Merged — 3 commits merged on Jan 30, 2025

Changes from all commits:
15 changes: 8 additions & 7 deletions .github/actions/demo-notebook/action.yml
@@ -6,9 +6,9 @@ inputs:
description: "Load the created .env file"
required: true

runs:
using: "composite"
steps:
- name: Install python3 for Jupyter Notebooks
shell: bash
run: |
@@ -18,10 +18,11 @@ runs:
- name: Install validmind for notebook execution
shell: bash
run: |
pip install validmind
pip install validmind[llm]
pip install fairlearn aequitas
pip install shap==0.44.1
pip install anywidget

- name: Ensure .env file is available
shell: bash
@@ -36,9 +37,9 @@ runs:
shell: bash
if: ${{ steps.find_env.outcome == 'success' }}
run: |
cd site
source ../${{ inputs.env_file }}
quarto render --profile exe-demo notebooks/tutorials/intro_for_model_developers_EXECUTED.ipynb &> render_errors.log || {
echo "Execute for intro_for_model_developers_EXECUTED.ipynb failed";
cat render_errors.log;
exit 1;
15 changes: 8 additions & 7 deletions .github/actions/prod-notebook/action.yml
@@ -6,9 +6,9 @@ inputs:
description: "Load the created .env file"
required: true

runs:
using: "composite"
steps:
- name: Install python3 for Jupyter Notebooks
shell: bash
run: |
@@ -18,10 +18,11 @@ runs:
- name: Install validmind for notebook execution
shell: bash
run: |
pip install validmind
pip install validmind[llm]
pip install fairlearn aequitas
pip install shap==0.44.1
pip install anywidget

- name: Ensure .env file is available
shell: bash
@@ -36,9 +37,9 @@ runs:
shell: bash
if: ${{ steps.find_env.outcome == 'success' }}
run: |
cd site
source ../${{ inputs.env_file }}
quarto render --profile exe-prod notebooks/tutorials/intro_for_model_developers_EXECUTED.ipynb &> render_errors.log || {
echo "Execute for intro_for_model_developers_EXECUTED.ipynb failed";
cat render_errors.log;
exit 1;
15 changes: 8 additions & 7 deletions .github/actions/staging-notebook/action.yml
@@ -6,9 +6,9 @@ inputs:
description: "Load the created .env file"
required: true

runs:
using: "composite"
steps:
- name: Install python3 for Jupyter Notebooks
shell: bash
run: |
@@ -18,10 +18,11 @@ runs:
- name: Install validmind for notebook execution
shell: bash
run: |
pip install validmind
pip install validmind[llm]
pip install fairlearn aequitas
pip install shap==0.44.1
pip install anywidget

- name: Ensure .env file is available
shell: bash
@@ -36,9 +37,9 @@ runs:
shell: bash
if: ${{ steps.find_env.outcome == 'success' }}
run: |
cd site
source ../${{ inputs.env_file }}
quarto render --profile exe-staging notebooks/tutorials/intro_for_model_developers_EXECUTED.ipynb &> render_errors.log || {
echo "Execute for intro_for_model_developers_EXECUTED.ipynb failed";
cat render_errors.log;
exit 1;
14 changes: 7 additions & 7 deletions .github/workflows/deploy-docs-prod.yaml
@@ -28,8 +28,8 @@ jobs:

- name: Render prod docs site
run: |
cd site
quarto render --profile production &> render_errors.log || {
echo "Quarto render failed immediately";
cat render_errors.log;
exit 1;
@@ -39,11 +39,11 @@ jobs:
id: create_env
run: |
touch .env
echo VM_API_HOST=${{ secrets.PLATFORM_API_HOST }} >> .env
echo VM_API_KEY=${{ secrets.PLATFORM_API_KEY }} >> .env
echo VM_API_SECRET=${{ secrets.PLATFORM_API_SECRET }} >> .env
echo VM_API_MODEL=${{ secrets.PLATFORM_DEV_MODEL }} >> .env
cat .env

# Only execute the prod notebook if .env file is created
- name: Execute prod Intro for Model Developers notebook
14 changes: 7 additions & 7 deletions .github/workflows/deploy-docs-staging.yaml
@@ -28,8 +28,8 @@ jobs:

- name: Render staging docs site
run: |
cd site
quarto render --profile staging &> render_errors.log || {
echo "Quarto render failed immediately";
cat render_errors.log;
exit 1;
@@ -39,11 +39,11 @@ jobs:
id: create_env
run: |
touch .env
echo VM_API_HOST=${{ secrets.PLATFORM_API_HOST }} >> .env
echo VM_API_KEY=${{ secrets.PLATFORM_API_KEY }} >> .env
echo VM_API_SECRET=${{ secrets.PLATFORM_API_SECRET }} >> .env
echo VM_API_MODEL=${{ secrets.PLATFORM_DEV_MODEL }} >> .env
cat .env

# Only execute the staging notebook if .env file is created
- name: Execute staging Intro for Model Developers notebook
18 changes: 9 additions & 9 deletions .github/workflows/validate-docs-site.yaml
@@ -27,8 +27,8 @@ jobs:

- name: Render demo docs site
run: |
cd site
quarto render --profile development &> render_errors.log || {
echo "Quarto render failed immediately";
cat render_errors.log;
exit 1;
@@ -52,11 +52,11 @@ jobs:
id: create_env
run: |
touch .env
echo VM_API_HOST=${{ secrets.PLATFORM_API_HOST }} >> .env
echo VM_API_KEY=${{ secrets.PLATFORM_API_KEY }} >> .env
echo VM_API_SECRET=${{ secrets.PLATFORM_API_SECRET }} >> .env
echo VM_API_MODEL=${{ secrets.PLATFORM_DEV_MODEL }} >> .env
cat .env

# Only execute the demo notebook if .env file is created
- name: Execute demo Intro for Model Developers notebook
@@ -66,7 +66,7 @@ jobs:
with:
env_file: .env

- name: Test for warnings or errors
run: |
if grep -q 'WARN:\|ERROR:' site/render_errors.log; then
echo "Warnings or errors detected during Quarto render"
@@ -76,7 +76,7 @@
echo "No warnings or errors detected during Quarto render"
fi

# Demo bucket is in us-east-1
- name: Configure AWS credentials
run: aws configure set aws_access_key_id ${{ secrets.AWS_ACCESS_KEY_ID }} && aws configure set aws_secret_access_key ${{ secrets.AWS_SECRET_ACCESS_KEY }} && aws configure set default.region us-east-1

29 changes: 13 additions & 16 deletions site/guide/model-documentation/export-documentation.qmd
@@ -7,6 +7,11 @@ aliases:

Export your model documentation or validation reports as Microsoft Word files (`.docx`) for use outside of the {{< var validmind.platform >}}.

::: {.callout}
{{< var vm.product >}} supports Word 365, Word 2019, Word 2016, and Word 2013.
:::


::: {.attn}

## Prerequisites
@@ -16,10 +21,6 @@ Export your model documentation or validation reports as Microsoft Word files (`
- [x] Model documentation is completed or in progress.[^2]
- [x] You are a [{{< fa code >}} Developer]{.bubble} or [{{< fa circle-check >}} Validator]{.bubble}, or assigned another role with sufficient permissions to perform the tasks in this guide.[^3]

::: {.callout}
{{< var vm.product >}} supports Word 365, Word 2019, Word 2016, and Word 2013.
:::

:::

## Export model documentation
@@ -32,30 +33,23 @@ Export your model documentation or validation reports as Microsoft Word files (`

4. In the right sidebar, click **{{< fa download >}} Export Document**.

5. Configure the export options:

<!--- - Check **Include comment threads** to include comment threads in the exported file.
- Check **Section activity logs** to include a history of changes in each section of the documentation. --->
- Choose the file format for export. We currently support exporting to `.docx` for Microsoft Word format.

6. Click **{{< fa file-arrow-down >}} Download File** to download the file locally on your machine.
7. Click **{{< fa file-arrow-down >}} Download File** to download the file locally on your machine.

## Export validation report

1. In the left sidebar, click **{{< fa cubes >}} Inventory**.

2. Select a model or find your model by applying a filter or searching for it.[^5]

<!--- NR Mar 2024 this option does not yet exist --->
3. In the left sidebar that appears for your model, click **{{< fa shield >}} Validation Report**.

4. In the right sidebar, click **{{< fa download >}} Export Document**.

5. Configure the export options:
5. Configure what is exported in your document by checking off the relevant boxes:

<!--- - Check **Include comment threads** to include comment threads in the exported file.
- Check **Section activity logs** to include a history of changes in each section of the documentation. --->
- Choose the file format for export. We currently support exporting to `.docx` for Microsoft Word format.
- Include compliance summary[^6]
- Include validation guidelines information[^7]
- Include validation guideline adherence details

6. Click **{{< fa file-arrow-down >}} Download File** to download the file locally on your machine.

@@ -80,3 +74,6 @@ Export your model documentation or validation reports as Microsoft Word files (`

[^5]: [Working with the model inventory](/guide/model-inventory/working-with-model-inventory.qmd#search-filter-and-sort-models)

[^6]: [Assess compliance](/guide/model-validation/assess-compliance.qmd)

[^7]: [Manage validation guidelines](/guide/model-validation/manage-validation-guidelines.qmd)
Binary file added site/guide/monitoring/example-f1-score.png
Binary file modified site/guide/monitoring/metric-over-time-data.png
30 changes: 21 additions & 9 deletions site/guide/monitoring/work-with-metrics-over-time.qmd
@@ -5,18 +5,27 @@ date: last-modified

Once generated via the {{< var validmind.developer >}}, view and add metrics over time to your ongoing monitoring plans in the {{< var validmind.platform >}}.

Metrics over time refers to the continued monitoring of a model's performance once it is deployed. Tracking how a model performs as new data is introduced or conditions change ensures that it remains accurate and reliable in real-world environments where data distributions or market conditions shift.

- Model performance is determined by continuously measuring metrics and comparing them over time to detect degradation, bias, or shifts in the model's output.
- Performance data is collected and tracked over time, often using a rolling window approach or real-time monitoring tools with the same metrics used in testing, but observed across different periods.
- Continuous tracking helps to identify if and when a model needs to be recalibrated, retrained, or even replaced due to performance deterioration or changing conditions.
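The rolling-window approach described above can be sketched in a few lines of plain Python. This is an illustrative sketch only — the `f1` and `rolling_f1` helpers and the window size are assumptions for demonstration, not part of the {{< var validmind.developer >}} API:

```python
from collections import deque

def f1(y_true, y_pred):
    # Binary F1 from scratch: harmonic mean of precision and recall.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def rolling_f1(y_true, y_pred, window=5):
    # Score only the most recent `window` observations at each step, so a
    # drop in the output flags recent degradation rather than being
    # averaged away by older, healthier predictions.
    true_w, pred_w = deque(maxlen=window), deque(maxlen=window)
    scores = []
    for t, p in zip(y_true, y_pred):
        true_w.append(t)
        pred_w.append(p)
        if len(true_w) == window:
            scores.append(f1(true_w, pred_w))
    return scores
```

Each score in the returned series would then be logged with a timestamp so the {{< var validmind.platform >}} can plot the metric over time and compare it against any thresholds you set.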

::: {.column-margin}
::: {.callout}
## **[Log metrics over time {{< fa hand-point-right >}}](/notebooks/how_to/log_metrics_over_time.ipynb)**

Learn how to log metrics over time, set thresholds, and analyze model performance trends with our Jupyter Notebook sample.
:::

:::

::: {.attn}

## Prerequisites

- [x] {{< var link.login >}}
- [x] Metrics over time have already been logged via the {{< var validmind.developer >}} for your model.[^1]
- [x] You are a [{{< fa code >}} Developer]{.bubble} or assigned another role with sufficient permissions to perform the tasks in this guide.[^2]

:::
Expand Down Expand Up @@ -44,14 +53,16 @@ Metrics over time refers to the continued monitoring of a model's performance on
- Select the metric over time to insert into the model documentation from the list of available metrics.
- Search by name using **{{<fa magnifying-glass >}} Search** on the top-left to locate specific metrics.

![Metric over time blocks that have been selected for insertion](metrics-over-time-menu.png){width=90% fig-alt="A screenshot showing several metric over time blocks that have been selected for insertion" .screenshot}
![Metric Over Time blocks that have been selected for insertion](metrics-over-time-menu.png){fig-alt="A screenshot showing several Metric Over Time blocks that have been selected for insertion" .screenshot group="time-metric"}

To preview what is included in a metric, click on it. By default, the actively selected metric is previewed.

7. Click **Insert # Metrics(s) Over Time to Document** when you are ready.

8. After inserting the metrics into your document, review the data to confirm that it is accurate and relevant.

![Example F1 Score — Metric Over Time visualization](example-f1-score.png){fig-alt="A screenshot showing an example F1 Score — Metric Over Time visualization" .screenshot group="time-metric"}


## View metric over time metadata

@@ -60,6 +71,7 @@ After you have added metrics over time to your document, you can view the follow
- Date and time the metric was recorded
- Who updated the metric
- The numeric value of the metric
- The metric's thresholds
- Any additional parameters

1. In the left sidebar, click **{{< fa cubes >}} Inventory**.
@@ -68,11 +80,11 @@

3. In the left sidebar that appears for your model, click **{{< fa book-open >}} Documentation** or **{{< fa desktop >}} Ongoing Monitoring**.

4. Locate the metric whose metadata you want to view.

5. Under the metric's name, click the **Data** tab.

![](metric-over-time-data.png){width=85% fig-alt="A screenshot showing the Data tab within a metric over time" .screenshot}
![Example Data tab within a Metric Over Time](metric-over-time-data.png){fig-alt="A screenshot showing an example Data tab within a Metric Over Time" .screenshot}


## What's next
@@ -85,7 +97,7 @@ After you have added metrics over time to your document, you can view the follow

<!-- FOOTNOTES -->

[^1]: [Intro to Unit Metrics](/notebooks/how_to/run_unit_metrics.ipynb)
[^1]: [Log metrics over time](/notebooks/how_to/log_metrics_over_time.ipynb)

[^2]: [Manage permissions](/guide/configuration/manage-permissions.qmd)

Binary file modified site/notebooks.zip
Binary file not shown.