Merge Main into Prod (prod-2.6.7) #261


Merged
186 commits merged on Dec 9, 2024

Changes from all commits
f6a09be
FEAT: run custom R tests
erichare Aug 12, 2024
a6e2f69
First pass at custom tests
erichare Aug 12, 2024
0918f52
Clean up the register / run test functions
erichare Aug 15, 2024
3d5f6e1
Update run_tests.R
erichare Sep 5, 2024
caed67a
Notebook for custom tests, save_model function
erichare Sep 5, 2024
1d1f846
Update implement_custom_tests.Rmd
erichare Sep 7, 2024
115a933
Merge branch 'validmind:main' into r-run-test
erichare Sep 17, 2024
f8fdadc
Custom tests should be fully working
erichare Sep 17, 2024
752d487
Updates for custom tests
erichare Sep 26, 2024
aa03ee9
Add validmind README
cachafla Sep 26, 2024
2c2c878
Updates
cachafla Oct 8, 2024
f49d88d
Change result wrapper output in running test
erichare Oct 14, 2024
dc3adce
Small CRAN check updates
erichare Oct 15, 2024
056dea3
2.5.22
johnwalz97 Oct 15, 2024
45880fa
refactor: getting rid of threshold test objects
johnwalz97 Oct 15, 2024
6b8823a
refactor: converting prompt validation threshold tests to function tests
johnwalz97 Oct 15, 2024
274ff88
refactor: moving prompt evaluation tests to test functions
johnwalz97 Oct 15, 2024
86c3115
refactor: moving to test functions for data validation tests
johnwalz97 Oct 15, 2024
23cd845
Merge branch 'main' into john6797/sc-6725/table-consolidation-step-3-…
johnwalz97 Oct 24, 2024
6cccb64
Merge branch 'main' into john6797/sc-6725/table-consolidation-step-3-…
johnwalz97 Oct 25, 2024
de89e12
Update implement_custom_tests.Rmd
erichare Oct 29, 2024
cc5022d
Merge branch 'r-run-test' into main
erichare Oct 29, 2024
c651650
Merge pull request #1 from erichare/main
erichare Oct 29, 2024
3039ecd
Fix auto description generation
erichare Oct 29, 2024
2718aef
Move R custom tests notebook to code sharing
erichare Oct 30, 2024
9a5ba86
Merge branch 'main' into john6797/sc-6725/table-consolidation-step-3-…
johnwalz97 Oct 31, 2024
017358f
chore: linter complaint
johnwalz97 Oct 31, 2024
5224867
Merge branch 'main' into john6797/sc-6728/table-consolidation-step-6-…
johnwalz97 Oct 31, 2024
d2eee45
Merge branch 'main' into john6797/sc-6728/table-consolidation-step-6-…
johnwalz97 Nov 1, 2024
1bdac9f
refactor: more classes converted to functions
johnwalz97 Nov 1, 2024
d8afb01
Merge remote-tracking branch 'origin/prod' into prod-2.5.24-to-main
actions-user Nov 1, 2024
44c1b15
Merge pull request #221 from validmind/prod-2.5.24-to-main
cachafla Nov 1, 2024
012abff
Generate docs
github-actions[bot] Nov 1, 2024
2f5dddf
refactor: finished updating nlp tests
johnwalz97 Nov 4, 2024
47ec50b
fix: fixing linter complaint
johnwalz97 Nov 4, 2024
221c2cb
Merge branch 'main' into john6797/sc-6728/table-consolidation-step-6-…
johnwalz97 Nov 4, 2024
b58d023
refactor: removing built in classes for tests and metrics
johnwalz97 Nov 4, 2024
7d68892
refactor: another big batch of tests
johnwalz97 Nov 4, 2024
e8e5655
refactor: another large batch of tests
johnwalz97 Nov 4, 2024
69f98a2
refactor: almost done with metrics
johnwalz97 Nov 4, 2024
6184191
fix: linter complaints
johnwalz97 Nov 4, 2024
d5bd9db
refactor: finished all tests
johnwalz97 Nov 5, 2024
1ee3f2f
refactor: linter complaints
johnwalz97 Nov 5, 2024
392d8a1
fix: threshold test comparisons not working with param_grid changes
johnwalz97 Nov 5, 2024
0cbfd9f
chore: Update version to 2.5.25
johnwalz97 Nov 5, 2024
5aeed7d
refactor: commiting changes for now to switch to bug fix for threshol…
johnwalz97 Nov 5, 2024
1a28e2c
fix: linter complaint
johnwalz97 Nov 5, 2024
6f938cf
Merge pull request #224 from validmind/john/fix-broken-threshold-comp…
johnwalz97 Nov 5, 2024
d3bb17d
Merge branch 'main' into john6797/sc-6728/table-consolidation-step-6-…
johnwalz97 Nov 5, 2024
3c1efdc
Generate docs
github-actions[bot] Nov 5, 2024
de9ef5d
refactor: making more progress to getting everything working
johnwalz97 Nov 6, 2024
7a13ade
refactor: more changes to get things working and reworking output han…
johnwalz97 Nov 6, 2024
298363e
refactor: working on unit tests
johnwalz97 Nov 6, 2024
5cd0da6
Update existing metrics
juanmleng Nov 6, 2024
4493021
Merge pull request #167 from erichare/r-run-test
cachafla Nov 7, 2024
4a5b6c1
Generate docs
github-actions[bot] Nov 7, 2024
6193879
Update ragas metrics and version
juanmleng Nov 7, 2024
381e1b2
Update poetry lock
juanmleng Nov 7, 2024
6aaf1f7
Merge branch 'main'
juanmleng Nov 7, 2024
98e507d
Clear notebook output
juanmleng Nov 7, 2024
cb6fe49
Make context option in AspectCritic
juanmleng Nov 7, 2024
d9264e4
Make contexts optional for ResponseRelevancy
juanmleng Nov 7, 2024
c6e57ed
refactor: getting unit metrics and composite tests working
johnwalz97 Nov 7, 2024
1dc485f
Merge branch 'main' into john6797/sc-6728/table-consolidation-step-6-…
johnwalz97 Nov 7, 2024
14769a6
refactor: linter complaints
johnwalz97 Nov 7, 2024
ba659cf
refactor: check for sensitive data
johnwalz97 Nov 7, 2024
6f589e4
Fix typo in twitter_covid_19
juanmleng Nov 7, 2024
af97ada
#226: Use system tempdir for R save_model
erichare Nov 7, 2024
4882b0b
Update platform.R
erichare Nov 7, 2024
92c7710
2.5.26
juanmleng Nov 7, 2024
0242581
Merge pull request #227 from erichare/bugfix-tempdir-226
cachafla Nov 7, 2024
3052896
Merge pull request #225 from validmind/juan5508/sc-7312/update-rag-te…
juanmleng Nov 7, 2024
a74e39b
Generate docs
github-actions[bot] Nov 7, 2024
d1adb0d
refactor: moving some stuff around
johnwalz97 Nov 8, 2024
26377a9
fix: run integration tests on large runners
eggshell Nov 8, 2024
4c05e33
[SC7201] Spike POC for the option pricer model using quantlib (#219)
AnilSorathiya Nov 11, 2024
d1f84a9
Merge pull request #229 from validmind/cullen/fix-tests
juanmleng Nov 11, 2024
b726d90
Update NLP tests
juanmleng Nov 12, 2024
066598d
Fix lint
juanmleng Nov 12, 2024
9c701c5
2.5.27
juanmleng Nov 12, 2024
8f7768b
Merge pull request #231 from validmind/juan5508/sc-7379/updates-and-s…
juanmleng Nov 12, 2024
416fb2b
Generate docs
github-actions[bot] Nov 12, 2024
7ecf6b0
Ignore some files when running `make docs` on Mac
cachafla Nov 13, 2024
c98847a
Updated product terminology — `developer-framework` repo side (#233)
validbeck Nov 13, 2024
2e6f026
Generate docs
github-actions[bot] Nov 13, 2024
a6cb7b1
Merge pull request #234 from validmind/cachafla/fix-docs
cachafla Nov 14, 2024
b8f697b
Python docs intro adjust (#235)
validbeck Nov 14, 2024
bb1c6dd
Bye old repo 39 4 ur service (#236)
validbeck Nov 14, 2024
ae622e3
[SC 7399] Serialize date, time, datetime and Date objects to log test…
AnilSorathiya Nov 14, 2024
bdbd674
Run docs on Ubuntu large
cachafla Nov 14, 2024
038f20b
Merge pull request #238 from validmind/cachafla/docs-on-ubuntu
cachafla Nov 15, 2024
48b9b94
Generate docs
github-actions[bot] Nov 15, 2024
1dbbba1
update custom tests name (#237)
AnilSorathiya Nov 15, 2024
2d307f7
refactor: comparison and composite tests are fully working
johnwalz97 Nov 15, 2024
93f8884
refactor: fix summary serialization
johnwalz97 Nov 15, 2024
161c759
refactor: fix test id script
johnwalz97 Nov 15, 2024
4909d5f
refactor: updating test id types and script for generating them
johnwalz97 Nov 15, 2024
3a54a3f
refactor: rename comparison tests file
johnwalz97 Nov 15, 2024
07a2053
refactor: getting everything together
johnwalz97 Nov 19, 2024
9f61456
refactor: trying to get all tests to run locally
johnwalz97 Nov 19, 2024
57a15b9
Merge branch 'main' into john6797/sc-6728/table-consolidation-step-6-…
johnwalz97 Nov 19, 2024
89de5d8
refactor: get unit metrics working properly
johnwalz97 Nov 19, 2024
7921618
refactor: updating dependencies and removing unused deps
johnwalz97 Nov 19, 2024
78bcc3b
refactor: make ai pr explain limit number of tokens from diff
johnwalz97 Nov 19, 2024
9bc182a
fix: output should only need like 1000 tokens
johnwalz97 Nov 19, 2024
9e8c3e0
fix: install tiktoken for ai pr explain workflow
johnwalz97 Nov 19, 2024
c257391
refactor: trying to get unit tests to pass
johnwalz97 Nov 19, 2024
0fe38bc
fix: updating unit tests for vm tests to handle results properly
johnwalz97 Nov 19, 2024
840d651
Updated logos in python docs & readme (#239)
validbeck Nov 19, 2024
87475bc
Generate docs
github-actions[bot] Nov 19, 2024
c023c74
fix: various fixes to tests
johnwalz97 Nov 19, 2024
f3ac69f
fix: various fixes to tests
johnwalz97 Nov 19, 2024
6e6abbb
fix: various fixes to tests
johnwalz97 Nov 19, 2024
93c3ca9
fix: various fixes to tests
johnwalz97 Nov 19, 2024
8d31178
fix: some fixes - really weird stuff going on with the unit tests fo…
johnwalz97 Nov 19, 2024
21ab6b3
fix: reset known failures
johnwalz97 Nov 19, 2024
22cb029
fix: trying a fix for stability analysis which is failing
johnwalz97 Nov 20, 2024
b8cb4a2
fix: trying a fix for stability analysis which is failing
johnwalz97 Nov 20, 2024
8a8c7b0
fix: trying a fix for stability analysis which is failing
johnwalz97 Nov 20, 2024
aad4c1f
Merge branch 'main' into john6797/sc-6728/table-consolidation-step-6-…
johnwalz97 Nov 20, 2024
6c50bf5
2.6.0
johnwalz97 Nov 21, 2024
d63c9da
fix: couple of small fixes
johnwalz97 Nov 21, 2024
3021e55
fix: need to export local test provider class properly
johnwalz97 Nov 21, 2024
dd95baa
refactor: remove notebooks for output templates
johnwalz97 Nov 21, 2024
589051e
fix: edge case where model class is not hashable
johnwalz97 Nov 21, 2024
374ed97
fix: couple of small fixes and improvements
johnwalz97 Nov 21, 2024
84a44de
fix: clean up some warnings
johnwalz97 Nov 21, 2024
062f789
refactor: quick fix to test providers
johnwalz97 Nov 22, 2024
cae3af8
fix: quick fix that I missed in latest change
johnwalz97 Nov 22, 2024
a417251
Merge pull request #216 from validmind/john6797/sc-6728/table-consoli…
johnwalz97 Nov 22, 2024
fbdf227
Generate docs
github-actions[bot] Nov 22, 2024
f774417
2.6.1
johnwalz97 Nov 25, 2024
a2cffae
fix: docs action is looping continuously - ignore updates to docs bui…
johnwalz97 Nov 25, 2024
e9b177a
build(deps-dev): bump tornado from 6.4.1 to 6.4.2
dependabot[bot] Nov 22, 2024
5564e62
chore(deps-dev): bump notebook from 7.0.7 to 7.2.2
dependabot[bot] Nov 22, 2024
def30cb
fix: fix unit metrics run_metric so integration tests notebook passes
johnwalz97 Nov 22, 2024
9561ae9
refactor: move metadata (tasks and tags) into load.py
johnwalz97 Nov 22, 2024
a766a67
feat: adding python environment metadata to test results for troubles…
johnwalz97 Nov 22, 2024
c83639d
fix: linter complaints
johnwalz97 Nov 22, 2024
486c2f4
feat: add more metadata to test result
johnwalz97 Nov 22, 2024
176fc96
Fix comparison tests
juanmleng Nov 26, 2024
d5fef83
Remove dev-framework pip install
juanmleng Nov 26, 2024
6b158b2
fix: couple of fixes from the recent refactor
johnwalz97 Nov 26, 2024
5a1c897
fix: enable test description generation when running a test suite
johnwalz97 Nov 26, 2024
6eb78fa
Generate docs
github-actions[bot] Nov 26, 2024
0d4d7b8
chore: don't run workflows for only docs updates
johnwalz97 Nov 26, 2024
c1698ff
chore: don't run unit tests if just notebook, docs or scripts are upd…
johnwalz97 Nov 26, 2024
eee3cfd
chore: updating workflow spec
johnwalz97 Nov 26, 2024
60546e6
chore: update workflow spec
johnwalz97 Nov 26, 2024
63d33b6
Merge pull request #250 from validmind/john6797/dont-run-workflows-fo…
johnwalz97 Nov 26, 2024
d469979
Generate docs
github-actions[bot] Nov 26, 2024
cdbb30c
chore: removing metrics references
johnwalz97 Nov 27, 2024
f237b61
Updated "Get your code snippet" section for relevant notebooks (#251)
validbeck Nov 28, 2024
f0773a0
Generate docs
github-actions[bot] Nov 28, 2024
f89c233
Fix PSI
juanmleng Nov 28, 2024
181e032
2.6.3
juanmleng Nov 28, 2024
53ac0ce
Merge pull request #253 from validmind/juan5508/sc-7605/input-data-mu…
juanmleng Nov 28, 2024
0efdb80
Generate docs
github-actions[bot] Nov 28, 2024
0c31d9f
Add log image example
juanmleng Nov 29, 2024
b0daba5
Merge branch 'main' into john6797/sc-7220/table-consolidation-step-7-…
johnwalz97 Nov 29, 2024
c628c20
2.6.4
juanmleng Nov 29, 2024
d0ef9e9
Merge pull request #254 from validmind/juan5508/sc-7587/add-logging-i…
juanmleng Nov 29, 2024
858ab1d
Generate docs
github-actions[bot] Nov 29, 2024
b0426d5
Merge branch 'main' into john6797/sc-7220/table-consolidation-step-7-…
johnwalz97 Dec 2, 2024
cb72eff
Squashed commit of the following:
johnwalz97 Dec 2, 2024
b389d51
[SC 7429] Add title as an optional parameter in the run_test (#244)
AnilSorathiya Dec 4, 2024
9c88222
Generate docs
github-actions[bot] Dec 4, 2024
cc0c1e8
[SC 7521] Move capital markets notebooks from code sharing to code sa…
AnilSorathiya Dec 4, 2024
c1e3758
Generate docs
github-actions[bot] Dec 4, 2024
bb2be2b
Improved README.pypi.md (#257)
validbeck Dec 4, 2024
d471a77
Generate docs
github-actions[bot] Dec 4, 2024
e1a3d69
Merge pull request #252 from validmind/john6797/sc-7220/table-consoli…
johnwalz97 Dec 4, 2024
fed91e2
Generate docs
github-actions[bot] Dec 4, 2024
3abb1ff
Merge branch 'main'
juanmleng Dec 5, 2024
6b21800
2.6.6
juanmleng Dec 5, 2024
48fbbb0
Merge pull request #255 from validmind/juan5508/sc-6028/create-basic-…
juanmleng Dec 5, 2024
8b3cc00
Generate docs
github-actions[bot] Dec 5, 2024
ad744bb
[SC 7682] Hotfix for run composite test missing 2 required (#258)
AnilSorathiya Dec 5, 2024
246777e
Generate docs
github-actions[bot] Dec 5, 2024
2cafb6a
Some more outdated client library references (#259)
validbeck Dec 6, 2024
8a38817
Generate docs
github-actions[bot] Dec 6, 2024
525d355
Merge remote-tracking branch 'origin/prod' into prod-2.5.25-to-main
cachafla Dec 9, 2024
6bb9777
Updates
cachafla Dec 9, 2024
2b6896b
Merge pull request #260 from validmind/prod-2.5.25-to-main
cachafla Dec 9, 2024
3c97691
Generate docs
github-actions[bot] Dec 9, 2024
def3efd
Merge remote-tracking branch 'origin/main' into prod-2.6.7
actions-user Dec 9, 2024
11 changes: 11 additions & 0 deletions .github/workflows/ai_explain.py
@@ -3,6 +3,7 @@

from openai import OpenAI
from github import Github
from tiktoken import encoding_for_model

# Initialize GitHub and OpenAI clients
github_token = os.getenv("GITHUB_TOKEN")
@@ -69,6 +70,16 @@
# Prepare OpenAI prompt
prompt = prompt_template.format(diff=diff)

# check number of tokens
encoding = encoding_for_model("gpt-4o-mini")
tokens = encoding.encode(prompt)
print(f"Number of tokens: {len(tokens)}")
# 128k is max tokens for gpt-4o-mini
num_output_tokens = 1000
if len(tokens) > 128000 - num_output_tokens:
tokens = tokens[: 128000 - num_output_tokens]
prompt = encoding.decode(tokens)

# Call OpenAI API
client = OpenAI()
response = client.chat.completions.create(
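The truncation step added to `ai_explain.py` above can be sketched in isolation. This is a minimal version with the tokenizer stubbed out so it runs without external dependencies; the real script encodes with tiktoken's `encoding_for_model("gpt-4o-mini")`, and the toy whitespace tokenizer here is purely illustrative:

```python
# Minimal sketch of the prompt-truncation logic from the hunk above.
# A toy whitespace tokenizer stands in for tiktoken so the sketch is
# self-contained; the constants mirror the diff.
MAX_CONTEXT = 128_000      # context window for gpt-4o-mini
NUM_OUTPUT_TOKENS = 1_000  # room reserved for the model's reply


def truncate_prompt(prompt, encode, decode,
                    budget=MAX_CONTEXT - NUM_OUTPUT_TOKENS):
    """Clip the prompt so prompt tokens + output budget fit the context."""
    tokens = encode(prompt)
    if len(tokens) > budget:
        tokens = tokens[:budget]
    return decode(tokens)


# Toy tokenizer: one token per whitespace-separated word
encode = str.split
decode = " ".join

short = truncate_prompt("explain this diff", encode, decode)
long_prompt = " ".join(["word"] * 200_000)
clipped = truncate_prompt(long_prompt, encode, decode)
print(len(encode(short)), len(encode(clipped)))  # → 3 127000
```

A short prompt passes through unchanged, while an oversized one is clipped to the 127,000-token budget, matching the `128000 - num_output_tokens` bound in the workflow script.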
3 changes: 2 additions & 1 deletion .github/workflows/ai_explain.yaml
@@ -18,12 +18,13 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: "3.x"
python-version: '3.x'

- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install openai
pip install tiktoken
pip install PyGithub

- name: Explain PR
8 changes: 5 additions & 3 deletions .github/workflows/docs.yaml
@@ -1,14 +1,15 @@
# This workflow will install Python dependencies and generate Markdown
# documentation from docstrings using Sphinx. We generate the HTML
# documentation to keep it up to date with the Markdown files

name: Framework API docs
name: Python Library API docs

on:
push:
branches:
- main
- release-v1
paths-ignore:
- 'docs/_build/**'
workflow_dispatch:
inputs:
note:
@@ -21,7 +22,8 @@ permissions:

jobs:
docs:
runs-on: ubuntu-latest
runs-on:
group: ubuntu-vm-large

steps:
- uses: actions/checkout@v4
7 changes: 4 additions & 3 deletions .github/workflows/integration.yaml
@@ -9,15 +9,16 @@ on:
- main
- prod
- release-v1
paths-ignore:
- 'docs/_build/**'

permissions:
contents: read

jobs:
integration:
# Don't rerun the integration tests after docs were generated
if: "!contains(github.event.head_commit.message, 'Generate docs')"
runs-on: ubuntu-latest
runs-on:
group: ubuntu-vm-large

steps:
- uses: actions/checkout@v4
16 changes: 12 additions & 4 deletions .github/workflows/python.yaml
@@ -6,8 +6,16 @@ name: Code Quality and Unit Testing
on:
push:
branches: [main]
paths-ignore:
- 'docs/_build/**'
- 'notebooks/**'
- 'scripts/**'
pull_request:
branches: ["*"]
branches: ['*']
paths-ignore:
- 'docs/_build/**'
- 'notebooks/**'
- 'scripts/**'

permissions:
contents: read
@@ -17,7 +25,7 @@ jobs:
runs-on: ubuntu-latest

steps:
# TODO figure out why dev-framework requires so much disk space and fix that
# TODO figure out why VM Library requires so much disk space and fix that
# - name: Free Disk Space (Ubuntu)
# uses: jlumbroso/free-disk-space@main
# with:
@@ -42,8 +50,8 @@
- name: Set up Python 3.9
uses: actions/setup-python@v5
with:
python-version: "3.9"
cache: "poetry"
python-version: '3.9'
cache: 'poetry'

- name: Install Dependencies
run: |
15 changes: 13 additions & 2 deletions Makefile
@@ -30,15 +30,26 @@ else
poetry run python -m unittest discover tests
endif

test-unit:
poetry run python -m unittest "tests.test_unit_tests"

test-integration:
poetry run python scripts/run_e2e_notebooks.py

docs:
rm -rf docs/_build
poetry run pdoc validmind -d google -t docs/templates --no-show-source --logo https://vmai.s3.us-west-1.amazonaws.com/vm-logo.svg --favicon https://vmai.s3.us-west-1.amazonaws.com/favicon.ico -o docs/_build
ifeq ($(shell uname),Darwin)
poetry run pdoc validmind !validmind.tests.data_validation.Protected* -d google -t docs/templates --no-show-source --logo https://vmai.s3.us-west-1.amazonaws.com/validmind-logo.svg --favicon https://vmai.s3.us-west-1.amazonaws.com/favicon.ico -o docs/_build
else
poetry run pdoc validmind -d google -t docs/templates --no-show-source --logo https://vmai.s3.us-west-1.amazonaws.com/validmind-logo.svg --favicon https://vmai.s3.us-west-1.amazonaws.com/favicon.ico -o docs/_build
endif

docs-serve:
poetry run pdoc validmind -d google -t docs/templates --no-show-source --logo https://vmai.s3.us-west-1.amazonaws.com/vm-logo.svg --favicon https://vmai.s3.us-west-1.amazonaws.com/favicon.ico
ifeq ($(shell uname),Darwin)
poetry run pdoc validmind !validmind.tests.data_validation.Protected* -d google -t docs/templates --no-show-source --logo https://vmai.s3.us-west-1.amazonaws.com/validmind-logo.svg --favicon https://vmai.s3.us-west-1.amazonaws.com/favicon.ico
else
poetry run pdoc validmind -d google -t docs/templates --no-show-source --logo https://vmai.s3.us-west-1.amazonaws.com/validmind-logo.svg --favicon https://vmai.s3.us-west-1.amazonaws.com/favicon.ico
endif

version:
@:$(call check_defined, tag, new semver version tag to use on pyproject.toml)
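The Makefile hunk above branches on `uname` so that `make docs` excludes a problem module pattern on macOS. The platform check can be sketched in Python; the module names and flags mirror the diff, and the command is only printed here rather than executed:

```python
# Sketch of the Makefile's Darwin-conditional pdoc invocation. On macOS,
# pdoc is given an exclusion pattern (`!validmind.tests...Protected*`);
# elsewhere the whole package is documented.
import platform


def pdoc_modules(system: str) -> list:
    """Return the module arguments pdoc receives for a given OS name."""
    if system == "Darwin":
        return ["validmind", "!validmind.tests.data_validation.Protected*"]
    return ["validmind"]


cmd = ["poetry", "run", "pdoc", *pdoc_modules(platform.system()),
       "-d", "google", "-o", "docs/_build"]
print(" ".join(cmd))
```

This keeps the two branches identical except for the module list, which is the same trade-off the `ifeq ($(shell uname),Darwin)` conditional makes in the Makefile.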
22 changes: 11 additions & 11 deletions README.md
@@ -1,30 +1,30 @@
# ValidMind Library

<!-- TODO: put back in when workflows are working properly -->
<!-- [![Code Quality](https://github.com/validmind/developer-framework/actions/workflows/python.yaml/badge.svg)](https://github.com/validmind/developer-framework/actions/workflows/python.yaml)
[![Integration Tests](https://github.com/validmind/developer-framework/actions/workflows/integration.yaml/badge.svg)](https://github.com/validmind/developer-framework/actions/workflows/integration.yaml) -->
<!-- [![Code Quality](https://github.com/validmind/validmind-library/actions/workflows/python.yaml/badge.svg)](https://github.com/validmind/validmind-library/actions/workflows/python.yaml)
[![Integration Tests](https://github.com/validmind/validmind-library/actions/workflows/integration.yaml/badge.svg)](https://github.com/validmind/validmind-library/actions/workflows/integration.yaml) -->

![ValidMind logo](images/ValidMind-logo-color.svg "ValidMind logo")
![ValidMind logo](https://vmai.s3.us-west-1.amazonaws.com/validmind-logo.svg "ValidMind logo")

ValidMind's Python library automates the documentation and validation of your models through a comprehensive suite of developer tools and methods.
The ValidMind Library is a suite of developer tools and methods designed to run validation tests and automate the documentation of your models.

Built to be model agnostic, it works seamlessly with any Python model without requiring developers to rewrite existing code.
Designed to be model agnostic, the ValidMind Library provides all the standard functionality without requiring you to rewrite any functions as long as your model is built in Python.

It includes a suite of rich documentation and model testing capabilities - from dataset descriptions to identifying model weak spots and overfit areas. Through this library, you can automate documentation generation by feeding artifacts and test results to the ValidMind platform.
With a rich array of documentation tools and test suites, from documenting descriptions of your datasets to testing your models for weak spots and overfit areas, the ValidMind Library helps you automate model documentation by feeding the ValidMind Platform with documentation artifacts and test results.

## Contributing to the ValidMind Library

We believe in the power of collaboration and welcome contributions to the ValidMind Library. If you've noticed a bug, have a feature request, or want to contribute a test, please create a pull request or submit an issue and refer to the [contributing guide](README.md#how-to-contribute) below.

- Interested in connecting with fellow AI model risk practitioners? Join our [Community Slack](https://docs.validmind.ai/about/contributing/join-community.html)!

- For more information about ValidMind's open source tests and Jupyter notebooks, read the [Library docs](https://docs.validmind.ai/developer/get-started-developer-framework.html).
- For more information about ValidMind's open-source tests and Jupyter Notebooks, read the [ValidMind Library docs](https://docs.validmind.ai/developer/get-started-validmind-library.html).

## Getting started

### Install from PyPI

To install the library and all optional dependencies, run:
To install the ValidMind Library and all optional dependencies, run:

```bash
pip install validmind[all]
@@ -80,7 +80,7 @@ This will install the dependencies and git hooks for the project.
a new kernel with Jupyter:

```bash
poetry run python -m ipykernel install --user --name dev-framework --display-name "Library"
poetry run python -m ipykernel install --user --name validmind --display-name "ValidMind Library"
```

### Installing LLM validation dependencies
@@ -143,7 +143,7 @@ Single file:
poetry run python scripts/add_test_description.py review validmind/tests/ongoing_monitoring/FeatureDrift.py
```

## Adding a Copyright Header
## Adding a copyright header

When adding new files to the project, you can add the ValidMind copyright header to any files that
are missing it by running:
@@ -152,7 +152,7 @@ are missing it by running:
make copyright
```

## Known Issues
## Known issues

### ValidMind wheel errors

51 changes: 37 additions & 14 deletions README.pypi.md
@@ -1,48 +1,71 @@
# ValidMind Library

ValidMind's Python library automates the documentation and validation of your models through a comprehensive suite of developer tools and methods.
The ValidMind Library is a suite of developer tools and methods designed to automate the documentation and validation of your models.

Built to be model agnostic, it works seamlessly with any Python model without requiring developers to rewrite existing code.
Designed to be model agnostic, the ValidMind Library provides all the standard functionality without requiring you to rewrite any functions as long as your model is built in Python.

It includes a suite of rich documentation and model testing capabilities - from dataset descriptions to identifying model weak spots and overfit areas. Through this library, you can automate documentation generation by feeding artifacts and test results to the ValidMind platform.
With a rich array of documentation tools and test suites, from documenting descriptions of your datasets to testing your models for weak spots and overfit areas, the ValidMind Library helps you automate model documentation by feeding the ValidMind Platform with documentation artifacts and test results.

## Getting started
## What is ValidMind?

### Install from PyPI
ValidMind helps developers, data scientists and risk and compliance stakeholders identify potential risks in their AI and large language models, and generate robust, high-quality model documentation that meets regulatory requirements.

To install the library and all optional dependencies, run:
[The ValidMind AI risk platform](https://docs.validmind.ai/about/overview.html) consists of two intertwined product offerings:

- **The ValidMind Library** — Designed to be incorporated into your existing model development environment, you use the ValidMind Library to run tests and log documentation to the ValidMind Platform. Driven by the power of open-source, the ValidMind Library welcomes contributions to our code and developer samples: [`validmind-library` @ GitHub](https://github.com/validmind/validmind-library)
- **The ValidMind Platform** — A cloud-hosted user interface allowing you to comprehensively track your model inventory throughout the entire model lifecycle according to the unique requirements of your organization. You use the ValidMind Platform to oversee your model risk management process via the customizable model inventory.

### What do I need to get started with ValidMind?

> **All you need to get started with ValidMind is an account with us.**
>
> Signing up is FREE — **[Register with ValidMind](https://docs.validmind.ai/guide/configuration/register-with-validmind.html)**

That's right — you can run tests and log documentation even if you don't have a model available, so go ahead and [**Get started with the ValidMind Library**](https://docs.validmind.ai/developer/get-started-validmind-library.html)!

### How do I do more with the ValidMind Library?

**[Explore our code samples!](https://docs.validmind.ai/developer/samples-jupyter-notebooks.html)**

Our selection of Jupyter Notebooks showcase the capabilities and features of the ValidMind Library, while also providing you with useful examples that you can build on and adapt for your own use cases.

## Installation

To install the ValidMind Library and all optional dependencies, run:

```bash
pip install validmind[all]
```

To just install the core functionality without optional dependencies (some tests and models may not work), run:
To install the ValidMind Library without optional dependencies (core functionality only), run:

```bash
pip install validmind
```

#### Extra dependencies
### Extra dependencies

- **Install with LLM Support**
The ValidMind Library has optional dependencies that can be installed separately to support additional model types and tests.

- **LLM Support**: To be able to run tests for Large Language Models (LLMs), install the `llm` extra:

```bash
pip install validmind[llm]
```

- **Install with Hugging Face `transformers` support**
- **PyTorch Models**: To use pytorch models with the ValidMind Library, install the `torch` extra:

```bash
pip install validmind[transformers]
pip install validmind[torch]
```

- **Install with PyTorch support**
- **Hugging Face Transformers**: To use Hugging Face Transformers models with the ValidMind Library, install the `transformers` extra:

```bash
pip install validmind[pytorch]
pip install validmind[transformers]
```

- **Install with R support (requires R to be installed)**
- **R Models**: To use R models with the ValidMind Library, install the `r` extra:

```bash
pip install validmind[r-support]
2 changes: 1 addition & 1 deletion docs/_build/search.js

Large diffs are not rendered by default.
