Releases: octue/octue-sdk-python
Fix google auth library imports and `grpcio` version
Contents (#729)
Fixes
- Fix google auth library imports
Dependencies
- Use new version of `grpcio` to avoid spurious warning
Switch to Kueue service backend
Summary
This release switches to running Twined services on Kubernetes + Kueue. This brings the following features:
- Queue questions so they're not just dropped if the service backend is overwhelmed
- Run questions that take any amount of time (specifically opening us up to runs > 1 hour)
- Request arbitrary compute resources per question (CPU, memory, storage etc.)
- Stop extraneous question reruns by allowing us to control when we acknowledge question events
- Monitor running questions individually
- Make it easier to run questions on providers other than Google in the future (i.e. on any Kubernetes cluster)
Contents (#723)
IMPORTANT: There are breaking changes.
New features
- #709 (see PR for list of breaking changes)
- Authenticate requests to service registries
Enhancements
- Add `allow_not_found` option to `ServiceConfiguration.from_file`
- Add default event store ID to `get_events`
- Increase default maximum heartbeat interval to 360s
Dependencies
- Remove `gunicorn` and `Flask` dependencies
Operations
- Replace Terraform configuration with new `terraform-octue-twined-core` module
Testing
- Move deployment test to `octue/example-service-kueue` repository
Improve output validation/upload logging
Contents (#691)
Enhancements
- Improve output validation/upload logging
Fixes
- Avoid attempting to upload output manifest if it's `None`
Log warning when runtime timeout is near
Contents (#690)
Enhancements
- Log warning when running on Cloud Run and the runtime timeout (1 hour) is near
Dependencies
- Add `pydash` dependency
Refactoring
- Replace custom nested attribute functions with `pydash` usage
Allow optional strands
Contents (#688)
New features
- Allow optional strands
Dependencies
- Use `twined=0.6.0`
Fixes
- Skip dataset validation for missing optional manifests
Add documentation on updating Octue services
Contents (#683)
Operations
- Use latest `ruff` pre-commit check
Dependencies
- Add `ruff` to dev dependencies
Other
- Add doc on updating an Octue service
Switch to ruff developer tooling
Contents (#682)
Operations
- Switch from `flake8`, `black`, and `isort` to `ruff`
Dependencies
- Remove old formatters/linters and add `ruff` config
Refactoring
- Apply `ruff` to all files
Check for service revision existence
Contents (#680)
IMPORTANT: There is 1 breaking change.
Enhancements
- 💥 BREAKING CHANGE: Use cloud URIs by default for datasets in output manifests
- Add comments around checking for service revision existence
- Improve error when `octue.services` topic doesn't exist
Fixes
- Raise error if service revision subscription doesn't exist when no service registry is in use
- Remove `octue.services` prefix from subscription names
Refactoring
- Avoid repeated conversion to Pub/Sub ID for a service
Upgrade instructions
💥 Use cloud URIs by default for datasets in output manifests
Set `use_signed_urls_for_output_datasets` to `True` in the app configuration to keep using signed URLs for datasets in output manifests.
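As a sketch, keeping the previous signed-URL behaviour might look like this in the app configuration file (the filename `app_configuration.json` and the key's position at the top level are assumptions; any other keys your service uses are omitted, and JSON booleans are lowercase):

```json
{
  "use_signed_urls_for_output_datasets": true
}
```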
Revert analysis output location removal
Contents (#677)
Fixes
- Pass output arguments into `Analysis` and use them
Reversions
- Revert "REF: Stop storing `output_location` in `Analysis`"
Make signed URLs for output datasets optional
Contents (#676)
IMPORTANT: There is 1 breaking change.
Enhancements
- Allow using non-signed URLs for datasets in output manifest (controllable via the app configuration file)
- Handle all `requests` errors while:
  - Getting cloud metadata for datafiles and datasets
  - Downloading datafiles
Fixes
- Avoid trying to access buckets for URL datasets
Refactoring
- 💥 BREAKING CHANGE: Stop storing `output_location` in `Analysis`
- Remove unnecessary finalisation from template apps
Upgrade instructions
💥 Stop storing `output_location` in `Analysis`
If calling `Analysis.finalise` manually, either stop doing this and rely on the `output_location` field of the app configuration, or explicitly pass in the `upload_output_datasets_to` argument.
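As a sketch of the first option above (relying on the app configuration rather than calling `Analysis.finalise` manually), the `output_location` field might be set like this (the filename `app_configuration.json` and the bucket path are assumptions for illustration):

```json
{
  "output_location": "gs://my-bucket/outputs"
}
```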