Docker and Infrastructure
Determine how best to design image configurations to add routing support
Ensure routing support is optionally available in ngen-based worker image(s)
Add/improve mechanism(s) for controlling specific ngen repo commit
Design build, backend, and/or UI mechanisms to support ngen workflows using externally provided/referenced BMI module code
Investigate possible support of custom provided Docker image or Dockerfile
Separate Docker functionality to isolated package
Review and update Python package build process and dist_package.sh script
Use Docker Secrets for managing service SSL certificates
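For the item on controlling the specific ngen repo commit, one possible mechanism is a Dockerfile `ARG` that the build wrapper pins to an exact commit hash. The sketch below assembles such a build command; the `NGEN_COMMIT` argument name is a placeholder assumption, not an existing DMOD convention, though `--build-arg` itself is a standard `docker build` flag.

```python
import re

# Accept abbreviated (7+) or full (40) hex git commit hashes.
COMMIT_RE = re.compile(r"^[0-9a-f]{7,40}$")

def ngen_build_command(image_tag: str, ngen_commit: str, context: str = ".") -> list:
    """Assemble a 'docker build' command that pins the ngen source checkout.

    Assumes the worker Dockerfile declares 'ARG NGEN_COMMIT' and checks out
    that ref after cloning ngen (a hypothetical convention for illustration).
    """
    if not COMMIT_RE.fullmatch(ngen_commit):
        raise ValueError("not a git commit hash: {!r}".format(ngen_commit))
    return [
        "docker", "build",
        "--build-arg", "NGEN_COMMIT={}".format(ngen_commit),
        "-t", image_tag,
        context,
    ]
```

Rejecting anything that is not a literal hash (branch names, tags) keeps image builds reproducible by construction.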
Job and Resource Management
Fully incorporate monitor-service into running main stack
Enhance job startup monitoring to automatically restart hung workers that fail to start on their own
Create (i.e., recreate, but with new names) allocation paradigms that assign multiple MPI ranks per container
Add client package support for requesting a job and designating a subset of the entire hydrofabric
Dynamically deactivate/reactivate resources on a Swarm node when the node's status switches to/from “pause”
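The hung-worker restart item above reduces to tracking each launched worker against a startup deadline. A minimal sketch, assuming a hypothetical record type (not an existing DMOD class) that the monitor updates when a worker reports it has started:

```python
import time
from dataclasses import dataclass

@dataclass
class WorkerStartRecord:
    """Tracks one job worker from launch until it reports 'started'.

    Illustrative structure only; field names are assumptions.
    """
    container_id: str
    launched_at: float  # epoch seconds when the container was launched
    started: bool = False

def workers_to_restart(records, startup_timeout, now=None):
    """Return ids of workers that have not started within the timeout."""
    now = time.time() if now is None else now
    return [
        r.container_id
        for r in records
        if not r.started and (now - r.launched_at) > startup_timeout
    ]
```

A monitor loop would periodically call `workers_to_restart` and re-launch (or replace) the returned containers; passing `now` explicitly keeps the decision logic deterministic and testable.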
GUI Development Tasks
Integrate current GUI code with dmod.client package
Finish GUI views for model configuration
Finalize domain selection GUI view
Component for configuring "regular" BMI formulations
Component for configuring multi-BMI formulations
Support the uploading of all required catchment-specific model formulation BMI initialization files
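The two formulation-configuration components above (regular BMI and multi-BMI) both need shared validation before a config is submitted. The sketch below assumes the ngen realization-config shape where each formulation has a `name` (e.g. `bmi_c`, `bmi_fortran`, or `bmi_multi`) and a `params` dict, with multi-BMI members nested under `params["modules"]`; treat the exact key names as assumptions to confirm against ngen's schema.

```python
def validate_formulation(formulation):
    """Collect validation errors for a single formulation entry.

    Returns an empty list when the entry looks well-formed; recurses
    into nested multi-BMI module entries.
    """
    errors = []
    name = formulation.get("name")
    params = formulation.get("params")
    if not name:
        errors.append("missing 'name'")
    if not isinstance(params, dict):
        errors.append("missing or non-dict 'params'")
        return errors
    if name == "bmi_multi":
        modules = params.get("modules")
        if not isinstance(modules, list) or not modules:
            errors.append("bmi_multi requires a non-empty params['modules'] list")
        else:
            for i, mod in enumerate(modules):
                errors.extend("modules[{}]: {}".format(i, e)
                              for e in validate_formulation(mod))
    elif "init_config" not in params:
        errors.append("single-BMI formulation missing params['init_config']")
    return errors
```

Returning a list of errors (rather than raising on the first) lets a GUI view highlight every problem field at once.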
Data Management
Complete client package/CLI support for dataset upload/download functionality
Support in data-service for deriving AORC_CSV format datasets that are a temporal subset of the source dataset
Support in data-service for deriving AORC_CSV format datasets that cover a subset of the catchments of the source dataset
Support in data-service for deriving datasets to fulfill otherwise unfulfillable requirements for jobs (along with updating data availability check functionality)
Support in data-service for automated routines to clean up temporary datasets
Make sure the object-store-dataset-based Docker volume created for a job's output datasets is cleaned up upon job completion
Implement mechanism (e.g., different type, different format, new index within format) for distinguishing hydrofabric datasets that have been subdivided (keep in mind that a subdivided hydrofabric depends on both the original hydrofabric and the partitioning configuration)
Support in data-service to retrieve and process raw forcing data from remote services or locations
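The temporal-subset derivation above amounts to a filtered row copy over an AORC_CSV file. A minimal sketch using only the standard library; the `Time` column name and timestamp format are assumptions about the AORC_CSV format, not confirmed details:

```python
import csv
from datetime import datetime

TIME_COL = "Time"              # assumed header name of the timestamp column
TIME_FMT = "%Y-%m-%d %H:%M:%S"  # assumed timestamp format

def subset_time_range(src, dst, start, end):
    """Copy rows of an AORC_CSV stream whose timestamp is in [start, end).

    src/dst are text file-like objects; returns the number of data rows
    written to dst.
    """
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    written = 0
    for row in reader:
        t = datetime.strptime(row[TIME_COL], TIME_FMT)
        if start <= t < end:
            writer.writerow(row)
            written += 1
    return written
```

The same pattern extends to the catchment-subset case by selecting files (or columns) per catchment id instead of filtering rows by time; a half-open `[start, end)` interval makes adjacent subsets concatenate without duplicate rows.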
Ticket to collect issues that are part of the next release (likely to be represented by a new GitHub Project).
Evaluation, Visualization, and Calibration
Bug Fixes
Other Required Items
Optional Items