
Evaluate usability with respect to the reference documentation #15

Open

Description

About

General

Explore and validate whether all important use cases and scenarios outlined at MLflow Tracking are covered and work successfully. Where they do not, add corresponding information to the "Caveats" section of the documentation.

Backlog

These are specific items we would like to explore, or which need to be addressed.

  • Fix or evaluate impact of failing test cases: #11 enumerates a few failing test cases; their impact needs to be assessed.
  • Directly use an SQLAlchemy database URI as MLFLOW_TRACKING_URI, without using a Tracking Server. See Where Runs are recorded.
  • Evaluate Automatic Logging using a few examples.
  • Evaluate the databricks and/or databricks-uc (Databricks Unity Catalog) backends / URI schemes.
  • Identify other MLflow subsystems that could be supported by CrateDB.
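As a starting point for the tracking-URI and automatic-logging items above, a minimal sketch is shown below. The CrateDB SQLAlchemy URI (host, port, credentials, and schema) is a placeholder assumption, not a verified configuration; the MLflow calls referenced in the comments (`mlflow.autolog`, `mlflow.start_run`, `mlflow.log_param`) are standard MLflow Tracking APIs.

```python
import os

# Hypothetical CrateDB SQLAlchemy URI -- host, port, credentials,
# and schema are placeholders and need to be adjusted.
tracking_uri = "crate://crate@localhost:4200/?schema=mlflow"

# Point MLflow directly at the database, bypassing a Tracking Server.
os.environ["MLFLOW_TRACKING_URI"] = tracking_uri

# With MLflow installed, a client session would then look like:
#
#   import mlflow
#   mlflow.autolog()                 # the "Automatic Logging" backlog item
#   with mlflow.start_run():
#       mlflow.log_param("batch_size", 32)

print(os.environ["MLFLOW_TRACKING_URI"])
```

Whether MLflow's SQLAlchemy-backed tracking store accepts a `crate://` URI out of the box is exactly what this backlog item needs to verify.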
