
bug: EVAL and SUBMISSION test types raise bare NotImplementedError with no user-friendly message #218

@arekay-nv

Description


Problem

The CLI accepts eval and submission as valid type values (they are defined in the TestType enum), but using them results in a bare NotImplementedError with no helpful message for the user.

Affected locations:

  • src/inference_endpoint/config/schema.py:633 — raises NotImplementedError for EVAL/SUBMISSION
  • src/inference_endpoint/config/runtime_settings.py — _from_config_default() raises NotImplementedError for EVAL/SUBMISSION modes
  • src/inference_endpoint/main.py:119 — eval() command raises NotImplementedError("Accuracy evaluation not yet implemented")

Impact

Users who attempt to use type: eval or type: submission in their YAML config, or run inference-endpoint eval, receive a Python stack trace rather than a clear error message. This is especially confusing since the values appear valid in the schema.

Expected Behavior

Replace bare NotImplementedError raises with CLIError (from exceptions.py) that explains the feature is not yet available and points to the tracking issue for accuracy evaluation (#4).
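A minimal sketch of the proposed fix. Only the EVAL/SUBMISSION enum members, the CLIError class name, and issue #4 come from this report; the PERF member, the exact import path, and the validate_test_type helper are illustrative assumptions, not the project's actual API:

```python
from enum import Enum


class CLIError(Exception):
    """Stand-in for the project's exceptions.CLIError (user-facing error)."""


class TestType(Enum):
    PERF = "perf"  # hypothetical placeholder for the supported type(s)
    EVAL = "eval"
    SUBMISSION = "submission"


def validate_test_type(test_type: TestType) -> TestType:
    """Reject not-yet-implemented test types with a clear, actionable message."""
    if test_type in (TestType.EVAL, TestType.SUBMISSION):
        # Instead of a bare NotImplementedError, raise CLIError with guidance
        # and a pointer to the tracking issue for accuracy evaluation.
        raise CLIError(
            f"type: {test_type.value} is not yet supported. "
            "Accuracy evaluation is tracked in issue #4."
        )
    return test_type
```

With this shape, `type: eval` in a YAML config produces a one-line CLI error instead of a stack trace, while supported types pass through unchanged.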


Metadata


Labels

  • priority: P0 — Critical: blocks release or users
  • type: bug — Something isn't working
