## Problem

The CLI accepts `eval` and `submission` as valid `type` values (they are defined in the `TestType` enum), but using them results in a bare `NotImplementedError` with no helpful message for the user.
**Affected locations:**

- `src/inference_endpoint/config/schema.py:633` — raises `NotImplementedError` for EVAL/SUBMISSION
- `src/inference_endpoint/config/runtime_settings.py` — `_from_config_default()` raises `NotImplementedError` for EVAL/SUBMISSION modes
- `src/inference_endpoint/main.py:119` — the `eval()` command raises `NotImplementedError("Accuracy evaluation not yet implemented")`
## Impact

Users who attempt to use `type: eval` or `type: submission` in their YAML config, or who run `inference-endpoint eval`, receive a Python stack trace rather than a clear error message. This is especially confusing because the values appear valid in the schema.
## Expected Behavior

Replace the bare `NotImplementedError` raises with a `CLIError` (from `exceptions.py`) that explains the feature is not yet available and points to the tracking issue for accuracy evaluation (#4).
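A minimal sketch of the proposed change. The `CLIError` class and the `validate_test_type` helper shown here are illustrative stand-ins, not the actual code in `exceptions.py` or `schema.py`:

```python
# Hypothetical sketch of replacing a bare NotImplementedError with a
# user-facing CLIError. The class and helper names are assumptions for
# illustration only, not verified against the repository.

class CLIError(Exception):
    """User-facing CLI error (stand-in for the class in exceptions.py)."""


def validate_test_type(test_type: str) -> str:
    # EVAL and SUBMISSION are accepted by the schema but not implemented yet,
    # so fail with an actionable message instead of a bare NotImplementedError.
    if test_type in ("eval", "submission"):
        raise CLIError(
            f"'type: {test_type}' is not yet supported. "
            "Accuracy evaluation is tracked in issue #4."
        )
    return test_type
```

The same pattern would apply at each of the three affected locations: the raised error names the unsupported value and points the user at the tracking issue rather than surfacing a stack trace.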
## Related