Invalid Value: 'file'. Supported values are: 'text', 'image_url', 'input_audio', 'refusal', and 'audio' #396

Open
@jozkee

Description

Service

Azure OpenAI

Describe the bug

I'm trying to use the Files API to upload an image and then ask the model to describe it, but I'm getting an error.

Unhandled exception. System.ClientModel.ClientResultException: HTTP 400 (invalid_request_error: invalid_value)
Parameter: messages[0].content[1].type

Invalid Value: 'file'. Supported values are: 'text', 'image_url', 'input_audio', 'refusal', and 'audio'.
   at OpenAI.ClientPipelineExtensions.ProcessMessage(ClientPipeline pipeline, PipelineMessage message, RequestOptions options)
   at OpenAI.Chat.ChatClient.CompleteChat(BinaryContent content, RequestOptions options)
   at OpenAI.Chat.ChatClient.CompleteChat(IEnumerable`1 messages, ChatCompletionOptions options, CancellationToken cancellationToken)
   at Azure.AI.OpenAI.Chat.AzureChatClient.CompleteChat(IEnumerable`1 messages, ChatCompletionOptions options, CancellationToken cancellationToken)
   at OpenAI.Chat.ChatClient.CompleteChat(ChatMessage[] messages)
   at Program.Main(String[] args) in C:\Users\dacantu\source\repos\ConsoleApp37.OpenAI\ConsoleApp.OpenAI\Program.cs:line 30
   at Program.<Main>(String[] args)

I want to achieve the same result I get when passing the image as base64 (the commented-out line in the code snippet). Is my expectation wrong?

For context, I want to avoid buffering large contents into memory when interacting with the model, as described in dotnet/extensions#5819.


I was also unable to use UploadFileAsync with FileUploadPurpose.UserData; it fails with "Invalid value for purpose.". As per the OpenAI docs, user_data seems more appropriate:

You can upload these files to the Files API with any purpose, but we recommend using the user_data purpose for files you plan to use as model inputs.
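As a diagnostic, and assuming FileUploadPurpose is an extensible enum with a public string constructor in this beta (I haven't verified that against 2.2.0-beta.4), the purpose could be passed as a raw string to check whether the rejection comes from the service rather than the SDK. This sketch reuses the fileClient and fileStream from the repro below:

```csharp
// Hypothetical sketch: passes "user_data" as a raw purpose string, assuming
// FileUploadPurpose exposes a string constructor. If the same "Invalid value
// for purpose." error comes back, the rejection is server-side, not in the SDK.
FileUploadPurpose userDataPurpose = new FileUploadPurpose("user_data");
ClientResult<OpenAIFile> uploaded = await fileClient.UploadFileAsync(
    fileStream, "bunny.jpg", userDataPurpose);
```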

Steps to reproduce

using System.ClientModel;
using Azure.AI.OpenAI;
using Azure.Identity;
using OpenAI;
using OpenAI.Chat;
using OpenAI.Files;

class Program
{
    static async Task Main(string[] args)
    {
        // Initialize the OpenAI client
        OpenAIClient openAIClient = new AzureOpenAIClient(
            new Uri("https://my-endpoint.openai.azure.com/"),
            new DefaultAzureCredential());

        ChatClient chatClient = openAIClient.GetChatClient("gpt-4o-mini");

        // Upload the file and then reference it by ID.
        // (FileUploadPurpose.UserData is rejected with "Invalid value for purpose.",
        // so Assistants is used here instead.)
        OpenAIFileClient fileClient = openAIClient.GetOpenAIFileClient();
        using Stream fileStream = File.OpenRead(@"C:\Users\dacantu\Desktop\pancake_bunny.jpg");
        ClientResult<OpenAIFile> fileResult = await fileClient.UploadFileAsync(fileStream, "bunny.jpg", FileUploadPurpose.Assistants);

        var message = new UserChatMessage([
            ChatMessageContentPart.CreateTextPart("Describe this image."),
            ChatMessageContentPart.CreateFilePart(fileResult.Value.Id),
            //ChatMessageContentPart.CreateImagePart(new BinaryData(File.ReadAllBytes(@"C:\Users\dacantu\Desktop\pancake_bunny.jpg")), "image/jpeg"),
        ]);

        // Throws here: HTTP 400 (invalid_request_error: invalid_value) for
        // messages[0].content[1].type.
        ChatCompletion completion = chatClient.CompleteChat(message);
        foreach (ChatMessageContentPart contentPart in completion.Content)
        {
            Console.Write(contentPart.Text);
        }
    }
}
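For reference, this is the inline path that does work for me, expanded from the commented-out line in the snippet above. It buffers the entire file into memory, which is exactly what I'm trying to avoid by referencing an uploaded file instead:

```csharp
// Working alternative: send the image bytes inline as an image content part
// rather than referencing an uploaded file by ID.
var inlineMessage = new UserChatMessage([
    ChatMessageContentPart.CreateTextPart("Describe this image."),
    ChatMessageContentPart.CreateImagePart(
        new BinaryData(File.ReadAllBytes(@"C:\Users\dacantu\Desktop\pancake_bunny.jpg")),
        "image/jpeg"),
]);

ChatCompletion completion = chatClient.CompleteChat(inlineMessage);
Console.Write(completion.Content[0].Text);
```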

OS

Windows

.NET version

.NET 9.0

Library version

2.2.0-beta.4, both OpenAI and Azure.AI.OpenAI

Metadata

Labels

question (Further information is requested)
