Conversation

eschmidbauer (Contributor)
This change makes it possible to use the OpenAI client SDK with the whisper.cpp server. Start the server with:
./server --host 0.0.0.0 --port 9010 -nt \
     -m models/ggml-large-v3-q5_0.bin --request-path /audio/transcriptions --inference-path ""
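A quick sketch of why these flags matter (my reading of the flags above, not from the PR itself): the OpenAI SDK posts transcription requests to `<base_url>/audio/transcriptions`, so setting `--request-path /audio/transcriptions` together with an empty `--inference-path` makes the whisper.cpp server listen on exactly the route the SDK will call.

```python
# Assumed URL composition: the server route is request-path + inference-path,
# and the SDK appends "/audio/transcriptions" to its base_url.
base_url = "http://localhost:9010"      # --host / --port from the command above
request_path = "/audio/transcriptions"  # --request-path
inference_path = ""                     # --inference-path ""

endpoint = base_url + request_path + inference_path
print(endpoint)  # http://localhost:9010/audio/transcriptions
```

If either flag were left at its default, the SDK's request path and the server's route would not line up and requests would 404.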
Then transcribe from the client:

from openai import OpenAI

client = OpenAI(api_key="1", base_url="http://localhost:9010")

# Pass the open file handle directly; calling file.read() first would leave
# the handle at EOF and upload an empty body.
with open("test.wav", "rb") as file:
    transcription = client.audio.transcriptions.create(
        model="models/ggml-large-v3-q5_0.bin",
        file=file,
    )
    print(transcription.text)

@ggerganov ggerganov merged commit bec9836 into ggml-org:master Jul 8, 2024
iThalay pushed a commit to iThalay/whisper.cpp that referenced this pull request Sep 23, 2024