I would like to create Whisper model transcriptions from audio held in a memory-based buffer, ideally of type []byte. Currently, the only way to identify audio in this SDK is via openai.AudioRequest.FilePath, so the audio must be stored on the underlying OS's filesystem.

In my current use case, the audio is already in a []byte buffer and never came from a filesystem in the first place, so it feels unnecessary and artificial to write it to a temporary file and delete it afterwards just to use the Whisper API via this project's SDK.

Thanks!
mdarc added a commit to mdarc/go-openai that referenced this issue on May 28, 2023:
* Implement optional io.Reader in AudioRequest (#303) (#265)
* Fix err shadowing
* Add test to cover AudioRequest io.Reader usage
* Add additional test cases to cover AudioRequest io.Reader usage
* Add test to cover opening the file specified in an AudioRequest