GPU Utilization #1252

@Ivan2getdmodelz

Description

I'm running Ubuntu 22.04 and 24.04 with NVIDIA GPUs. While monitoring the GPU, I noticed that each time a file is loaded, the model is loaded into VRAM, and once the transcription is complete, the model is removed from VRAM. For speed and for batch processing (e.g. monitoring the watch folder), a useful feature would be to keep the model in VRAM until either a new or different model is selected, or the program is closed.
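For illustration, here is a minimal sketch of what keeping the model resident could look like, using an openai-whisper-style API (`whisper.load_model` / `model.transcribe`) purely as a stand-in; the project's actual loader and file names below are assumptions, not its real implementation:

```python
import functools
import whisper  # openai-whisper, used here only as a stand-in API


@functools.lru_cache(maxsize=1)
def get_model(model_name: str):
    """Load the model once and keep it resident in VRAM.

    With maxsize=1, requesting a different model_name evicts the previous
    entry and loads the new model; the same name reuses the cached one.
    """
    return whisper.load_model(model_name)


def transcribe_file(path: str, model_name: str = "medium") -> str:
    model = get_model(model_name)   # no reload between files
    result = model.transcribe(path)
    return result["text"]


if __name__ == "__main__":
    # Watch-folder style batch: the model is loaded for the first file and
    # reused for every subsequent file until the process exits.
    for audio_path in ["a.wav", "b.wav", "c.wav"]:  # hypothetical paths
        print(transcribe_file(audio_path))
```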

Labels: enhancement (New feature or request)