Both remote (Wisprflow) and local (Whisper-XXX) speech recognition currently assume English speech as input. I'm fairly certain the models are the multilingual variants and not the ".en" ones: the debug mode shows them as "whisper-XXX: multi". From what I know about Whisper, supporting other languages should just be a matter of passing a language flag (ideally exposed as a language selector in the app).
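For reference, this is roughly what it looks like with the reference openai-whisper tooling (a sketch, assuming the app wraps the standard CLI/Python API; the actual integration may differ):

```shell
# Multilingual checkpoints (no ".en" suffix) accept an explicit language hint.
# CLI example: transcribe German audio with the "small" multilingual model.
whisper recording.wav --model small --language German

# Equivalent in the Python API: model.transcribe("recording.wav", language="de")

# Omitting --language makes Whisper auto-detect the language from the
# first ~30 seconds of audio, which is slower and less reliable than
# an explicit hint, so a language selector in the app would be preferable.
```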
I'm unaware of how this would work for Wisprflow, but they advertise "over 100" languages, so presumably the process would be similar.
PS It'd be good to state in the documentation whether Wisprflow is configured with its private/no-data-retention mode enabled, and whether users will eventually be able to supply their own API keys if needed.