A new UTAU resampler for virtual singers, based on pc-nsf-hifigan.
For Jinriki, please use our Hachimisampler instead.
Hifisampler is a modified version of straycatresampler, replacing the original WORLD vocoder with pc-nsf-hifigan.
Pc-nsf-hifigan uses a neural network to upsample the input features, producing clearer audio than traditional vocoders. It improves on the original nsf-hifigan by supporting f0 inputs that do not match the mel spectrogram, which makes it suitable for UTAU resampling.
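To illustrate the idea (a conceptual sketch only, not hifisampler's actual code: `analyze`, `extract_f0`, and `vocoder` are hypothetical placeholders), resampling to a new pitch amounts to editing the f0 curve while reusing the cached mel spectrogram:

```python
# Conceptual sketch: why a pitch-conditioned vocoder that tolerates a
# mel/f0 mismatch is convenient for UTAU-style resampling. The commented
# calls are placeholders, not hifisampler's real API.
import numpy as np

def shift_f0(f0_hz: np.ndarray, semitones: float) -> np.ndarray:
    """Shift an f0 curve by a number of semitones (12-tone equal temperament)."""
    return f0_hz * (2.0 ** (semitones / 12.0))

# mel = analyze(wav)                         # mel spectrogram, extracted once and cached
# f0 = extract_f0(wav)                       # original pitch curve
# wav_out = vocoder(mel, shift_f0(f0, 3.0))  # same mel, new pitch
```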
- Install Python 3.10 and run the following command (conda is strongly recommended for easier environment management):
  ```
  pip install -r requirements.txt
  ```
- Download the CUDA build of PyTorch from the PyTorch website (if you are certain you will only use the ONNX version, the CPU build of PyTorch is enough). You can verify which build is active with the quick check shown after this list.
- Fill out config.toml. (For now, config.toml, hifisampler.exe, hifiserver.py, and launch_server.py must all be in the same directory; it is suggested to keep the original file hierarchy from the release archive.)
- Download the release, unzip it, and run 'hifiserver.py'.
- Set UTAU's resampler to hifisampler.exe.
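If you are unsure which PyTorch build ended up installed, a quick check like the one below (an optional sanity check, not part of hifisampler itself) shows whether the GPU will be used:

```python
# Optional sanity check: confirm that the installed PyTorch is a CUDA build
# before starting hifiserver.py.
import torch

print(torch.__version__)          # CUDA builds report something like "2.x.x+cu121"
print(torch.cuda.is_available())  # True -> inference can run on the GPU
```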
- g: Adjust gender/formants.
  - Range: -600 to 600 | Default: 0
- B: Adjust breath/noise.
  - Range: 0 to 500 | Default: 100
- V: Adjust voice/harmonics.
  - Range: 0 to 150 | Default: 100
- G: Force the feature cache to be regenerated (ignores any existing cache).
  - No value needed
- Me: Enable Mel spectrum loop mode.
  - No value needed
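As a usage example (assuming the usual UTAU convention, inherited from straycatresampler, of concatenating flags into a single string in the note's Flags field), a flag string such as `g-100B120V90Me` would apply a gender shift of -100, breathiness 120, harmonic level 90, and enable Mel spectrum loop mode.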
- yjzxkxdn
- openvpi, for pc-nsf-hifigan
- MinaminoTenki