Add procedural and parametrizable audio support #3394
I'm totally for this. I once planned to try something similar as an add-on for chiptunes specifically, but I've never found the time to actually do so (so far). First thought about the proposal above: I don't think this would need a timeline (nor should it have one). Instead there should probably be a Delay modulator node or something like that (along with others to cut parts of streams, too). But just to put this in for discussion – random idea – it might even be worth making this fully scriptable (think VisualShader vs. Shader). The big thing would be the framework either way, considering most processing can be broken down into rather simple math operations. Quick example for a sine generator:

```
audio_node generator; // has no input sample

input float frequency = 10000.0f;
input float amplitude = 1.0f;

// Basically called once per requested sample
void modulate() {
    SAMPLE.lr = amplitude * sin(TIME * 2 * PI * frequency);
}
```

Or some stereo balance modulator:

```
audio_node modulator; // gets a sample as input

input float balance = 0.5f; // 0.0 = full left, 1.0 = full right

void modulate() {
    sample s = mono(SAMPLE); // reduce to mono
    SAMPLE.l = (1.0f - balance) * s;
    SAMPLE.r = balance * s;
}
```

So one script would define a single node (input or modulator), but full scripts could also be merged into single "complex" nodes again. Maybe I'm overthinking, but I'd consider this very neat (and also very useful in education/audio processing) and maybe something for GPU processing, too (as long as no samples are based on previous ones)?
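To make the per-sample node model above concrete, here is a rough Python sketch. This is not the proposed syntax; the sample rate, function names, and the way nodes are chained are all illustrative assumptions:

```python
import math

SAMPLE_RATE = 44100  # assumed mix rate, illustrative only

def sine_generator(frequency=440.0, amplitude=1.0, num_frames=1024):
    """Generator node: emits (left, right) frames, one per requested sample."""
    frames = []
    for n in range(num_frames):
        t = n / SAMPLE_RATE
        v = amplitude * math.sin(2 * math.pi * frequency * t)
        frames.append((v, v))
    return frames

def balance_modulator(frames, balance=0.5):
    """Modulator node: 0.0 = full left, 1.0 = full right."""
    out = []
    for left, right in frames:
        mono = 0.5 * (left + right)  # reduce to mono
        out.append(((1.0 - balance) * mono, balance * mono))
    return out

# Chaining nodes: generator -> modulator
stereo = balance_modulator(sine_generator(frequency=220.0), balance=0.25)
```

The point is only that each node is a small per-sample function, so a graph editor (or a script) just has to decide how they compose.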
I would like to point out that we have |
Maybe PureData (FOSS) and MaxMSP can give some ideas. They are both used to create interactive projects with procedural sounds/music/synthesizers. I have seen another piece of software that targets procedural music creation for live events, either via nodes or via code: it's called PraxisLive (FOSS). You can also make visual effects or shaders in it; it is made by a musician-programmer.
Also, what do you think about this FOSS project: https://github.com/soul-lang/SOUL? Perhaps it could be leveraged as the "audio shader" language.
This is awesome :)! Some thoughts from my perspective as a Technical Game Sound Designer who's worked on a number of Unity/UE4 projects using both their native audio and FMOD/Wwise (and one Godot project so far :)): One of the key features that makes FMOD/Wwise so attractive, apart from all the convenient mixing and helpful profiling tools, is that they're event driven. An event can do a plethora of things, ranging from simple playback of a random wave container to setting global parameters that affect many currently playing sounds, stopping other sounds, etc. The designers are given a lot of freedom while the code-side requirements are fairly non-complex most of the time. Where "procedural" or parameter-driven audio gets very interesting in a game audio context is sounds affecting other sounds: when the system can talk to itself, update state, and talk back to the game loop so gameplay can react to sound instead of the other way around. The one new native audio implementation that I'm most excited about currently is UE5's new MetaSounds: https://docs.unrealengine.com/5.0/en-US/AudioFeatures/MetaSounds/MetasoundsReferenceGuide/ Maybe there's some inspiration/ideas to be found there that could fit the Godot way of doing things.
@MichaelKlier I have seen some of the work that Epic is doing on MetaSounds, but I have the feeling it is largely too complex for a musician or sound designer to use. For something so complex, I think it would be easier to just use PureData or similar as an add-on. I wanted to focus on having a simpler workflow where you can still do most of the most common audio manipulation tasks.
Strongly agree. Implementing an interface and a workflow that musicians and sound designers are used to (for example, a DAW like Reaper and middleware like Wwise) would be an excellent improvement.
MetaSounds is too complex to use; it would be good enough for Godot to have a solution that is capable of replacing FMOD/Wwise.
@reduz Totally agree that MetaSounds is certainly complex. I didn't mean to imply doing something similar, merely suggesting to take inspiration/ideas from some of the functionality it implements :). MetaSounds is the only native audio implementation that I would personally consider using over 3rd-party middleware (native UE4 is capable as well, but still limited). Again, always considering the actual requirements of the project; not every project needs middleware. I guess it really depends on what the end goal is. If it's about getting up to par with audio middleware out of the box, to make it an attractive solution for sound designers to use, then it probably needs to offer at least the core set of features these tools have. On that note, they only have very basic implementations of actual sound generators (sine waves etc.), with Wwise having some more advanced ones for certain types of ambient sounds like wind, and I don't know many people who are using them. And procedurally generating sounds like wind/fire etc. using just sound generators paired with filter nodes, envelopes, LFOs etc. is a complex task on its own. By planning to offer this kind of functionality, if I read the outline correctly, you're kind of entering the PD/MaxMSP/MetaSounds realm anyway. I believe this discussion could benefit from actively bringing in more voices of audio people who work with these tools on a daily basis.
Regarding sound generators: I wouldn't expect Godot to have any complex generators, but I certainly think it would be nice to have some very simple out-of-the-box solution to generate basic effects or sounds, similar to what bfxr and similar tools can do.
Without a doubt this would be something that I would use in all my projects. BFXR is ideal for small or rapid development projects. Seeing something like this integrated into the engine, together with the initial proposal, would cover all the audio basics, making it less and less necessary to leave Godot to do anything, even something simple.
Topic: AudioStreamSample.data compatibility with AudioStreamGeneratorPlayback. Just curious, has the issue proposed here been addressed, for interoperability between AudioStreamSample.data and AudioStreamGeneratorPlayback?
Recently, CLAP, an audio plugin API, was released by Bitwig (a modern DAW developer) and u-he (a long-established synthesizer developer): https://github.com/free-audio/clap It is MIT-licensed, unlike VST, and appears to be compatible with Godot's license. Although it depends on how popular the CLAP format becomes, I think it has the potential to cover most procedural audio use cases by adding a MIDI Input Node and a CLAP Plugin Node to the AudioGraph.
There is a language named Glicol. It's pretty fun stuff, and the language it uses is simple :)
@theraot wrote:
@GeorgeS2019 wrote:
I never made a new proposal for it, because while it would be a convenient function, it can be implemented in code (albeit with a small bit of hassle). One thing I am considering proposing though is to request that Haven't looked into what happens if you feed more frames than it wants or if this can buy headspace so the same size buffer can be reused over and over again (just skip enough process frames), but maybe at least being able to take an |
Is this project still in the works somehow? I find it extremely interesting and I think it would add a unique edge to Godot over other engines. If not implementing something from scratch, maybe a very reliable way to work with Pure Data (also FOSS)?
Sharing some ideas about this proposal: SuperCollider (FOSS) is something amazing and does what this proposal is pointing to, although only through code. Maybe its server and language could be used within Godot, as it is intended to be used for interactivity and real-time work.
I would love to add support for Godot in the Heavy compiler (Pure Data to C/C++ conversion): https://github.com/Wasted-Audio/hvcc We currently have support for Unity and Wwise targets, but it would be great to have a fully open-source stack and better integration with Godot. If Godot can provide its own UX/design aspects, multi-channel (and spatial) audio handling, and load audio plugins, this would be a nice separation of concerns. Heavy won't be able to run your entire audio suite, but it can handle specific procedural DSP tasks. Both audio generation ("instrument") and processing ("effect") types are at minimum useful. By the way, it was rumored that Unreal would move to implementing CLAP audio plugin support. It could actually be an interesting move to go for CLAP hosting as the main interface for Godot audio plugin devs. We already have basic CLAP support in Heavy (which could save on some technical debt compared to designing and implementing a spec from scratch).
Commenting so that I get notified of updates on this, and also to note my high level of investment in this proposal.
Implements a way for audio stream playback to be configured via parameters directly in the edited AudioStreamPlayer[2D/3D]. Currently, configuring the playback stream is not possible (or is sometimes hacky, as the user has to obtain the currently played stream, which is not always immediately available). This PR only implements this new feature to control looping in stream playback instances (a commonly requested feature, which was lost in the transition from Godot 2 to Godot 3). But the idea is that it can do a lot more: * If effects are bundled with the stream, control per-playback-instance parameters such as cutoff, resonance, or any other exposed effect parameter. * For the upcoming interactive music PR (godotengine#64488), this exposes an easy way to change the active clip, which was not possible before. * For the upcoming parametrizable audio support (godotengine/godot-proposals#3394), this allows editing and animating audio graph parameters. In any case, this PR is required to complete godotengine#64488.
Now that godotengine/godot#64488 is approaching completion, I think it would make sense to start discussing this again, how it works with the new system, and whether the graph approach is still best. |
@IntangibleMatter Well, that PR implements adaptive music, this proposal is about building synths and mixers. Most of the features are not overlapping I think. |
Perhaps allowing sends between buses (like in most DAWs) would be enough for the graph functionality, or at least serve as a quick starting point.
I feel it would be beneficial to add more synthesis options than the listed oscillators/noise generators. Having Karplus–Strong, granular, wavetable, or other options with some sort of MIDI control might be a good starting place. A stream could be structured similarly to the randomization stream (in terms of holding multiple different streams), with each assigned a MIDI voice. So something like this:
or just using the AudioStreamSynchronized found here: godotengine#64488 (though if the intent is to be able to send MIDI data to the substreams, I'm not sure that's the right approach). I mention MIDI only because Godot currently does not officially support OSC; I believe OSC would be a much preferable solution for stream communication than MIDI. The instrument streams could be a single stream resource that uses something like the Synthesis ToolKit (which the ChucK language uses) and then use an enum to choose the "instrument". That way there would not be a separate stream for each oscillator/synthesis/sound generator type, just a single InstrumentStream. I think the graph approach will be the most approachable for those who have worked with something like MaxMSP or Pd in the past, so I think it's a useful feature to keep. Though having some sort of hybrid timeline/graph system may be most useful, sort of like OpenMusic's maquette system, where the sound-generating graph can sit inside a larger horizontal block in a timeline. Sort of like this:
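Karplus–Strong in particular is simple enough to sketch in a few lines. A hedged Python illustration of the textbook formulation (the buffer length, averaging decay filter, and parameter names are illustrative, not a proposed Godot API):

```python
import random

def karplus_strong(frequency=220.0, sample_rate=44100, num_samples=8000, seed=0):
    """Minimal Karplus-Strong plucked-string synthesis.

    A delay line seeded with noise is read out in a loop; each sample fed
    back is the average of two neighbours, which low-pass filters the loop
    and makes the "string" decay naturally.
    """
    rng = random.Random(seed)
    period = int(sample_rate / frequency)  # delay-line length sets the pitch
    buf = [rng.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for _ in range(num_samples):
        out.append(buf[0])
        # Averaging filter: feed the smoothed sample back into the line.
        fed_back = 0.5 * (buf[0] + buf[1])
        buf = buf[1:] + [fed_back]
    return out
```

A graph node wrapping this would only need frequency and decay as exposed parameters, which fits the Controller-port model of the proposal.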
Provides a simple solution to godotengine/godot-proposals#3394 by adding sine and sawtooth signals and white, brown, and pink noise generators as audio filter components.
Implemented a simple tone and noise generator. It uses the audio filter backend and UI. These are spectral-domain images of the signals produced by this generator (sine and sawtooth signals at 400 Hz; white, brown, and pink noise spectra). Pink noise is based on this article: I tuned it by trial and error for 44100 Hz sample-rate audio. I think it needs a higher-order filter and parameter tuning, and I am not sure how it will behave for other sampling rates. If it is considered as an alternative solution, I am planning to create a PR.

**Background information**

**Tone**

Tone is generated by simulating an ideal oscillator. Its goal is to generate a signal of the form s[n] = A · sin(φ[n]), where the phase advances each sample as φ[n+1] = φ[n] + 2πf / f_s, starting from an initial phase φ[0].

**Noise**

White noise can be generated by Godot's math library's randfn function. The leftmost column of the second image above proves it: it is constant over the whole spectrum, as expected. Brown noise can be generated by accumulating the white noise samples. Pink is tricky: it is generated by passing the output of the white noise generator through a filter.
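To make the noise recipes above concrete, here is a hedged Python sketch (not the actual patch): white noise from a Gaussian generator, brown noise as a leaky running sum, and pink noise via Paul Kellett's well-known three-pole approximation from musicdsp.org, whose coefficients are tuned for 44.1 kHz — the leak and gain constants here are illustrative choices:

```python
import random

def white_noise(n, seed=0):
    """White noise: independent Gaussian samples (Godot's randfn equivalent)."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

def brown_noise(white):
    """Brown(ian) noise: a leaky running sum (integration) of white noise."""
    out, acc = [], 0.0
    for w in white:
        acc = 0.99 * acc + 0.02 * w  # the leak keeps the random walk bounded
        out.append(acc)
    return out

def pink_noise(white):
    """Pink noise via Paul Kellett's three-pole filter (tuned for 44.1 kHz)."""
    b0 = b1 = b2 = 0.0
    out = []
    for w in white:
        b0 = 0.99765 * b0 + w * 0.0990460
        b1 = 0.96300 * b1 + w * 0.2965164
        b2 = 0.57000 * b2 + w * 1.0526913
        out.append((b0 + b1 + b2 + w * 0.1848) * 0.05)
    return out
```

As the comment above notes, a higher-order filter would track the ideal -3 dB/octave pink slope more closely, and the coefficients would need retuning for other sample rates.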
Describe the project you are working on
Godot
Describe the problem or limitation you are having in your project
Sound designers in the industry end up having to use tools such as FMOD or Wwise, with no real FOSS alternative. Godot users are forced to use this proprietary software in order to have more advanced audio in their games.
Interactive music is being handled by a separate PR/proposal being worked on, so this focuses entirely on procedural audio.
Describe the feature / enhancement and how it helps to overcome the problem or limitation
This would be a special AudioStream resource in Godot that would make it possible to have procedural/parametrizable audio playback in Godot.
Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams
A new AudioStream class will be added: AudioStreamGraph. When edited, it will look like this:
The general idea is that the editor will be divided in two.
In the audio graph, several AudioGraphNodes will exist. Those have input and output Audio connections, as well as Controller connections (as a different type of data port).
The following audio graph nodes will be present:
Basic
Generators
These generate audio output:
Operators
These combine audio outputs:
And the idea is to have what is common in this type of software for controlling parameters, such as LFOs, envelopes, etc. This is just the general proposal.
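To illustrate what such controller connections could do, here is a hedged Python sketch (the function names, the linear ADSR shape, and the sample rate are illustrative, not proposed API): an LFO and an envelope produce per-sample control signals that modulate a sine generator's amplitude:

```python
import math

SAMPLE_RATE = 44100  # assumed mix rate, illustrative only

def lfo(rate_hz, n):
    """Low-frequency oscillator: a control signal in [-1, 1]."""
    return [math.sin(2 * math.pi * rate_hz * i / SAMPLE_RATE) for i in range(n)]

def adsr(n, attack=0.1, decay=0.1, sustain=0.7, release=0.2):
    """Linear ADSR envelope over n samples (stage lengths as fractions of n)."""
    a, d, r = int(n * attack), int(n * decay), int(n * release)
    s = n - a - d - r  # remainder is the sustain stage
    env = []
    env += [i / max(a, 1) for i in range(a)]                          # 0 -> 1
    env += [1.0 - (1.0 - sustain) * i / max(d, 1) for i in range(d)]  # 1 -> sustain
    env += [sustain] * s                                              # hold
    env += [sustain * (1.0 - i / max(r, 1)) for i in range(r)]        # -> 0
    return env

def tremolo_tone(freq, n, lfo_rate=5.0, depth=0.5):
    """Controller ports in action: LFO + envelope modulate a generator."""
    ctrl, env = lfo(lfo_rate, n), adsr(n)
    return [
        env[i] * (1.0 - depth + depth * 0.5 * (ctrl[i] + 1.0))
        * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
        for i in range(n)
    ]
```

In graph terms, the LFO and envelope would be nodes whose Controller outputs plug into the generator's amplitude port, rather than function calls.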
If this enhancement will not be used often, can it be worked around with a few lines of script?
Audio is low level.
Is there a reason why this should be core and not an add-on in the asset library?
While this could be an add-on, games often require a mature and maintained solution for this. Given the popularity of tools such as Wwise and FMOD, it seems better for this to be core.