(Replying to PARENT post)
However, audio has a very different set of constraints from other types of workloads - the hallmarks being one worker doing LOTS of number crunching on a SINGLE stream of floating-point numbers (well, two streams, for stereo), that processing necessarily happening in SERIAL, and the results coming back INSTANTLY. Why serial? Because for most nontrivial audio processing algorithms, the result depends on more than just the previous sample, or even the previous chunk of samples - it's often a rolling algorithm that depends on a very long history of prior samples. Why instantly? Because plugins need to be able to run in realtime for production and auditioning, so every processing block has a very tight budget of tens of milliseconds (at most) to do all its work, and some plugins use a lot of that budget. All of these constraints apply across an entire track as well - every plugin on a track has to run in serial, one at a time, and they need to share memory and act on the same block of audio.
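To make the serial constraint concrete, here's a toy sketch (my own illustration, not any real plugin API): a one-pole low-pass filter whose every output sample depends on the previous output, so the loop can't be parallelized across samples, plus the arithmetic behind the per-block realtime budget.

```python
def one_pole_lowpass(samples, a=0.99):
    """Toy one-pole low-pass filter: y[n] = a*y[n-1] + (1-a)*x[n]."""
    out = []
    y = 0.0
    for x in samples:
        # Rolling state: y carries the influence of ALL prior samples,
        # so sample n cannot be computed before sample n-1 is done.
        y = a * y + (1.0 - a) * x
        out.append(y)
    return out

def block_budget_ms(buffer_size=1024, sample_rate=44100):
    """Realtime deadline for one block: the whole plugin chain must
    finish before the audio hardware needs the next buffer."""
    return 1000.0 * buffer_size / sample_rate  # ~23 ms at these settings
```

Shrink the buffer for lower latency and the budget gets even tighter - at 128 samples it's under 3 ms for everything on the track.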
One thing you might notice is these constraints are pretty bad conditions for GPU work. You're not the first to think of trying that - it's just not a great fit for most kinds of audio processing. There are some algorithms that can run massively parallel and independent, but they're outliers. Horizontally scaling different tracks across CPUs, however, works splendidly.
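The track-level parallelism is easy to sketch (again my own illustration - `plugins` here is just a list of callables standing in for a real plugin chain): within a track the chain runs serially on one block, but independent tracks can be farmed out to separate workers.

```python
from concurrent.futures import ThreadPoolExecutor

def process_track(block, plugins):
    # Serial within a track: each plugin sees the previous one's output
    # and operates on the same block of audio.
    for plugin in plugins:
        block = plugin(block)
    return block

def process_all(tracks, chains):
    # Parallel across tracks: each (block, plugin-chain) pair is
    # independent, so they scale horizontally across workers.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(process_track, tracks, chains))
```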
(Replying to PARENT post)
https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s...
> New algorithms are needed to scale out audio processing, as well as make use of new hardware types (for example, using the gpu)
What kind of "scale out" are you referring to here if not "to have more plugins or instruments"?
(Replying to PARENT post)
There needs to be significant number-theory and computer-science algorithmic work on sound and how we represent sound as data.
GPUs currently cannot work with sound data in the processing chain, and multicore is basically just used to scale horizontally (i.e., to have more plugins or instruments).
New algorithms are needed to scale out audio processing, as well as make use of new hardware types (for example, using the gpu)