(Replying to PARENT post)

DAW innovation will only come from inventing new algorithms.

There needs to be significant number theory and computer science algorithmic work on sound and on how we represent sound as data.

GPUs currently cannot work with sound data in the processing chain, and multicore is basically just used to scale horizontally (i.e. to run more plugins or instruments).

New algorithms are needed to scale out audio processing, as well as to make use of new hardware types (for example, the GPU).

👀barkingcat🕑2y🔼0🗨️0

(Replying to PARENT post)

You seem to think DAW makers don't already specialize a ton in DSP, algorithms, and concurrency - I can assure you that they do, and that innovation and optimization happen at a very healthy pace. There is significant market pressure to run tracks and plugins in a highly optimized way. Several DAWs have a visible CPU-usage meter, and some let users directly configure the process-isolation model for plugins.

However, audio has a very different set of constraints from other types of workloads - the hallmarks being one worker doing LOTS of number crunching on a SINGLE stream of floating-point numbers (well, two streams, for stereo), that processing necessarily happening in SERIAL, and the results needing to come back INSTANTLY. Why serial? Because for most nontrivial audio processing algorithms, the results depend not just on the previous sample, or even the previous chunk of samples, but often on a rolling computation over a very long history of prior samples. Why instantly? Because plugins need to be able to run in realtime for production and auditioning, so every processing block has a very tight budget - at most a few tens of milliseconds, depending on buffer size - to do all its work, and some plugins use a lot of that budget. All of these constraints also apply across an entire track: every plugin on a track has to run in serial, one at a time, and they need to share memory and act on the same block of audio.
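To make the "why serial" point concrete, here's a minimal sketch of the simplest recursive filter (Python, with an invented one_pole_lowpass helper - real plugin DSP is far heavier, but the data dependence is the same):

```python
import numpy as np

def one_pole_lowpass(x: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """y[n] = alpha * x[n] + (1 - alpha) * y[n-1]

    Each output sample depends on the previous *output*, so the loop is
    inherently serial: you can't split the buffer across workers without
    changing the result.
    """
    y = np.empty_like(x)
    state = 0.0  # filter state, carried sample to sample (and block to block)
    for n in range(len(x)):
        state = alpha * x[n] + (1.0 - alpha) * state
        y[n] = state
    return y

# 512 samples at 48 kHz is ~10.7 ms of audio; every plugin in the track's
# chain has to finish, one after another, inside that kind of window.
block = np.random.randn(512).astype(np.float32)
filtered = one_pole_lowpass(block)
```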

One thing you might notice is that these constraints are pretty bad conditions for GPU work. You're not the first to think of trying it - it's just not a great fit for most kinds of audio processing. There are some algorithms that can run massively parallel and independent, but they're outliers. Horizontally scaling different tracks across CPU cores, however, works splendidly.
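And a rough sketch of what that horizontal scaling looks like (the names fake_plugin and mix_block are invented for illustration; a real DAW does this with realtime worker threads in C/C++, not Python processes):

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def fake_plugin(x: np.ndarray) -> np.ndarray:
    """Stand-in for one plugin: a trivial gain stage (real plugins do far more work)."""
    return 0.8 * x

def process_track(track: np.ndarray) -> np.ndarray:
    """One track's chain: plugins applied strictly in order, sharing the same block."""
    out = track
    for plugin in (fake_plugin, fake_plugin):  # e.g. EQ then compressor, in serial
        out = plugin(out)
    return out

def mix_block(tracks: list[np.ndarray]) -> np.ndarray:
    """Tracks are independent until the mix bus, so they can run on separate cores."""
    with ProcessPoolExecutor() as pool:
        processed = list(pool.map(process_track, tracks))
    return np.sum(processed, axis=0)  # only the final sum waits on every track

if __name__ == "__main__":
    tracks = [np.random.randn(512).astype(np.float32) for _ in range(8)]  # 8 tracks, one buffer each
    mixed = mix_block(tracks)
```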

👀jrajav🕑2y🔼0🗨️0

(Replying to PARENT post)

> GPUs currently cannot work with sound data in the processing chain

https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s...

> New algorithms are needed to scale out audio processing, as well as to make use of new hardware types (for example, the GPU)

What kind of "scale out" are you referring to here if not "to have more plugins or instruments"?

👀tonyarkles🕑2y🔼0🗨️0

(Replying to PARENT post)

It’s still pretty new, but people are taking advantage of GPUs for audio these days: https://www.gpu.audio
👀sporkl🕑2y🔼0🗨️0

(Replying to PARENT post)

The GPU separates vocals and percussion from the instruments on my DJ laptop (also, why has nobody made spleeter an LV2 plugin yet?)
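For reference, spleeter's offline Python API is just a couple of calls (a sketch based on its documented usage; the file paths here are made up):

```python
# pip install spleeter  - ships pretrained stem models; uses the GPU if TensorFlow finds one
from spleeter.separator import Separator

separator = Separator('spleeter:4stems')         # vocals / drums / bass / other
separator.separate_to_file('mix.wav', 'stems/')  # writes stems/mix/vocals.wav, drums.wav, ...
```

Running that under a plugin host's realtime constraints is the harder part, which is presumably why nobody has shipped it as an LV2 yet.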
👀bandrami🕑2y🔼0🗨️0