Composing with samples feels like trying to paint with brushes tied to strings stretched across the studio. I'm never satisfied with the results. It all sounds a bit meh.
Real acoustic sound is far superior, even though virtual instruments do serve a purpose. That's just my view; plenty will disagree, but this thread isn't meant to start that argument. It's to explore whether AI can help. There has been a synthetic world since Bob Moog, and if that's your thing, great, but here I'm talking about real-world emulation.
Take a drum head, for example. A beginner might assume that if a drummer hits it with the same force, it makes the same sound. It doesn't, even leaving reverb out of it. There are different tones across the surface of the head, and a skilled drummer knows how to exploit them.
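To put some numbers behind that, here's a toy sketch in Python (my own illustration, not anything from a real product), assuming an idealized circular membrane: each mode rings at a frequency set by a Bessel-function zero, and the strike position decides how strongly each mode speaks.

```python
# A minimal sketch of why strike position matters, assuming an ideal
# circular membrane (no air loading, uniform tension). Mode (m, n)
# rings at a frequency set by the n-th zero of the Bessel function
# J_m, and a strike at radius r0, angle theta0 excites each mode in
# proportion to the mode shape sampled at that point.
import numpy as np
from scipy.special import jn_zeros, jv

SR = 44100               # sample rate (Hz)
F_FUNDAMENTAL = 110.0    # frequency of the (0,1) mode (Hz), arbitrary choice
DECAY = 4.0              # exponential decay rate (1/s), arbitrary choice

def strike(r0, theta0, dur=1.5, modes=((0, 1), (1, 1), (2, 1), (0, 2), (1, 2))):
    """Synthesize a hit at polar position (r0 in [0, 1], theta0)."""
    t = np.arange(int(SR * dur)) / SR
    j01 = jn_zeros(0, 1)[0]                  # zero that sets the fundamental
    out = np.zeros_like(t)
    for m, n in modes:
        jmn = jn_zeros(m, n)[n - 1]          # n-th zero of J_m
        f = F_FUNDAMENTAL * jmn / j01        # frequency of mode (m, n)
        amp = jv(m, jmn * r0) * np.cos(m * theta0)  # excitation at strike point
        out += amp * np.exp(-DECAY * t) * np.sin(2 * np.pi * f * t)
    return out / np.max(np.abs(out))

center = strike(r0=0.05, theta0=0.0)  # mostly the radially symmetric modes
edge   = strike(r0=0.85, theta0=0.0)  # the higher modes come forward near the rim
```

Even this toy model makes a centre hit sound rounder than a hit near the rim, because the off-centre strike feeds the higher, asymmetric modes.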
As a sax player, I know the reed is everything; it took me years to develop my relationship with the cane.
Vibrato is such a personal thing. It's shaped for each sustained note, even at the same "velocity"/volume. Live as long as you like, you will never produce the same oscilloscope trace for two notes – ever. That's a big part of the beauty of music. Every note is a spontaneous muse, a relationship, a moment in time, if you like. Mechanical vibrato, or randomization, is not the answer. You can tell Stan Getz from Charlie Parker in an instant. Every real instrument behaves this way. The ear knows it, even if the conscious mind doesn't.
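To see why randomization alone falls short, here's a small sketch (all names and numbers are mine, purely illustrative). A fixed LFO repeats identically every time; a random-walk version never repeats, but it still wanders with no phrase-level intent, which is exactly my complaint.

```python
# A minimal sketch contrasting the mechanical vibrato of a typical
# sampler LFO with a randomly drifting one. Parameters are illustrative.
import numpy as np

SR = 44100

def note(f0, dur, rate_hz, depth_cents, drift=0.0, seed=0):
    """Sine tone with vibrato. drift > 0 lets rate/depth random-walk."""
    rng = np.random.default_rng(seed)
    n = int(SR * dur)
    # Slow random walks around the nominal rate and depth.
    rate = rate_hz + drift * np.cumsum(rng.normal(0, 0.01, n))
    depth = depth_cents + drift * np.cumsum(rng.normal(0, 0.05, n))
    phase = 2 * np.pi * np.cumsum(rate) / SR       # integrate the varying rate
    f = f0 * 2 ** (depth * np.sin(phase) / 1200)   # cents -> frequency ratio
    return np.sin(2 * np.pi * np.cumsum(f) / SR)

mechanical = note(440, 2.0, rate_hz=5.5, depth_cents=30)             # identical every time
drifting   = note(440, 2.0, rate_hz=5.5, depth_cents=30, drift=1.0)  # never repeats, still aimless
```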
Can AI and GPUs give us a more personal, intimate connection with our virtual sounds? Could we build a 3D model of a globally responsive drum head – one that, if you dropped marbles onto it, would produce the right sound for exactly where each marble landed and how much momentum it carried?
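This is classic physical-modelling territory. As a deliberately crude sketch (a toy, not a proposal for the real engine), a finite-difference model of the 2D wave equation already responds to impact position and momentum rather than playing back recordings:

```python
# A toy finite-difference model of the "marbles on a drum head" idea,
# assuming a square membrane obeying the 2D wave equation. Every impact
# position and momentum genuinely changes the output; nothing is sampled.
import numpy as np

N = 64                # grid resolution
C = 0.45              # Courant number (must stay below 1/sqrt(2) for stability)
DAMP = 0.9998         # per-step energy loss

u_prev = np.zeros((N, N))
u      = np.zeros((N, N))

def drop_marble(x, y, momentum):
    """Inject displacement at grid cell (x, y), scaled by momentum."""
    u[x, y] += momentum

def step():
    """Advance the membrane one time step (leapfrog scheme)."""
    global u_prev, u
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    u_next = (2 * u - u_prev + (C ** 2) * lap) * DAMP
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0  # clamped rim
    u_prev, u = u, u_next
    return u[N // 3, N // 3]   # a "microphone" at one fixed point on the head

drop_marble(10, 50, momentum=1.0)   # two marbles: different spots, different speeds
drop_marble(40, 20, momentum=0.3)
audio = np.array([step() for _ in range(44100)])  # one second of output
```

This grid is tiny and the physics naive, but the point stands: it's a per-sample simulation, and it's exactly the kind of embarrassingly parallel arithmetic GPUs were built for.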
Can the gaming industry help here? A GPU/NPU generates graphics; why not apply the same ideas to sound? Tensor cores churn through matrix multiply-and-accumulate operations that are used, among other things, to place virtual objects in space. They can build a virtual "world space" in which an object has an exact position and an identity. Modern GPUs perform trillions of calculations per second, and (together with ray tracing) that is what creates the 3D virtual reality we see in video games.
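For anyone who hasn't met the graphics side of this, here's the core trick in a few lines (a minimal sketch, assuming the usual column-vector convention): one 4x4 matrix carries an object from its local coordinates into the shared world, and GPUs batch millions of these multiplies per frame.

```python
# A minimal sketch of the "world space" idea from the game pipeline.
# A 4x4 model matrix places an object's local geometry into a shared
# 3D world; homogeneous coordinates let translation be a matrix too.
import numpy as np

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

def rotation_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4)
    m[0, 0], m[0, 2], m[2, 0], m[2, 2] = c, s, -s, c
    return m

# Local-space vertices of some object, one column per point.
verts = np.array([[0, 0, 0, 1],
                  [1, 0, 0, 1],
                  [0, 1, 0, 1]], dtype=float).T

world = translation(5, 0, -2) @ rotation_y(np.pi / 4)  # model matrix
world_verts = world @ verts   # every vertex now has a world position
```

Swap "vertex" for "sound source" and the same matrix gives you a listener-relative position you could feed straight to a spatializer.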
Here's a good video on the topic if you want to dig deeper.
Why not build a sound space the way games build a world space, with sound objects constructed inside a 3D auditory environment? We could add matching playable visual instruments, but that isn't always necessary and isn't really my point. What I'm proposing is using ray tracing and matrix mapping for virtual sound objects and acoustic environments.
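Acoustics research already does a version of this. As a toy example (a "shoebox" room with the image-source method standing in for full ray tracing; every value here is a placeholder), geometry alone produces the echo pattern:

```python
# A toy sketch of acoustics treated like graphics ray tracing: the
# image-source method mirrors a sound source across the walls of a
# rectangular room, and each mirror image contributes one echo to an
# impulse response. Geometry in, reverb out; no samples involved.
import numpy as np

SR = 44100
SPEED = 343.0                    # speed of sound, m/s
ROOM = np.array([6.0, 4.0, 3.0]) # room dimensions, m
SRC = np.array([1.0, 1.0, 1.5])  # source position, m
MIC = np.array([4.0, 2.5, 1.2])  # listener position, m
ABSORB = 0.3                     # fraction of amplitude lost per wall bounce

ir = np.zeros(SR)                # one-second impulse response
ORDER = 8                        # max reflections considered per axis
for nx in range(-ORDER, ORDER + 1):
    for ny in range(-ORDER, ORDER + 1):
        for nz in range(-ORDER, ORDER + 1):
            n = np.array([nx, ny, nz])
            # Mirror the source across the walls |n| times per axis.
            img = n * ROOM + np.where(n % 2 == 0, SRC, ROOM - SRC)
            bounces = abs(nx) + abs(ny) + abs(nz)
            d = np.linalg.norm(img - MIC)
            delay = int(SR * d / SPEED)
            if delay < len(ir):
                ir[delay] += (1 - ABSORB) ** bounces / max(d, 0.1)
# Convolve any dry signal with ir to place it in this room.
```

Graphics-style ray tracing generalizes this to arbitrary geometry and materials, which is exactly the workload RTX-class hardware is built to accelerate.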
Why not a space where you can design a drum head intimately – not just how it looks, but using the same cores to build the virtual sound model and bind the visual drum head to it? Plenty of other sound parameters could be mapped the same way; see the sketch below for what that binding might look like.
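One way to picture that link (field names entirely hypothetical, just to make the idea concrete): a single shared description that the renderer draws and the physical model sounds.

```python
# A sketch of one shared spec that both a renderer and a physical model
# could read; the fields and the renderer/synth calls are hypothetical.
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class DrumHeadSpec:
    radius_m: float = 0.18          # drawn on screen AND sets the pitch
    tension: float = 900.0          # N/m; higher tension -> higher modes
    damping: float = 4.0            # seen as material, heard as decay
    thickness_map: Optional[np.ndarray] = None  # per-point mass for both views

spec = DrumHeadSpec()
# renderer.draw(spec); synth.load(spec)   # hypothetical calls: the same
#                                         # numbers drive the eye and the ear
```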
We already know that AI runs best on GPU-style architectures, and new chips ship with NPUs, which are similar in spirit. What do we gain by thinking about sound the way a game developer thinks about virtual 3D imagery?
Z