I assume this can do sound rendering, like simulating a conversation on a subway platform while a subway passes by?
Or singing while walking through a tunnel?
Since it has capabilities that would be hard to replicate, rather than show the tool on the landing page, I would show the output. Remove the clutter and force people to listen to what the tool can produce.
The tool, as it is now, is marketed toward people like yourself: people who wanted to build that tool and already know what it is. But anyone will understand what it can do after listening to sample output.
This is great (in theory). During lockdown I got an ambisonic mic (Rode NF-SFW1) and used it to create Dolby Atmos experiences. The workflow - including sending it to Dolby's tool every time - was such a pain. Adding additional 3d elements was especially annoying and limiting.
Unfortunately that's no longer my hobby so can't test this for you but definitely scratches an itch for past me. Nice
It's funny to see this now because I've been looking into audio spatialization for a couple of weeks. After a lot of research, and even trying to write my own spatializer plugin, I found that game engines probably have the most complete toolset for this task. (Specifically, I'm using Godot with https://valvesoftware.github.io/steam-audio/.)
Steam Audio is pretty awesome in that regard because it supports HRTF and all the physics-based goodies like occlusion, reflection, and sound propagation. So you can get really, really immersive spatial audio.
The only downside with this solution is that you can't do offline rendering. So my question is:
Can Audiocube do offline rendering? It seems like it would be one killer feature for my use case.
Very cool app. If you can crack multi-channel output formats (5.1, Atmos) I can see a lot of prosumers who would happily buy the product. Even the most basic tools for Dolby 5.1 are overpriced IMHO and Atmos encoder prices are either far beyond the reach of most DAW users or require use of Pro Tools.
One downside of selling into the pro audio market is piracy unfortunately. I learnt that the hard way and ended up having to use iLok.
YouTuber Benn Jordan would probably get a kick out of this. He's a major audiophile and did a series of ambisonic ambience recordings.
Will there be a Linux version anytime soon!?
Easy decision to send you a few pounds for this. This is no small task to put together and looks really impressive. I can't wait to try it out :)
I have a history of experimenting with 3D audio. About 15 years ago I built myself a pair of ambisonic microphones, but until recently, I think, the software support for ambisonic capture and mixing was seriously lacking. Back when I built the mics I started working on a plugin suite for the processing, but I could never get it quite right. Nowadays there are more third-party options I can use, and I will spend some time with this again :)
No expert at all here. But could I use this to say, model my room and understand how to treat it acoustically to remove reflections and stuff like that?
Really cool! I've been working on a side project that utilizes spatial audio and I've been pleasantly surprised by the quality I'm getting just using the WebAudioAPI HRTF spatialization. I'm sure this is leagues ahead but it was really nice to find that I didn't really need to do anything to get decent spatial audio other than set the panner node to HRTF mode.
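For anyone curious what "set the panner node to HRTF mode" amounts to, here is a minimal sketch. The `PannerLike` interface and `spatialize` helper are my own illustration, not the poster's code; in a browser the node itself would come from `AudioContext.createPanner()`.

```typescript
// A structural stand-in for the parts of the Web Audio PannerNode we touch,
// so the helper can be exercised outside a browser as well.
interface PannerLike {
  panningModel: string;
  distanceModel: string;
  positionX: { value: number };
  positionY: { value: number };
  positionZ: { value: number };
}

// Configure a panner for binaural spatialization. PannerNode defaults to the
// "equalpower" model; switching panningModel to "HRTF" is the one change the
// comment describes. Coordinates are meters relative to the listener.
function spatialize(panner: PannerLike, x: number, y: number, z: number): PannerLike {
  panner.panningModel = "HRTF";     // head-related transfer function rendering
  panner.distanceModel = "inverse"; // natural rolloff with distance
  panner.positionX.value = x;
  panner.positionY.value = y;
  panner.positionZ.value = z;
  return panner;
}

// In a browser it would be wired up roughly like this:
// const ctx = new AudioContext();
// const panner = ctx.createPanner();
// spatialize(panner, 2, 0, -1);  // 2 m to the right, 1 m in front
// sourceNode.connect(panner).connect(ctx.destination);
```

That really is all it takes for basic spatialization; the browser supplies the HRTF dataset, so no extra libraries are needed.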
IIUC, I'd be able to take individual closely miked recordings of multiple different instruments and mix them into a soundspace, such that when I listen on stereo headphones, I'd be able to "locate" the sounds on a virtual stage?
(asking because I listen to a lot of live jam music in stereo and noticed that they use a stereo mix with a virtual image)
Are you using Blender's viewport? (Great idea if so; a nearly perfect recreation if not!)
This looks great! How small of an audio buffer have you been able to get down to? Any plans for an API?
I've been developing a VR spatial sound and music app for a few years with the Unity game engine, bypassing the game engine's audio and instead remote-controlling ambisonic VSTs in REAPER. I can achieve low latency with that approach, but it's a bit limited because all the tracks and routing need to be set up beforehand. There's probably a way to script it in REAPER, but that sounds like an uphill battle. It would be a lot more natural to interface with an audio backend that is organized in terms of audio objects in space.
What I'd like is more flexibility to create and destroy objects on the fly. The VSTs I'm working with don't have any sort of occlusion either. That would be really nice to play with. Meta has released a baked audio raytracing solution for Quest, and that's fun for some situations but the latency is a bit too much for a satisfying virtual instrument.
Here's my project for context: https://musicality.computer/vr
Venus Theory and Andrew Huang are two other YT channels that would probably love this. Venus Theory does a lot of sound design and cinematic things, and Andrew Huang just loves to experiment.
Very interesting! Did you write your own acoustic simulation engine for this?
> As a standalone app, Audiocube offers tools, workflows, and processes that go beyond the capabilities of VST plugins.
But does it support VST/AU in order to load instruments rather than "samples"?
Looks interesting, will check it out. I am curious what makes Audiocube different from and better than Logic Pro's binaural and spatial Dolby Atmos mixing?
I've had this idea a few times and happy to see someone do something similar. Will try this out, thanks for sharing it here.
I can see it being useful as a VST, actually; it could be an interesting part of my workflow in Live.
Nice work! Can you export for multichannel playback or is it binaural / stereo?
Is there VST/AU support? I didn't see it mentioned.
Neat interface, especially the grouped objects with matching neighbor contours.
Not sure if it's the ad blocker or Safari itself, but the download links don't appear to work in Safari. I had to switch to Chrome to download.
The free download requires account registration, which will discourage most people from trying it. Even third-party login might make it easier, but I didn't find that option.
Does it work with vst plugins?
This seems intriguing but I'm genuinely confused.
It seems like it "bakes in" spatial audio to binaural stereo?
But who is the market for that?
I love spatial audio on my AirPods but a big feature is that it moves with my head and can even be customized for my ears.
And I certainly don't want it applied when downsizing to a mono Bluetooth speaker.
It seems like you'd need to export your final product to surround/Atmos for the way people want to, and currently do, consume spatial audio? I assume the target here is Apple Music, short films, etc.?
I mean I think the concept of the 3D DAW is great. I just want to make sure there's a product-market fit here, so you can succeed. Or is there a market I'm overlooking?
Not spatial audio, but reminded me of audioGL (2012, but a newer video posted in 2024): https://m.youtube.com/@AudioGL/videos
Is there an audio format that stores the 3D origins of each sound, so that you could theoretically play it through Airpods (or some other spatially-aware headphones) and hear the sound change as you tilt your head?
For those of us unfamiliar with the term DAW, I assume it’s Digital Audio Workstation.
At first glance I thought of: DEW
I am so glad you built this, I just bought a license.
I sketched out a similar app, but never had a super pressing need. I can think of many, many uses for this, from modeling performance spaces, minimizing resonances in industrial settings, crime scene reconstruction, art installations, speaker placement for large concerts, and many more.
What research or similar tools did you look to for inspiration?
Some things that come to mind:
Efficient Interactive Sound Propagation in Dynamic Environments https://cdr.lib.unc.edu/concern/dissertations/5425kb409
Precomputed Wave Simulation for Real-Time Sound Propagation of Dynamic Sources in Complex Scenes http://gamma.cs.unc.edu/PrecompWaveSim/
Immersed boundary methods in wave-based virtual acoustics https://www.pure.ed.ac.uk/ws/portalfiles/portal/257303782/Bi...