A bit of an article concerning the new surround sound music production tech Apple is promoting. With technocrats pushing production as more important than composition (or anything, really), and Apple needing more of your money, surround sound music as a standard wouldn't be surprising. It might mean they could resell you all the stuff you already bought. The application possibilities, though, bring up questions:
Most music you hear in movies/shows/games plays OVER the visual material, not WITHIN it, because otherwise it can interfere with immersion. Sound effects and dialog are different: they play within the realm of the visuals, so surround sound makes sense for them, and it's been used that way for decades. It works in movie theaters, which are rapidly disappearing thanks to COVID-19. It also works for a PROPERLY set up home theater, which most people don't have.
So Apple, not at all known for pushing gimmicks, has got surround sound music and special headphones for it. Enthusiasts claim these headphones "kind of" reproduce the 3D surround sound. As opposed to normal headphones, which can already reproduce a 3D sound stage just fine. Or good normal speakers, which, when properly set up, can ALSO reproduce 3D sound, though not quite the same way headphones can. In Logic Pro, WITHOUT creating a surround sound project profile, one can simply adjust the imaging placement of any sound or effect using the onboard plugin, effectively changing the perceived placement of sounds when listening on normal headphones. It also works to a degree when listening on a properly set up stereo pair of speakers. This technique is likely available in most DAWs worth a damn and gives the same sense of immersion/dimension as surround sound. It can also save bandwidth by using phase and imaging placement instead of extra channels. All without the more expensive, time-consuming surround sound music production.
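At its simplest, the stereo placement described above is just level relationships between two channels. As a rough illustration (not Logic's actual plugin, just the generic constant-power pan law most DAWs use under the hood), here's a minimal Python/NumPy sketch of placing a mono source anywhere in a stereo field:

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Place a mono signal in the stereo field with a constant-power pan law.

    pan: -1.0 (hard left) .. 0.0 (center) .. +1.0 (hard right).
    Returns an (N, 2) stereo array whose total power stays constant
    as the source moves across the image.
    """
    theta = (pan + 1.0) * np.pi / 4.0   # map [-1, 1] -> [0, pi/2]
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=-1)

# A 440 Hz tone placed slightly left of center:
t = np.linspace(0, 1.0, 44100, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
stereo = constant_power_pan(tone, -0.3)
```

Because cos² + sin² = 1, the perceived loudness doesn't dip as the source sweeps across the image, which is the point of using this law instead of a naive linear crossfade.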
Surround sound enthusiasts talk about recording an orchestra twice so they can use surround sound to seat the listener in the middle of the concert. You can do that with just one recording and no surround sound production by simply using a binaural dummy-head mic (that head-shaped mic thing), which sells for less than a studio surround sound rig. Just place it where you want the listener to sit in the concert. Done. Simpler, faster, less expensive, just as effective. Apple's headphones only "kind of" reproducing surround sound implies a proper surround speaker system would be needed. All the while the trend in gaming is VR headsets and such, which means gamers will be on headphones, which they mostly already are.
Enthusiasts claim surround sound for music is good because it "complicates the profession." Rubbish. From an engineering point of view, complicating a profession usually decreases efficiency. Good engineers know that efficiency is one of the foundational pillars of engineering, and simplicity goes hand in hand with efficiency. Surround sound enthusiasts also claim: more complicated = more professional. Hardly. If two people do the same job with the same outcome, but one takes twice as long, makes the process twice as complicated, and spends twice as much money, most will say the faster person was more professional, because they flippin' are. Professionals get the job done as efficiently as possible, knowing there is a set time and budget for any project. The more time spent on needlessly complicated production, the less time left for composition and the other important things that impact the quality of the end product. It's not surprising that some technophiles would love surround sound music production as the new standard: it would let them rack up more billable hours doing busy work and shift attention away from their possibly subpar composition skills. Surround sound music production sure smells ripe for posers.
So, with surround sound you can make music surround the listener. That's the exact same thing one can achieve mixing normal stereo tracks with proper phase/imaging placement, using the aforementioned Logic plugin, without the extra time and financial expense of surround sound production.
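The "phase" half of that phase/imaging toolkit is timing: delaying one channel by under a millisecond shifts the perceived position of a source via the precedence (Haas) effect, without touching levels at all. A hedged sketch, assuming NumPy and a plain sample-delay model (no actual HRTF processing, which real binaural plugins add on top):

```python
import numpy as np

def itd_place(mono, delay_samples):
    """Shift perceived placement with an interaural time delay (Haas effect).

    A positive delay_samples lags the RIGHT channel, pulling the image left;
    a negative value lags the LEFT channel, pulling it right. At 44.1 kHz,
    ~30 samples (~0.7 ms) is roughly the largest natural head delay.
    Returns an (N, 2) stereo array.
    """
    n = len(mono)
    d = abs(delay_samples)
    if delay_samples >= 0:
        left = mono.copy()
        right = np.zeros(n)
        right[d:] = mono[:n - d]      # right lags -> image shifts left
    else:
        right = mono.copy()
        left = np.zeros(n)
        left[d:] = mono[:n - d]       # left lags -> image shifts right
    return np.stack([left, right], axis=-1)
```

Combined with level panning, this timing trick is a big part of why a well-mixed plain stereo track already has depth and placement on headphones.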
Preserving and promoting immersion depends on the situation. In movies/shows/games, unless the music is IN the scene (coming from a radio within the imaginary environment, for example), it almost always plays OVER the visual realm, not WITHIN it. For music, re-creating the acoustics of a scene would often confuse immersion. Sound, particularly music, is one of the most potent brain stimulants. If you made the music sound IN scene, your brain might ask: Why is music playing in this hallway Batman is sneaking through? Where is the sound system playing this music? Why are the characters not reacting to the music that's obviously IN the scene?
On the flip side, if surround sound music recreated a perfect concert hall or some other sound stage unrelated to the scene, it could compete with the scene's immersion. The brain might ask: why do the dialog and sound effects sound like they're in the scene, but the music sounds like it's in a concert hall that I'm NOT in? Furthermore, if you DO match the music's acoustics to the scene, every editing cut, player movement, or environment change would shift the music's acoustic phase, often creating audible whiplash and possibly harming immersion. It would be unrealistic to provide a recorded acoustic variant for every situation. So, for games, it's more feasible to feed the engine a dry, normal stereo music recording and let the game system process environmental acoustics as appropriate, which would not require producing the music in surround sound in the first place. There's also the issue that music simply does not sound good in a lot of environments, period.
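The dry-music-plus-runtime-acoustics idea can be sketched as convolving the stereo bed with whatever impulse response (IR) matches the player's current environment, then swapping IRs when the player changes rooms. A minimal illustration (the IR shape and mix values here are made up for the example, not from any real game engine):

```python
import numpy as np

def apply_room(dry, impulse_response, wet_mix=0.4):
    """Blend a dry stereo music bed with a room-convolved copy of itself.

    dry: (N, 2) stereo array; impulse_response: 1-D IR applied to both
    channels. A game engine would keep the music dry on disk and pick
    the IR (hallway, cave, hall...) at runtime from the player's location.
    """
    wet = np.stack(
        [np.convolve(dry[:, ch], impulse_response)[: len(dry)] for ch in (0, 1)],
        axis=-1,
    )
    return (1.0 - wet_mix) * dry + wet_mix * wet

# Hypothetical hallway IR: a direct spike plus a short decaying noise tail.
rng = np.random.default_rng(0)
hallway_ir = np.exp(-np.arange(2000) / 300.0) * rng.standard_normal(2000) * 0.1
hallway_ir[0] = 1.0  # direct sound
```

One dry recording plus a handful of IRs covers every environment, which is exactly why baking the acoustics into a surround mix ahead of time buys so little here.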
Exceptions to all this exist, of course, but they don't seem common enough to justify surround music production as a standard, since the exceptions require very specific situations and careful implementation. Otherwise, immersion can be degraded. I could be wrong, but in general, this is why MOST music (not sound effects or dialog) plays OVER the visual realm, not within it. Oh, and very kindly, Apple can eat a bag of dicks. Just sayin'.