MIR 3D Fan Club

MIR3d relies on server technology and runs in parallel with your DAW.
This is correct, but only in the case of MIR 3D used as part of VE Pro. With the plug-in version, all processing is done within the DAW, as with any other plug-in. It is only the main GUI (also known as the "Venue Map") that resides outside the host and is shared by all instances of MIR.
 
On the topic of MIR in VE Pro, I noticed that, when using any VSL instruments, it automatically assigns the profile for that instrument. Is there a way to get MIR to do that when not using VE Pro?
 
No, this only works within VE Pro, but you can still assign instrument profiles manually.
 
This is correct, but only in case of MIR 3D used as part of VE Pro. […]
Yes, I forgot to mention that. 164 of my MIR plug-ins are inside VE Pro. My post was about the dozen instances in the DAW, and I mostly open the same venue in both Venue Maps.
 
Quick question about MIR. It seems like it's not just a single tool, but a few tools that create the overall experience.

- Positioning/Direction w/GUI
- Reverb with multiple impulses and roompacks
- Tool to sweeten the tail of the convolution (MIRacle)
- Roomtone generator
- ?

Are these tools dependent on each other or could I conceivably use them with other tools I might have?

For example, if I have a non-MIR reverb that I would like to use, could I use MIR solely for instrument positioning, by setting the MIR tail to 0 seconds, and then feed it into this other reverb? Alternatively could I also use MIRacle to modulate the tails of other convolution reverbs I have?

It seems like I could, but was wondering if anyone had any experience doing this.
 
In short: yes.
 
could I use MIR solely for instrument positioning, by setting the MIR tail to 0 seconds, and then feed it into this other reverb?
Yes, but there is no actual "0" for MIR's tail. Instead, I suggest either setting the Dry/Wet Ratio of the individual instruments (i.e. MIR Icons) to fully dry, or switching the chosen Venue's output to Dry Solo globally. "Dry" doesn't mean "unprocessed" in MIR's lingo; it's the readily positioned and pre-processed source signal, just without the positional impulse responses.

Using one of the scoring stages that allow for positioning on all sides of the Main Microphone is helpful for this task, too.

Be warned, though, that the "other" reverb you plan to feed will know little to nothing about MIR's positioning info. In 99% of all cases the best result you can achieve this way will be "a bit more to the left" or "a bit more to the right".
 
Be warned that the "other" reverb you plan to feed will know little to nothing about the positioning info of MIR […]
Oh yes, after reading through the docs and seeing all the work that you all did recording and processing, I realize that the major advantage of MIR would go right out the window with this approach! But I did wonder.

I think what's really cool is that all those individual tools, and the way they are decoupled, can supersede other plug-ins that I might buy. That was not obvious to me before. My previous impression of MIR was "very sophisticated reverb", not "can also replace other panning plug-ins" and "can also make your other reverbs sound better".

I know MIR's price might be perceived as high, but considering this decoupling of tools, I think it is very reasonable!
 
Yes, but there is no actual "0" for MIR's tail. […]
Just realized I have a question about this. It's my understanding that a number of factors lead to the perception of a sound source in space, even without reverberations. When using Dry mode, how does MIR 3D position an instrument?

For example, does it include the Haas effect? Does it pan the source audio left and right? Does it use a head-related transfer function (HRTF) to EQ the left/right channels based on the sound source position? Does it EQ when incorporating the distance to the source? I'm sure it does something very sophisticated; I'm mainly wondering how much more MIR 3D does than I would if I were trying to manually position a sound source. And again, this is before incorporating reverb.

BTW: I mocked up a classical orchestral piece with this and I really love how immersed I feel in the soundstage. It's really amazing! I think it's very well thought-out.
 
Thanks for the friendly words! Highly appreciated. :emoji_thumbsup:

This screenshot illustrates my answers below:

[Attached screenshot: 1714950067731.png]
When using Dry mode, how does MIR 3D position an instrument?
The dry signal is encoded in Ambisonics and takes its place at the exact x/y/z coordinates where the impulse source was located during the recording session in a MIR Venue (see #2 in the screenshot above). It gets decoded together with the wet signal as if it had been there from the beginning, but without the artefacts that the direct signal components typically exhibit after the convolution process.
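To make the idea of "encoding a dry source into Ambisonics at a position" concrete, here is a toy first-order sketch in Python. The function name and the ACN/SN3D convention are my own choices for illustration; nothing here claims to reflect MIR's actual implementation, which also works at higher orders:

```python
import math

def encode_foa(sample, azimuth, elevation):
    """Encode a mono sample into first-order Ambisonics
    (ACN channel order W, Y, Z, X; SN3D normalisation).
    Angles are in radians; azimuth 0 means straight ahead."""
    w = sample                                            # omnidirectional
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left/right
    z = sample * math.sin(elevation)                      # up/down
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front/back
    return [w, y, z, x]

# A source directly ahead puts all its directional energy in X:
print(encode_foa(1.0, 0.0, 0.0))  # [1.0, 0.0, 0.0, 1.0]
```

Decoding then renders these channels to any speaker layout (or to binaural), which is why the dry signal can sit at the same virtual position as the wet signal.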

For example, does it include the Haas effect?
No, not at all. Ambisonics is a coincident format.

BTW: The Haas effect is quite problematic when a signal is folded down to a narrower format, e.g. from stereo to mono.
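The fold-down problem can be illustrated numerically: summing a channel with a Haas-delayed copy of itself acts as a comb filter, with deep notches at frequencies where the delayed copy arrives out of phase. A small sketch of my own (not anything MIR does):

```python
import cmath
import math

def mono_fold_gain(freq_hz, delay_ms):
    """Gain of the mono sum (L + R) / 2 at a given frequency,
    when R is a copy of L delayed by delay_ms (a Haas-style offset)."""
    phase = 2 * math.pi * freq_hz * (delay_ms / 1000.0)
    return abs(1 + cmath.exp(-1j * phase)) / 2

# With a 1 ms Haas delay, the mono sum has a complete null at 500 Hz,
# while 1 kHz comes back fully in phase:
print(mono_fold_gain(500, 1.0))   # ~0.0 (notch)
print(mono_fold_gain(1000, 1.0))  # ~1.0 (in phase)
```

The notches repeat up the spectrum, which is why Haas-based widening tends to fall apart on mono fold-down.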

Does it pan the source audio from left and right?
Errr ... yes, of course. 8-) Just look at the MIR Icon and you will see exactly where the left and right channels will be placed in the virtual sound field. (... see screenshot)

Does it use a head-related transfer function to EQ the left/right channels based on the sound source position?
No, as this would work only with headphones. But you can easily add binaural effects with external tools of your choice if you set MIR 3D to output un-decoded, raw Ambisonics (up to 3rd order). (... see screenshot)
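For reference, a full-sphere Ambisonics stream of order N carries (N + 1)² channels, so the 3rd-order output mentioned above is a 16-channel stream:

```python
def ambisonic_channels(order):
    """Channel count of a full-sphere Ambisonics stream of a given order."""
    return (order + 1) ** 2

print(ambisonic_channels(1))  # 4  (first order: W, Y, Z, X)
print(ambisonic_channels(3))  # 16 (third order)
```

Any external binaural renderer that accepts raw Ambisonics of the matching order can decode such a stream.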

Does it EQ when incorporating the distance to the source?
Yes, you can switch on a so-called Air Absorption Filter in MIR's Dry Signal Handling. (... see #1 in the screenshot above)
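As a rough intuition for what such a distance filter does: air attenuates high frequencies progressively more with distance. The toy model below is deliberately simplified and the coefficient is made up for illustration; MIR's actual Air Absorption Filter design is not described here:

```python
def air_absorption_db(freq_hz, distance_m, coeff_db_per_m_at_10k=0.1):
    """Toy model of high-frequency air loss: attenuation grows linearly
    with distance and roughly quadratically with frequency, scaled so
    that 10 kHz loses coeff_db_per_m_at_10k dB per metre."""
    return coeff_db_per_m_at_10k * distance_m * (freq_hz / 10000.0) ** 2

# 20 m away, 10 kHz loses about 2 dB while 1 kHz is barely touched:
print(air_absorption_db(10000, 20))  # ~2.0 dB
print(air_absorption_db(1000, 20))   # ~0.02 dB
```

The practical effect is that distant sources sound duller, which reinforces the depth cue of the dry signal's position.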

I'm sure it does something very sophisticated, I'm mainly wondering how much more MIR 3D does than I would if I were trying to manually position a sound source. And again, this is before incorporating reverb.
As I wrote above: The idea is to replace the direct signal components with an "ideal" version derived from the input source. IOW: Sophisticated, yes, but completely transparent and logical, concept-wise.

You might also be interested in the little primer I wrote for legacy MIR Pro that covers many of the underlying ideas and concepts: -> Think MIR!

Enjoy MIR 3D! :)
 
I appreciate the detailed response! That diagram is gold! Would love to see it in the manual even.

Apologies for the very basic questions about positioning. I was trying to solicit a response in terms of things that I actually knew (Ambisonics is a little confusing to me). I'll look more at the primer.
 
Ambisonics is a little confusing to me
I totally understand that. But the good news is: you don't need to know _anything_ about it to use MIR 3D, unless you have a very good reason to deal with it directly (... which probably won't happen until you've actually figured it out already. ;) ).

MIR "thinks" in very musical, straight-forward terms that have been established since ages:
  • There's a stage in a beautiful sounding hall of your choice.
  • You invite musicians and ask them to take a seat at a suitable position that fits your taste and needs.
  • You ask the engineer to set up the main mic array at a position and in a format you intend to "record" in.
  • Add the spot mics to taste (i.e. the individual Dry/Wet ratio of each MIR instance).
  • Press play. :)
All of this is fully malleable without further technical knowledge, either directly on the stage or by means of direct numerical entry for finer detail. It helps a lot if you have a discerning ear, but Ambisonics comes in at a much, much later stage.

Enjoy MIR 3D!
 
Is it worth getting MIR if using Synchronized libraries? Is it even possible? Or is it necessary to use the VI versions? I just completed articulation maps for my Sync Dimension, Solo and Appassionata strings, so I'm suddenly using them in earnest after procrastinating for a long time about using them instead of Note Performer for general writing.
 
You can definitely use the Synchronized libraries with MIR Pro 3D. There's a setting called "MIR Unprocessed" among the drop-down list of presets in the Mixer tab.
 
Understood. ;-D ... I tried to cover this in this section of MIR's manual:

-> https://www.vsl.info/en/manuals/mir-pro-3d/getting-that-sound

The idea of the "virtual spot mic" is covered several times, too, most prominently in this section of the Instrument Settings chapter:

-> https://www.vsl.info/en/manuals/mir-pro-3d/instrument-settings#dry-signal-handling

HTH!
Yes. But I think you got what I meant ☺️
Sometimes what we think is as clear as water, someone else struggles to understand. Maybe it's because of a language barrier.
 
I’m curious how one would approach getting an outrigger-type array or wide rear mics, similar to a lot of surround mic setups. It seems like you cannot move the virtual capsules very far apart.
 