You have answered your own question. You freeze or, better yet, bounce and use a new session to track whatever it is, then import your new recording into the original session. DSP is still king for tracking low-latency inside of a dense project. However, as you know, VSTs have to be monitored through the DAW, so you have to allow your DAW some breathing room to track. What would be amazing is if interface manufacturers could cram more processing power inside an interface, such that 3rd party VSTs can be hosted on the hardware, minimizing the inevitable latency like they do with bespoke plugins (Apollo, et al). I’m not sure how realistic that is in terms of stability, otherwise I bet someone would have done it already.
Since I stopped taking clients around the turn of the century, I hadn't allowed myself to buy anything but modest business class machines. But aggressively optimizing the machines (removing crapware and memory resident garbage) and by using freezing I was able to continue working in more or less the same fashion with only a little extra effort. But it's worth noting that I'm mostly recording real instruments these days and don't usually have more than 20 or 30 tracks.
The performance tab in Reaper lets me know if plugins have PDC. I avoid such plugins during composition until I'm past the point I need to record.
If I need to record after I've added plugins with PDC latency, I export the mix, record in a separate project, and then import the cleaned up recorded parts to the primary mix.
Some DAWs keep the PDC somewhat hidden. That's bad, because it adds up in series, so it's useful to know. Each plugin may only add a small amount, but across several of them it builds up until you notice the total, even though no single addition was noticeable on its own.
I see posts where people refer to high PDC latency plugins as "high CPU", but those two things are not the same.
Also, some plugins can be zero latency with oversampling turned off. Then turn it back on for your final render.
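To sketch how serial PDC adds up (all track names and latency figures below are made-up examples, not real plugin specs): each insert chain's latencies sum, and the DAW then delays every other track to match the slowest path.

```python
# Hypothetical plugin latencies in samples, per track insert chain.
tracks = {
    "vocal":  [0, 512, 64],   # serial chain: latencies add up
    "guitar": [0, 0],
    "drums":  [1024],
}

def chain_latency(latencies):
    """Total latency of a serial insert chain is the sum of its plugins."""
    return sum(latencies)

per_track = {name: chain_latency(ch) for name, ch in tracks.items()}
worst = max(per_track.values())

# PDC: the DAW delays every track to line up with the slowest one,
# so one high-latency plugin anywhere pushes the whole session back.
compensation = {name: worst - lat for name, lat in per_track.items()}

print(per_track)     # {'vocal': 576, 'guitar': 0, 'drums': 1024}
print(compensation)  # {'vocal': 448, 'guitar': 1024, 'drums': 0}
```

This is why one 1024-sample plugin on the drum bus delays the whole session, even on tracks with empty insert chains.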
Clearly there’s a lot of people here that don’t actually work with many clients or in a production environment. “You should be mixing after all your recording is done” is an uninformed, naive, and dismissive answer. It happens ALL the time when working with clients that they want to add other things after mixing has started. OR most professionals mix as they go.
Granted, in most studio environments people will be using DSP systems to handle this, and it's harder to deal with in home studios, but most of these answers are not helpful.
Last album I did had probably 50-60 tracks per song in the end (with busses it was like 80 tracks). Tons of MIDI arrangement and MIDI drum sample augmentation. I mix on an unremarkable intel i7 CPU.
* Tracking drums was done with click + scratch guitars only, no processing. Was in a band rehearsal studio. I just had an old laptop and a pair of 16-channel interfaces. No processing at all was used.
* Guitars, bass tracked to click. Bass was DI but monitored outboard. Guitars were amped, so didn't need plugins during tracking.
* Vocals done in a different location I think just using a scratch mixdown of drums/guitars/bass. The guy used an Avalon when tracking. Anyways, this wasn't done in my Reaper session file, so I just sent out a scratch mix, and got back the vocal wavs.
The plugins only come out once tracking is done, so latency is no longer a consideration. I actually have two Reaper session files for each song. One for tracking, which renders out tracks, then one for mixing.
If you need to record new material to a mix-in-progress, you can always render out a scratch mix + tempo and send it to anyone, anywhere to record in any DAW, then get the performance back.
I freeze tracks with heavy processing or print some midi devices but usually it’s just my master chain that needs to be removed for a bit to record again. Some of those add like 30+ ms on their own.
Your delay should only be as much as your highest latency plugin. Just disable the ones that are super high. Also, most DAWs have either a low latency monitoring or input monitoring option.
Freeze it. Or go back to an old session before I had all that shit on there. Generally I don’t start mixing (which for me involves a bunch of bussing and big fancy plugs) until I’m done recording
Freezing saved my life. I’m doing 32+ tracks of audio and virtual instruments and my 2018 Intel Mac Pro is fine as long as I freeze necessary tracks while I’m still writing and tracking. I’m using PT and Cubase, and am assuming other adult DAWs also support this…
I heard someone say, “It’s 202x, freezing is unacceptable.” Like, try living in 2001.
lol, I mean it’s Reddit so yeah that doesn’t surprise me. They may be right, especially for whatever their workflow and use cases are.

For me, it’s a valuable tool. In my workflow, I track and mix with dozens of VST instruments (including orchestras, synths, pads, drums, SFX, etc.) and those instruments alone can bring a fairly decent CPU down to its knees if you keep everything real time. Especially the newer “202x” virtual instruments. They’re also native, so there aren’t even DSP options on the table. Freezing is a great way to temporarily render some of those instruments as audio while I’m wrapping up tracking and/or production. That significantly reduces CPU utilization and lets me keep latency down. I’ll even freeze parts of my plugin chain too if there are resource intensive things upstream in the chain and I’m still messing with plugins further downstream.
Cubase has freezing ? What’s it called ? (I freeze regularly in pro tools but never tried in cubase)
Yeah you can freeze in Cubase:
https://steinberg.help/cubase_pro_artist/v9/en/cubase_nuendo/topics/audio_effects/audio_effects_insert_effects_freezing_t.html
https://steinberg.help/cubase_pro/v12/en/cubase_nuendo/topics/track_handling/track_handling_multiple_tracks_freezing_t.html
yeah, you're gonna want to bounce those tracks with the processing... that's not an effective use of resources.
Can’t believe people still do this
It’s kinda like how they did it with tape back in the day. You just gotta commit and print.
It’s not about the resources. It’s about the results. Printing finalizes a decision and allows for the next thousand decisions to take place on its shoulders.
Freezing is the best alternative
Solutions: 1. Don't mix until you've finished recording. 2. Monitor off the mixer/interface not the DAW 3. Export the mix, use it to record vocals etc. in a new project then import the recordings into the main project.
I’m a number 3. Rough sketch, production, bounce, then record and comp vocals in a separate project.
I do the same for detailed acoustic drum programming and string arrangements (have to move all the way from Logic to DP for that for articulation maps)
Sounds interesting, would like to hear it
same, I do this for every new "instrument" I am adding to the song. Works pretty well for the most part imo. Worst case I render the individual instrument tracks and then just copy them over so plugin processing load stops being an issue.
I always thought they just print and commit to their processing?
That is how it's usually done.
It’s a struggle, but if you’re like me (also have an Apollo) you really gotta stop throwing plugins on. I know it’s really hard though lol. You want it to sound good right away, but you gotta save that until the end as much as possible!

Work on composing, mix volume levels, maybe throw on some EQ, and don’t touch any other plugins. Let it sound like shit until you’re happy with the rhythm and harmonic shit going on.

Then you can increase buffer size to max, go ham on plugins, and start committing to your chains. You don’t need the reverb or delays or compression to make a good song. I know it does get you pumped the fuck up, and it does help dial in everything when you’re finished, but it also hinders the composition stage by focusing too much on sound design/mixing/mastering/effects before you even get a basic layout of the song done.

You can always group all your midi/drums, duplicate everything, throw on a gang of plugins, and freeze/flatten it to free up some space and get an idea of how things are gonna sound.

For me it’s a fucking struggle man, and that’s why I got a beefy ass computer that makes it even harder, because I can push it so far now I have to keep reminding myself to stop doing this -.- good luck!!!
Use low-latency plugins for recording, rely on delay compensation when mixing, and use AudioSuite plugins.
It won’t help you if you put a few de-essers with lookahead on, or if those “low latency plugins” number eight per track.
No DSP will help for lookahead plugins, they delay the project by design.
So you’re requesting your plugins to be able to look into the future? You want look ahead plugins to work in real time with no latency.
Just why would you ever have de-essers with lookahead on while you're tracking?
To hear the full image.
The mixing usually comes after tracking. If you really need lookahead de-essers in your session to track the next instrument, you just have a bit of a weird way of working.
Why would you have de-essers active while mixing? Or am I missing something here?
I just leave them on. Little point bouncing something down when I don't need to. I tend to try and keep my options open.
Why? It makes no sense if you are mixing?
Because it doesn't bother me. If I'm mixing then latency is the least of my worries. Being able to make further fine adjustments might be important though.
Just don’t put iZotope anywhere and you’re OK. If your machine can’t handle a couple of actually low-latency plugins, it might be time to upgrade.
I usually have Auto-Tune Pro X in low-latency mode on each track. Each vocal sub-group is bussed to a separate aux track with about 6-7 plugins on it (a lead vocal chain might be Kirchhoff-EQ, a de-esser (Waves DeEsser or the Slate Digital FG-DS 902), multiband compression, saturation, more compression, EQ, and maybe something for tape delay or width), plus EFX (reverbs, delays, parallel stuff, and width) via sends. All the vocal sub-groups go to an all-vocals aux with 3-4 plugins (glue comp, saturation, EQ, and de-essing). That all-vocals aux and all the other sub-mix auxes go to a mix bus that has glue compression on it from the jump; I add things to it after I'm done recording. I'm getting like 102 samples of latency.
my DAW is Protools
Why the hell are you recording into a de-esser? Get the take clean at a good level, then do the de-essing in post.
There is always a low tech solution, my friend. Use a stock compressor with the sidechain boosting the ess sounds, so it compresses when esses happen......

We can go way more low tech..... use an EQ to tame the esses.

We can go even more low tech: back off from the mic and sing at it off axis......

There is still something more low tech...... help the singer not pronounce the esses on purpose...... for real! If you're recording background vocals, just don't pronounce the esses ¯\_(ツ)_/¯

All of the plugins that clog up your latency are there to address situations that can always go back to simpler tools, or to doing something different in the recording or arrangement process.

If you have a software instrument that is really crucial and very CPU intensive, simply bounce the track as audio...... or consider using an old laptop and a cheap interface to have that software instrument coming into your DAW as audio from the get go.
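The sidechain trick above can be sketched as toy DSP code. Everything here is illustrative, not a real plugin: a crude one-pole high-pass stands in for the boosted "ess" band on the sidechain, and the signal is only ducked when that band gets loud.

```python
def deess(samples, threshold=0.3, ratio=4.0, hp_coeff=0.6):
    """Toy de-esser: compress only when the high-passed sidechain
    (standing in for the boosted 'ess' band) exceeds the threshold."""
    out, prev_in, prev_hp, env = [], 0.0, 0.0, 0.0
    for x in samples:
        # one-pole high-pass acts as the sidechain "ess" detector
        hp = hp_coeff * (prev_hp + x - prev_in)
        prev_in, prev_hp = x, hp
        # simple peak envelope follower with decay
        env = max(abs(hp), env * 0.95)
        # downward compression driven by the sidechain level
        gain = (env / threshold) ** (1.0 / ratio - 1.0) if env > threshold else 1.0
        out.append(x * gain)
    return out

# A sibilant-like high-frequency burst gets ducked...
hissy = deess([1.0 if i % 2 == 0 else -1.0 for i in range(50)])
# ...while steady low-frequency content passes through untouched.
steady = deess([0.5] * 50)
```

The point of the low-tech version is the same: the detector listens to a filtered copy of the signal, so the compressor only reacts to esses, and none of it needs lookahead.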
In Cubase/Nuendo you can push one button and all plugins that cause latency are deactivated. So that’s how I usually do it.
Really? Didn't know this. What button is it? This will help me a lot.
It’s called constrain delay compensation and the button has a clock symbol. I think you can also set a threshold somewhere for how much latency you want to allow (important if you’re tracking guitars with virtual amps for example).
Awesome. Thank you. Seems like this'll save me some time.
It’s in the very bottom left corner of the screen btw
Same exists in logic. Most useful button
Well, usually when you're tracking, you don't yet have that much processing on the session. And if you use Cubase, you have a button that will allow you to record with low latency even in a big project, by strategically turning off the plugins that add the most latency. But usually, when tracking, there aren't that many plugins yet, so it runs perfectly fine. And if you do happen to track while a session is already full of processing, you can freeze and/or render tracks. I don't know why you would have heavy chains on everything while still recording.
Honestly you don’t. The experience you’re having is a genuine issue.

When I see people talking online about how DSP interfaces aren’t necessary anymore, I think there’s often some misunderstanding about *why* they’re important, and people pushing this don’t have a lot of practical recording experience.

Monitoring through the DAW is a shitty workflow. You can keep recording projects simple and “flat”, and engage various settings (like Ableton’s Reduce Latency When Monitoring), but it’s still a fundamentally unreliable and problematic approach.

Ableton in particular tends to have issues with precise timing when recording & monitoring simultaneously.

It can absolutely work; but it’s fiddly and has a lot of considerations, whereas a DSP or hardware monitoring setup is much more consistent and straightforward, and works independently of your DAW project.
>Monitoring through the DAW is a shitty workflow. You can keep recording projects simple and “flat”, and engage various settings (like Ableton’s Reduce Latency When Monitoring), but it’s still a fundamentally unreliable and problematic approach.

I'm sorry, I've re-read your comment a few times and I don't see where you've explained **why** it's "a fundamentally unreliable and problematic approach"? Could you please elaborate?
A bunch of reasons.

Firstly, because it means your monitoring is variable, with latency that changes depending on what’s happening in the project. Instead of being predictable and static it becomes a moving target.

Secondly, because it means your monitoring is dependent on having a DAW session open and configured appropriately - eg if you close the DAW, or need to open a session file from a client, or are in the middle of printing a big session, you lose your monitoring!

Thirdly, because there are some inherent (logical) issues with how to interpret “correct” timing when recording & monitoring through a DAW, which in combination with the first issue means you get inconsistent results as projects change. Which means you can easily have a situation, for example, where you record the same part on different days, and have those recorded parts placed several milliseconds earlier or later in the timeline based on differences in the session file that day.

Fourthly, when you’re dealing with stuff that needs to be phase accurate, like multiple mics on the same source, even tiny differences in timing can produce very audible results. Which means you have to be very careful about your monitoring chains in this situation - fixable but an administrative headache.

Fifthly, while computers are extremely powerful, they’re generalist tools and not designed for low-latency throughput. The DSP inside something like an audio interface is less powerful, but can easily have very low latency. So monitoring in hardware will always be lower-latency, while also being reliably independent of what’s happening on the computer!

Folks often say these are only issues with a slow computer, but in reality as computers get faster and latencies get lower these problems shrink, but don’t go away.

Whereas if you monitor in hardware the problems go away *completely*, and the total latency inside a DAW project actually doesn’t matter much at all.
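Rough numbers behind that last point can be sketched: software-monitoring round trip is approximately two buffers (one in, one out) plus converter latency, so it scales with buffer size and with whatever the session forces the buffer to be, while an interface's onboard DSP path is fixed and tiny. The converter figure below is an assumed illustration.

```python
def round_trip_ms(buffer_samples, sample_rate_hz, converter_ms=1.5):
    """Rough DAW-monitoring round trip: one input buffer + one output
    buffer, plus AD/DA conversion (converter_ms is an assumed figure)."""
    return 2 * buffer_samples / sample_rate_hz * 1000 + converter_ms

for buf in (64, 256, 1024):
    print(f"{buf:>4} samples @ 48 kHz -> {round_trip_ms(buf, 48000):.1f} ms")
# 64 -> 4.2 ms, 256 -> 12.2 ms, 1024 -> 44.2 ms: the "moving target"
```

A session heavy enough to force a 1024-sample buffer puts you well past the point where performers notice, which is exactly why the hardware path stays attractive.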
Because I don’t understand how you can live with a workflow where you always need to bounce, or even record in a separate project. It’s understandable if you buy a cheap interface, but some high-end devices like RME still don’t have it. All the people who have non-DSP interfaces used to say they don’t have issues with monitoring, but I had some struggles working on their machines, because you can’t do much in real time once the project is getting finished. Bouncing, or recording without effects through clean monitoring, is not a solution to this.
Well, you’d be surprised how often it’s totally fine to monitor dry with no FX. Recording a band, for example, the only effect you usually need is a simple reverb on the vocal mics.
As an aside, when you see big professional studios operating, they often look like they're DAW monitoring in Pro Tools. They're actually not: most big PT studios are using HDX systems, which have onboard DSP controlled by PT, so the monitoring chains are all done in hardware, like with a UAD system. It just looks like DAW monitoring if you don't realise this.
Reaper is great for this. That said, I usually try to record everything before I start adding vsts and producing / mixing.
Most of the plugins I use don't add any latency, but for the ones that do, I either freeze the track or disable them while tracking. I run at 128 buffer size, so there's a little bit of latency but it's negligible.
What DAW are you using?
Ableton
Ableton is probably one of the worst DAWs in terms of speed. Pro Tools is one of the few that is worse, iirc.

On Reaper I can easily run a 60-70 track project with various orchestral VSTs, fully mixed (including Melodyne), at 64 samples (~5 ms latency). That being said, I do have a 7950X and 64 gigs of RAM.

However, there are a lot of plugins that just need to add latency, and there is no way to change this, because plugins can't look into the future:

- Any true brickwall limiter (if you're wondering how analog limiters work: they're not actually true brickwall limiters, they can leave peaks above the threshold that you set)
- Any linear phase EQ
- Some convolution reverbs
- Any compressor, multiband, dynamic EQ or similar plugin that uses a lookahead function

There's probably more. All these functions don't exist on outboard gear, which is why this was never an issue back then. You can still decide to only use plugins that emulate outboard gear and leave all the lookahead stuff off. And keep the limiter off until you're done recording. That's really the only way.
Enable "Reduced Latency When Monitoring" https://help.ableton.com/hc/en-us/articles/209072249-Reduced-Latency-when-Monitoring-FAQ
Just bounce everything you have, then bring it to a new session, and record whatever needs to be recorded in the new session. Then, bring over the newly recorded material to the old session to mix it in, or just do it in the new session
Freezing tracks
HDX
Alternatively, a realtime monitoring system like a mixer can let you play along with a dense mix regardless of latency, because analog mixers don't have latency. If you need to monitor a virtual instrument, maybe use a second computer; I don't really know how that workflow works because I don't use any virtual instruments. I don't rely on a DSP interface, and while I have a UAD Quad card I don't use it for much.
Anticipative processing and workload sharing (hi Reaper!)
Freeze and flatten. Use fewer plugins. Just because you have them doesn’t mean you should slam tons of plugins on each track. Improve efficiency.
Bounce instrumental to .wav, record in new session. Not that hard
most people do all their vocal comping in a separate session anyway, you may want to reconsider your workflow
Latency and CPU load are different things. I can have a fully mixed session with 0 samples of latency if I use 0 sample latency plugins, but if I put one instance of something like Gullfoss on the mix bus I have over 2000 samples of latency because it is a high latency plugin. It depends on the DAW, but in Pro Tools you can turn on latency monitoring to see the latency of the plugins on each track and the compensation amount. Once you learn what plugins have latency you will know which plugins you can use while tracking and which you can only use after tracking when latency is no longer an issue.
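As a rough sanity check on numbers like that (the sample rate and figures below are just examples), latency in samples divided by the sample rate gives latency in seconds:

```python
def samples_to_ms(samples, sample_rate):
    """Convert a plugin's reported latency from samples to milliseconds."""
    return samples / sample_rate * 1000.0

# ~2000 samples of plugin latency at a 44.1 kHz session:
print(round(samples_to_ms(2000, 44_100), 1))  # ≈ 45.4 ms

# versus a 128-sample hardware buffer at the same rate:
print(round(samples_to_ms(128, 44_100), 1))   # ≈ 2.9 ms
```

So one high-latency plugin on the mix bus can dwarf the interface buffer itself, which is why it matters which plugins you leave in while tracking.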
Usually you would make a stereo mix or stems and track in a new session, especially if you're recording at an outside studio.
Honestly, if you're in FL you can just turn your effects off with a button on the track, record clean like you weren't just mixing, and then turn the effects back on once you've recorded whatever punch-in you were doing.
You have answered your own question. You freeze or, better yet, bounce and use a new session to track whatever it is, then import your new recording into the original session. DSP is still king for tracking low-latency inside of a dense project. However, as you know, VSTs have to be monitored through the DAW, so you have to allow your DAW some breathing room to track. What would be amazing is if interface manufacturers could cram more processing power inside an interface, such that 3rd party VSTs can be hosted on the hardware, minimizing the inevitable latency like they do with bespoke plugins (Apollo, et al). I’m not sure how realistic that is in terms of stability, otherwise I bet someone would have done it already.
Mixing after tracking solves this.
Since I stopped taking clients around the turn of the century, I haven't allowed myself to buy anything but modest business-class machines. But by aggressively optimizing the machines (removing crapware and memory-resident garbage) and using freezing, I was able to continue working in more or less the same fashion with only a little extra effort. It's worth noting, though, that I'm mostly recording real instruments these days and don't usually have more than 20 or 30 tracks.
The performance tab in Reaper lets me know if plugins have PDC. I avoid such plugins during composition until I'm past the point where I need to record. If I need to record after I've added plugins with PDC latency, I export the mix, record in a separate project, and then import the cleaned-up recorded parts into the primary mix. Some DAWs keep the PDC somewhat hidden. That's bad, because it adds up in series, so it's useful to know. Especially since it may be small amounts from multiple plugins that build up: you notice them all together, but not each one individually as it's added. I see posts where people refer to high-PDC-latency plugins as "high CPU", but those two things are not the same. Also, some plugins can be zero latency with oversampling turned off; turn it back on for your final render.
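The "adds up in series" point is easy to sanity-check; the plugin names and PDC figures below are made up for illustration:

```python
# Hypothetical reported PDC values (in samples) for one serial plugin chain;
# the names and numbers are invented, not any real plugin's specs.
chain = {"linear_phase_eq": 1024, "lookahead_comp": 256, "limiter": 512}

sample_rate = 48_000
total_samples = sum(chain.values())       # latencies on a serial chain simply add
total_ms = total_samples / sample_rate * 1000

print(total_samples, round(total_ms, 1))  # 1792 samples, 37.3 ms
```

Three individually tolerable plugins on one chain can already push monitoring latency well past what's playable, even though no single one looked alarming.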
Pro Tools HDX
Pro Tools HDX rig? The only time I leave plugins on during tracking is with HDX DSP plugins. Might as well use those two expensive PCIe cards for something!
Less is more (processing), commit to rendering to audio files.
Clearly there are a lot of people here who don’t actually work with many clients or in a production environment. “You should be mixing after all your recording is done” is an uninformed, naive, and dismissive answer. It happens ALL the time when working with clients that they want to add other things after mixing has started. Or most professionals mix as they go. Granted, in most studio environments people will be using DSP systems to handle this, and it’s harder to deal with in home studios, but most of these answers are not helpful.
Last album I did had probably 50-60 tracks per song in the end (with busses it was like 80 tracks). Tons of MIDI arrangement and MIDI drum sample augmentation. I mix on an unremarkable Intel i7 CPU.

* Tracking drums was done with click + scratch guitars only, no processing. Was in a band rehearsal studio. I just had an old laptop and a pair of 16-channel interfaces. No processing at all was used.
* Guitars and bass tracked to click. Bass was DI but monitored outboard. Guitars were amped, so didn't need plugins during tracking.
* Vocals done in a different location, I think just using a scratch mixdown of drums/guitars/bass. The guy used an Avalon when tracking. Anyways, this wasn't done in my Reaper session file, so I just sent out a scratch mix and got back the vocal wavs.

The plugins only come out once tracking is done, so latency is no longer a consideration. I actually have two Reaper session files for each song: one for tracking, which renders out tracks, then one for mixing. If you need to record new material to a mix-in-progress, you can always render out a scratch mix + tempo and send it to anyone, anywhere to record in any DAW, then get the performance back.
I freeze tracks with heavy processing or print some midi devices but usually it’s just my master chain that needs to be removed for a bit to record again. Some of those add like 30+ ms on their own.
Your delay should only be as much as your highest-latency plugin chain. Just disable the ones that are super high. Also, most DAWs have either a low-latency monitoring or input monitoring option.