Latency refers to a short period of delay (usually measured in milliseconds) between when an audio signal enters a system and when it emerges.
When recording a vocal or an instrument, latency refers to the time delay that you hear between the live performance of the musician or vocalist being recorded and that recording playing back from the hard drive through monitors or headphones.
So, what are you doing that could give you a latency problem in MEP? Are you recording audio into MEP and getting a delay between what you are saying/singing/playing and what you are hearing in the headphones if you are monitoring?
Usually, latency is a problem in an audio program during recording, not in a video program when you are not recording audio.
Do you have an external audio interface card? I use an M-Audio M-Track.
You can try reducing the buffer size in the sound card's software and/or in MEP. If the buffer is too small it can cause glitches, and if it's too large it can cause audible latency, so you'll have to experiment to find settings that work well with your computer system. The more powerful the system and the better the sound card, the less trouble you'll have with this buffer size issue. Usually, the default buffer size in MEP should be left alone.
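As a rough rule of thumb (my assumption, not anything from the MEP documentation), the delay contributed by one audio buffer is its size in samples divided by the sample rate. A quick sketch of why small buffers feel snappier:

```python
# Rough sketch of buffer-size latency arithmetic.
# buffer_size is in samples, sample_rate in samples per second;
# the values below are illustrative, not MEP defaults.

def buffer_latency_ms(buffer_size: int, sample_rate: int) -> float:
    """Time (in milliseconds) it takes to fill one buffer of audio."""
    return buffer_size / sample_rate * 1000

# A small buffer keeps the delay short but risks glitches...
print(buffer_latency_ms(128, 44100))   # ~2.9 ms
# ...while a large buffer is safer but audibly laggy.
print(buffer_latency_ms(4096, 44100))  # ~92.9 ms
```

This is why there is no single right answer: the smallest buffer your system can service without glitching is the best one for you.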
I'm going to deviate from John CB's reply and say it is a bit more complex than that, in my experience, due to the way video editors seem to work. I have all but given up on live mixing within MEP or VPX because of latency issues in real-world projects consisting of multiple tracks of audio.
I would break the causes of latency down into separate parts within a project, but all of it comes down to data throughput.
First the project.
The audio must stay in sync with the video footage, so the less compression in the video file used, the less decoding has to happen for any given track, which frees up more CPU cycles per second. There is a limit to that, though: the less compression there is, the faster the data needs to be read, perhaps stressing other parts of the system.
Video resolution. Again, the smaller the file, the less data has to be processed.
The same goes for the number of tracks running at the same time.
Then the system specs. The better the components, the more data they can push through. The system is only as good as its weakest link.
Program settings. Sometimes proxy files are helpful: they contain less data, and they are designed not only to keep playback running more smoothly but also to reduce latency, since less data has to be processed between an action in the mixer and what you see on screen. Deactivating all the effects and lowering the frame rate can help as well, again because less data has to be processed at once. Soloing tracks may also help, but personally I find that of very limited use without the accompanying tracks to tell me why I'm doing what I'm doing at any given point in a mix.
Last, as John CB mentioned, buffer numbers.
The sample rate doesn't seem to have as much effect, though lowering it does have some benefits; mainly it is the number of buffers that seems to make the most difference, at least on my system using the Wave driver for the onboard sound chip. John, on the other hand, has his M-Audio M-Track with, I presume, ASIO zero-latency drivers, so I would be interested if he could put up a video similar to the one at the bottom of this reply to show what I have to endure.
The problem, as John also partially pointed out, is that lowering the buffer number can cause problems in the audio: anything from bad noises to dropped audio. The only solution at that point, as the channel count increases, is to increase the number of buffers and sometimes the sample rate as well. This can push latency well beyond a second and a half, depending on the processing speed of the components involved. A really high-end system with fast everything may not suffer much latency at all, while a system with just one weak link in the chain of components may suffer a lot.
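My assumption here is that queued buffers add up, so the worst-case delay is roughly the buffer count times the buffer size, divided by the sample rate. A sketch with made-up figures (not actual MEP settings) shows how quickly that approaches the second-and-a-half territory I mentioned:

```python
# Sketch of how buffer count multiplies latency. The formula and the
# numbers are my own illustration, not measured MEP/VPX behaviour.

def total_latency_s(num_buffers: int, buffer_size: int, sample_rate: int) -> float:
    """Worst-case delay when num_buffers full buffers sit ahead of playback."""
    return num_buffers * buffer_size / sample_rate

# A modest setup: 4 buffers of 512 samples at 44.1 kHz.
print(total_latency_s(4, 512, 44100))    # ~0.046 s
# Pushed up to survive a heavy multitrack project:
print(total_latency_s(16, 4096, 44100))  # ~1.49 s, i.e. about a second and a half
```

The point is that every extra buffer you add to stop the glitches also adds its full duration to the delay you hear.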
So why does it happen?
Hard drives have their own buffer limits when accessing large files, as do SSDs and NVMe drives. The information on the drives may not be contiguous but spread around, increasing read and write times. That data is then fed to the system RAM, where it is read as needed and then dropped to be replaced by the next batch. Faster RAM, and more of it, can help, but only up to a point.
The video still has to be processed as well and kept in sync with all those tracks. So when the system starts handling a lot of data, the time between pressing the start or stop button and the program reacting can vary depending on all of the above. You can sometimes see this as the data is loaded: the blue bar moves rapidly (or not) across the bottom of the program, and playback starts as the line reaches the end. If you run Task Manager at the same time, you can also monitor RAM, VRAM, GPU use, CPU use and disk read/write speeds as data is being written and read. The video you see has already happened; the audio has been delayed to stay in sync with what is on the screen. Therefore any slider movement you make is written at the point you see on screen, not at the point where the audio should be. That is the latency you see between your action on a control and what you then hear, delayed.
The upshot is that each and every project may need to be fine-tuned to get the best latency, and that latency will change as more is added to the project, requiring further adjustments to the settings.
I would be very interested to see a similar video from John CB with his M-Audio M-Track to see if that improves things, as it has been a while since I bothered to buy an expensive additional sound card. In the past, such things have been made obsolete by newer versions of Windows. I have a good collection of obsolete sound cards, many of which cost me several hundred pounds, that are now not much more useful than a used teabag. The onboard chips are now almost as good sound-wise, unless you go really esoteric or need additional input/output configurations, or perhaps a dedicated specialist input to, say, use a phantom-powered microphone.
Just notice the delay in the audio compared to the action of the controls of the mixer for each setting.
I think that this will confuse the OP. I would like to know what he thinks latency means. Normally, it is as I defined it: when recording, it is the lag between what you sing/play while recording and when you hear it in the monitor.
What you've demonstrated is a lag between moving the faders and hearing the result; recording is not being done. Not the same thing.
Did you try with automation on? If so, when you've done the automation and played it back, is the automation lagging because of "latency" or because of your reaction time? You can't, for example, reduce the volume to 0 when you hear something that you don't want and expect the change to land at the start of the passage you don't want to hear. You heard it, you reacted, but too late.
I realize that this is not the same as what you showed, but let's stick to the definition of latency as being for recording/monitoring, not for modifying the volume sliders, until such time as the OP comes back.
I think that he is having difficulty understanding what the tools, MEP and an audio editor, do. He was trying to edit the video in Sound Forge Pro, and is mentioning latency (presumably for audio recording/monitoring) in the video editor. He should be doing the inverse.
If you can't move the faders or pan pots or whatever when you need to, meaning at the point where you are hearing and seeing what is going on, then it is impossible to mix accurately to video. Take, for instance, sound effects moving across the scene: they will always be out of position relative to the action. I know; I have been complaining about this for a long time, which is why I won't mix audio on some projects in MEP or VPX but use my DAW instead.
Not that any video editing package is any better. The correct tool for the correct job.
My sound effects are always with the action. You have to put the sfx in the right place, listen, correct, listen, correct.
I use the waveform and listening, along with the visuals (what is showing), for editing, not automation. As I said, by the time you react, the moment is already gone anyway, so you have to edit manually.
I agree that the delay between moving a fader and hearing the result is too long. But if you adjust/draw curves, there is no delay for an effect such as a volume change, and the result is correct.
Also, effects on audio objects are very limited. MEP has automation only for Volume and Pan on tracks, excluding the FX track. There is no automation for other effects. You can draw/adjust the curves, but not during playback.
Again, this is a different problem from the definition of latency.
Apart from the fact that we have very different views on live-mixing abilities (assuming the software is capable), I am answering the question as the post is titled: BEST SETTINGS FOR LATENCY? I am trying to explain that it varies depending on the load produced by the project content relative to the capability of a given system. What works for one person may not work for another. In effect, there is no best setting.