Volume and processing of voice tracks

derek

New member
Would it be practical to create a smart duck option for the voice tracks so that the RMS of the music track is attenuated to that of the voice track?

This will make loud songs drop considerably in volume, but have a very low impact on quieter songs.

Another approach would be to get the rms of the track, and duck it to a specific value. This could also help balance tracks in a live talkover situation.
This should work, because the same scan used by autoamp would probably give you the necessary data to accomplish this.

Speaking of processing, what would it take to run a voice track through the same DSP available when using the input live?

Thank you for your consideration
 
Would it be practical to create a smart duck option for the voice tracks so that the RMS of the music track is attenuated to that of the voice track?
Currently you can configure the amount of ducking, but it's fixed for all music tracks.

This will make loud songs drop considerably in volume, but have a very low impact on quieter songs.
Typically, the songs are normalized to play at the same level. Having songs at different volume levels will sound odd, and not many stations work like this.
 
I agree to a point. The output of a radio station has a constant volume, but the incoming audio could have a very wide dynamic range, and the processing takes care of that. Also, if I am talking over a transition where the first song is fading out, and the second song is very loud at the beginning, I would only duck the second song until I had finished speaking. I wouldn't want to change the volume of the track, or have it fade in because there may be points in the future where that song needs to start at full volume.
 
Here is an example. The first track fades while I talk, but the second track is much louder and so is the only thing ducked. If every track had been ducked the same way, the fade would have had a much more automated feel about it.
 

Attachments

  • example.zip
    1.7 MB
I guess this wouldn't be the case if the tracks were normalized beforehand; that way they would have played at the same volume and ducking would affect them equally.
 
I guess this wouldn't be the case if the tracks were normalized beforehand; that way they would have played at the same volume and ducking would affect them equally.
The problem is that the end of track 1 is very quiet, as the song is fading away. Track 2 is quite hot from the start. Ducking material that is already fading out on its own makes for a choppy transition.
I've been giving this more thought, and I think I have a simpler way of adding a "smart duck" option. This feature would intelligently determine whether volume reduction is necessary. If the volume of the fading material is already below a user-defined threshold, no ducking would occur. However, when transitioning to the next track, the smart ducking function would assess the volume level of the intro. If it significantly exceeds the microphone's attenuation level, only the intro would undergo volume reduction. This approach leverages data you already have because of the crossfading and mixing functions.
 
Thank you for the suggestion and detailed explanation. I'll add it to the list of potential improvements. At this point I can't tell you when/if this will be implemented.
 