Haoming Guo and Gerald Friedland and Gopala Krishna Anumanchipalli

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2023-183

May 19, 2023

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-183.pdf

Vocoder models have recently achieved substantial progress in generating authentic audio comparable to human quality while significantly reducing memory requirements and inference time. However, these data-hungry generative models require large-scale audio data to learn good representations. In this paper, we apply contrastive learning methods to vocoder training to improve the vocoder's perceptual quality without modifying its architecture or adding more data. We design an auxiliary task with mel-spectrogram contrastive learning to enhance the utterance-level quality of the vocoder model under data-limited conditions. We also extend the task to include waveforms to improve the model's multi-modality comprehension and address the discriminator overfitting problem. We optimize the additional task simultaneously with the GAN training objectives. Our results show that the tasks improve model performance substantially in data-limited settings. Our analysis indicates that the proposed design successfully alleviates discriminator overfitting and produces audio of higher fidelity.
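As a rough illustration of the approach summarized above, the sketch below pairs an InfoNCE-style contrastive loss over mel-spectrogram and waveform embeddings with a standard GAN generator objective, so the auxiliary task is optimized alongside the adversarial loss. The encoder architectures, the LSGAN-style adversarial term, and the weight lambda_contrastive are illustrative assumptions and not the report's exact configuration.

# Hypothetical sketch: InfoNCE contrastive loss between mel-spectrogram and
# waveform embeddings, added to a GAN generator objective. Encoders, the
# adversarial term, and lambda_contrastive are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MelEncoder(nn.Module):
    """Toy encoder mapping a mel-spectrogram (B, n_mels, T) to an embedding."""
    def __init__(self, n_mels=80, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time -> utterance-level embedding
        )
    def forward(self, mel):
        return self.net(mel).squeeze(-1)  # (B, dim)

class WaveEncoder(nn.Module):
    """Toy encoder mapping a waveform (B, 1, samples) to an embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, dim, kernel_size=16, stride=8),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
    def forward(self, wav):
        return self.net(wav).squeeze(-1)  # (B, dim)

def info_nce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE: matching (mel, waveform) pairs in a batch are
    positives; all other pairings serve as negatives."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Combining the auxiliary task with the GAN objective (illustrative).
mel_enc, wav_enc = MelEncoder(), WaveEncoder()
lambda_contrastive = 1.0  # assumed auxiliary-loss weight

def generator_loss(mel, fake_wav, disc_fake_score):
    adv_loss = torch.mean((disc_fake_score - 1.0) ** 2)       # LSGAN-style term
    contrastive = info_nce(mel_enc(mel), wav_enc(fake_wav))   # auxiliary task
    return adv_loss + lambda_contrastive * contrastive

# Example usage with random tensors (batch of 4 utterances):
#   mel = torch.randn(4, 80, 100); fake_wav = torch.randn(4, 1, 8000)
#   loss = generator_loss(mel, fake_wav, torch.randn(4, 1))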

Advisor: Gerald Friedland


BibTeX citation:

@mastersthesis{Guo:EECS-2023-183,
    Author= {Guo, Haoming and Friedland, Gerald and Anumanchipalli, Gopala Krishna},
    Title= {Enhancing GAN-based Vocoders with Contrastive Learning},
    School= {EECS Department, University of California, Berkeley},
    Year= {2023},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-183.html},
    Number= {UCB/EECS-2023-183},
    Abstract= {Vocoder models have recently achieved substantial progress in generating authentic audio comparable to human quality while significantly reducing memory requirements and inference time. However, these data-hungry generative models require large-scale audio data to learn good representations. In this paper, we apply contrastive learning methods to vocoder training to improve the vocoder's perceptual quality without modifying its architecture or adding more data. We design an auxiliary task with mel-spectrogram contrastive learning to enhance the utterance-level quality of the vocoder model under data-limited conditions. We also extend the task to include waveforms to improve the model's multi-modality comprehension and address the discriminator overfitting problem. We optimize the additional task simultaneously with the GAN training objectives. Our results show that the tasks improve model performance substantially in data-limited settings. Our analysis indicates that the proposed design successfully alleviates discriminator overfitting and produces audio of higher fidelity.},
}

EndNote citation:

%0 Thesis
%A Guo, Haoming 
%A Friedland, Gerald 
%A Anumanchipalli, Gopala Krishna 
%T Enhancing GAN-based Vocoders with Contrastive Learning
%I EECS Department, University of California, Berkeley
%D 2023
%8 May 19
%@ UCB/EECS-2023-183
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-183.html
%F Guo:EECS-2023-183