Learning Self-Supervised Representations of Code Functionality

Paras Jain, Ajay Jain, Tianjun Zhang, Pieter Abbeel, Joseph Gonzalez and Ion Stoica

EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2021-62
May 13, 2021

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-62.pdf

Recent work learns contextual representations of source code by reconstructing tokens from their context. For downstream semantic understanding tasks like summarizing code in English, these representations should ideally capture program functionality. However, we show that the popular reconstruction-based BERT model is sensitive to source code edits, even when the edits preserve semantics. We propose ContraCode: a contrastive pre-training task that learns code functionality, not form. ContraCode pre-trains a neural network to identify functionally similar variants of a program among many non-equivalent distractors. We scalably generate these variants using an automated source-to-source compiler as a form of data augmentation. Contrastive pre-training improves JavaScript summarization and TypeScript type inference accuracy by 2% to 13%. We also propose a new zero-shot JavaScript code clone detection dataset, showing that ContraCode is both more robust and semantically meaningful. On it, we outperform RoBERTa by 39% AUROC in an adversarial setting and up to 5% on natural code.
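To make the contrastive pre-training task concrete, the following is a minimal sketch of an InfoNCE-style objective over embeddings of compiler-generated program variants, written in Python with PyTorch. It is illustrative only, not the report's implementation: the function name info_nce_loss, the encoder call in the usage note, and the temperature value are assumptions.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(anchor_emb, positive_emb, temperature=0.07):
        """Contrastive (InfoNCE-style) objective over a batch of program embeddings.

        anchor_emb[i] and positive_emb[i] encode two functionally equivalent
        variants of program i (e.g., produced by a source-to-source compiler);
        all other programs in the batch act as non-equivalent distractors.
        """
        # L2-normalize so dot products become cosine similarities.
        anchor = F.normalize(anchor_emb, dim=1)
        positive = F.normalize(positive_emb, dim=1)

        # Pairwise similarities: the diagonal holds the equivalent pairs,
        # off-diagonal entries compare variants of different programs.
        logits = anchor @ positive.t() / temperature

        # Each anchor must identify its own functional equivalent (index i)
        # among the distractors, via a standard cross-entropy loss.
        targets = torch.arange(anchor.size(0), device=anchor.device)
        return F.cross_entropy(logits, targets)

    # Usage (assuming a shared encoder network producing fixed-size embeddings):
    # loss = info_nce_loss(encoder(batch_view_1), encoder(batch_view_2))

Minimizing this loss pushes representations of semantically equivalent variants together while pushing apart representations of unrelated programs, which is the intuition behind learning functionality rather than surface form.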

Advisors: Ion Stoica and Joseph Gonzalez


BibTeX citation:

@mastersthesis{Jain:EECS-2021-62,
    Author = {Jain, Paras and Jain, Ajay and Zhang, Tianjun and Abbeel, Pieter and Gonzalez, Joseph and Stoica, Ion},
    Title = {Learning Self-Supervised Representations of Code Functionality},
    School = {EECS Department, University of California, Berkeley},
    Year = {2021},
    Month = {May},
    URL = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-62.html},
    Number = {UCB/EECS-2021-62},
    Abstract = {Recent work learns contextual representations of source code by reconstructing tokens from their context. For downstream semantic understanding tasks like summarizing code in English, these representations should ideally capture program functionality. However, we show that the popular reconstruction-based BERT model is sensitive to source code edits, even when the edits preserve semantics. We propose ContraCode: a contrastive pre-training task that learns code functionality, not form. ContraCode pre-trains a neural network to identify functionally similar variants of a program among many non-equivalent distractors. We scalably generate these variants using an automated source-to-source compiler as a form of data augmentation. Contrastive pre-training improves JavaScript summarization and TypeScript type inference accuracy by 2% to 13%. We also propose a new zero-shot JavaScript code clone detection dataset, showing that ContraCode is both more robust and semantically meaningful. On it, we outperform RoBERTa by 39% AUROC in an adversarial setting and up to 5% on natural code.}
}

EndNote citation:

%0 Thesis
%A Jain, Paras
%A Jain, Ajay
%A Zhang, Tianjun
%A Abbeel, Pieter
%A Gonzalez, Joseph
%A Stoica, Ion
%T Learning Self-Supervised Representations of Code Functionality
%I EECS Department, University of California, Berkeley
%D 2021
%8 May 13
%@ UCB/EECS-2021-62
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-62.html
%F Jain:EECS-2021-62