CLSEBERT: Contrastive Learning for Syntax Enhanced Code Pre-Trained Model
Xin Wang ¹, Yasheng Wang ², Pingyi Zhou ², Meng Xiao ², Yadao Wang ², Li Li ³, Xiao Liu ⁴, Hao Wu ⁵, Jin Liu ¹, Xin Jiang ²
¹ School of Computer Science, Wuhan University
² Noah's Ark Lab, Huawei
³ Faculty of Information Technology, Monash University
⁴ School of Information Technology, Deakin University
⁵ School of Information Science and Engineering, Yunnan University
arXiv, 10 August 2021
Abstract

Pre-trained models for programming languages have proven their significant value in various code-related tasks, such as code search, code clone detection, and code translation. Currently, most pre-trained models treat a code snippet as a sequence of tokens or focus only on the data flow between code identifiers.

However, the rich syntax and hierarchy of code are ignored, even though they provide important structural information and semantic rules that can help enhance code representations. In addition, although BERT-based code pre-trained models achieve high performance on many downstream tasks, the sequence representations natively derived from BERT have been shown to be of low quality, so they perform poorly on code matching and similarity tasks.

To address these problems, we propose CLSEBERT, a Contrastive Learning Framework for Syntax Enhanced Code Pre-Trained Model, to deal with various code intelligence tasks. In the pre-training stage, we consider the code syntax and hierarchy contained in the Abstract Syntax Tree (AST) and leverage contrastive learning to learn noise-invariant code representations. Besides masked language modeling (MLM), we also introduce two novel pre-training objectives: one predicts the edges between nodes in the abstract syntax tree, and the other predicts the types of code tokens. Through extensive experiments on four code intelligence tasks, we demonstrate the effectiveness of our proposed model.
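The abstract does not give formulas for these objectives. As an illustration only, below is a minimal sketch of an InfoNCE-style contrastive loss over two noisy views of the same code snippet, the kind of objective commonly used to learn noise-invariant representations; the encoder, batch size, and temperature here are assumptions for the example, not details taken from the paper.

# Minimal sketch (not the authors' implementation): InfoNCE-style
# contrastive loss over two augmented views of a batch of code snippets.
import torch
import torch.nn.functional as F

def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    # z1, z2: [batch, dim] embeddings of two noisy views of the same code snippets
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                      # pairwise cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)    # matching views sit on the diagonal
    return F.cross_entropy(logits, targets)

# Usage with a hypothetical encoder output: batch of 8, hidden size 768
loss = contrastive_loss(torch.randn(8, 768), torch.randn(8, 768))

The AST edge prediction and token type prediction objectives would, under the same reading, amount to classification heads over pairs of AST node representations and over individual token representations, respectively, trained jointly with MLM and the contrastive loss.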