CLSEBERT: Contrastive Learning for Syntax Enhanced Code Pre-Trained Model
Xin Wang ¹, Yasheng Wang ², Pingyi Zhou ², Meng Xiao ², Yadao Wang ², Li Li ³, Xiao Liu ⁴, Hao Wu ⁵, Jin Liu ¹, Xin Jiang ²
¹ School of Computer Science, Wuhan University
² Noah's Ark Lab, Huawei
³ Faculty of Information Technology, Monash University
⁴ School of Information Technology, Deakin University
⁵ School of Information Science and Engineering, Yunnan University
arXiv, 10 August 2021
Abstract

Pre-trained models for programming languages have proven their significant value in various code-related tasks, such as code search, code clone detection, and code translation. Currently, most pre-trained models treat a code snippet as a sequence of tokens or focus only on the data flow between code identifiers.

However, the rich syntax and hierarchy of code, which provide important structural information and semantic rules that could help enhance code representations, are ignored. In addition, although BERT-based code pre-trained models achieve high performance on many downstream tasks, the sequence representations natively derived from BERT have been shown to be of low quality, and they perform poorly on code matching and similarity tasks.

To address these problems, we propose CLSEBERT, a Contrastive Learning Framework for Syntax Enhanced Code Pre-Trained Model, to deal with various code intelligence tasks. In the pre-training stage, we consider the code syntax and hierarchy contained in the Abstract Syntax Tree (AST) and leverage contrastive learning to learn noise-invariant code representations. Besides masked language modeling (MLM), we introduce two novel pre-training objectives: one predicts the edges between nodes in the abstract syntax tree, and the other predicts the types of code tokens. Extensive experiments on four code intelligence tasks demonstrate the effectiveness of our proposed model.
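To illustrate the contrastive objective described above, the following is a minimal sketch of how noise-invariant representations can be learned from two noisy views of the same code snippet. It assumes a PyTorch encoder and an InfoNCE-style formulation; the function name info_nce_loss, the temperature value, and the choice of loss are illustrative assumptions, not details given in this abstract.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(z1, z2, temperature=0.07):
        # z1[i] and z2[i] are encoder outputs for two noisy views
        # (e.g. differently masked versions) of the same code snippet.
        z1 = F.normalize(z1, dim=-1)
        z2 = F.normalize(z2, dim=-1)
        logits = z1 @ z2.t() / temperature                    # pairwise cosine similarities
        labels = torch.arange(z1.size(0), device=z1.device)  # positives lie on the diagonal
        return F.cross_entropy(logits, labels)

    # Toy usage: a batch of 8 snippets with 256-dimensional representations per view.
    view_a = torch.randn(8, 256)
    view_b = torch.randn(8, 256)
    print(info_nce_loss(view_a, view_b))

In this formulation, matched views of the same snippet are pulled together while all other snippets in the batch serve as negatives, which encourages representations that are invariant to the injected noise.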