%0 Journal Article %A FU Qun-chao %A WANG Cong %T Based on Multiple Probing Tasks Fine-Tuning of Language Models for Text Classification %D 2019 %R 10.13190/j.jbupt.2019-149 %J Journal of Beijing University of Posts and Telecommunications %P 76-83 %V 42 %N 6 %X Pre-trained language models are widely used in many natural language processing tasks, but they are not fine-tuned for specific tasks. Therefore, for the text classification task, the authors propose a method of fine-tuning language models based on probing tasks, which trains the model with the specific linguistic knowledge captured by the probing tasks and thereby improves its performance on text classification. Six probing tasks are designed to cover the surface, syntactic, and semantic information of sentences. The method is validated on six text classification datasets, and the classification error rate is reduced. %U https://journal.bupt.edu.cn/EN/10.13190/j.jbupt.2019-149