Table 8 Precision (%), Recall (%) and F1-score (%) of different pre-trained language models

From: Joint extraction of Chinese medical entities and relations based on RoBERTa and single-module global pointer

Pre-trained Language Model   Prec.   Rec.    F1
RoBERTa-wwm                  68.32   58.62   63.10
BERT-wwm                     67.48   57.82   62.28
ERNIE                        65.22   56.18   60.36
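
For reference, the F1 column is the harmonic mean of precision and recall. A minimal sketch (not from the paper) that reproduces the RoBERTa-wwm row from its Prec. and Rec. values:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (both given in percent)."""
    return 2 * precision * recall / (precision + recall)

# RoBERTa-wwm row of Table 8: Prec. 68.32, Rec. 58.62 -> F1 ~ 63.10
print(round(f1_score(68.32, 58.62), 2))  # 63.1
```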