Table 6 Results of each method concatenated with our pretrained BERT. MLM stands for masked language model and NSP for next sentence prediction. In this table, the input disease sequence length is set to 75. The first five metrics (AUC-ROC, accuracy, precision, recall, F1) correspond to the task of predicting the appearance of the target disease

From: al-BERT: a semi-supervised denoising technique for disease prediction

Model         AUC-ROC   Accuracy   Precision   Recall    F1        MLM       NSP
Med-BERT      0.9282    0.8591     0.8348      0.6329    0.7139    0.5375    0.8650
Random-BERT   0.9228    0.8523     0.7985      0.5894    0.6893    0.5419    0.7925
RETAIN-BERT   0.8508    0.7866     0.6781      0.6812    0.6395    0.5634    0.7975
al-BERT       0.9383    0.8725     0.8450      0.7874    0.7743    0.5626    0.8675

  1. Bold values indicate the best result for each individual metric
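
As a minimal sketch (not taken from the paper), the code below shows how the first five metrics reported in Table 6 could be computed for a binary disease-prediction task using scikit-learn. The arrays y_true and y_prob are hypothetical placeholders for gold labels and model-predicted probabilities, not data from the study.

# Sketch: computing AUC-ROC, accuracy, precision, recall, and F1 for a
# binary disease-prediction task. y_true and y_prob are illustrative only.
import numpy as np
from sklearn.metrics import (
    roc_auc_score, accuracy_score, precision_score, recall_score, f1_score
)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                      # did the target disease appear?
y_prob = np.array([0.91, 0.15, 0.78, 0.40, 0.22, 0.05, 0.66, 0.31])  # predicted probabilities
y_pred = (y_prob >= 0.5).astype(int)                              # threshold at 0.5 for hard labels

print("AUC-ROC  :", roc_auc_score(y_true, y_prob))   # uses probabilities
print("Accuracy :", accuracy_score(y_true, y_pred))  # the rest use thresholded labels
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))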