
Table 2 Comparison of performance between pretrained models and fine-tuned models

From: Leveraging large language models to mimic domain expert labeling in unstructured text-based electronic healthcare records in non-English languages

Performance Metrics    Pretrained Model (%)    Fine-tuned Model (%)
Accuracy               78.54                   99.88
ROC-AUC                64.07                   97.29
Precision              50.96                   98.88
Recall                 51.46                   97.78
F1 Score               41.79                   97.22
MCC                    47.05                   97.24

  1. This table compares the performance of the pretrained GPT-3 model and the fine-tuned model in identifying RTI cases from unstructured clinical notes. Key performance indicators, including accuracy, ROC-AUC, precision, recall, F1 score, and MCC, are provided for both models. The fine-tuned model substantially outperforms the pretrained model across all metrics, demonstrating improved diagnostic accuracy and precision. It achieved an accuracy of 99.88% and an MCC of 97.24%, reflecting its enhanced capability to identify RTI cases after fine-tuning on the Turkish RTI-specific dataset. ROC-AUC: Receiver Operating Characteristic - Area Under the Curve, MCC: Matthews Correlation Coefficient
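The metrics in Table 2 can be computed from a model's binary predictions and predicted probabilities. The sketch below is illustrative only; it is not the authors' code, uses hypothetical toy arrays (y_true, y_pred, y_prob), and assumes scikit-learn is available. It reports every metric as a percentage to match the table, including MCC, which natively ranges from -1 to 1.

```python
# Illustrative sketch (not the authors' pipeline): compute the Table 2 metrics
# from gold labels, binary predictions, and predicted probabilities.
from sklearn.metrics import (
    accuracy_score,
    roc_auc_score,
    precision_score,
    recall_score,
    f1_score,
    matthews_corrcoef,
)

# Hypothetical toy data standing in for RTI vs. non-RTI labels and model output.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # gold labels from expert annotation
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]                    # model's binary predictions
y_prob = [0.9, 0.2, 0.8, 0.4, 0.1, 0.3, 0.7, 0.2]    # predicted probabilities for ROC-AUC

metrics = {
    "Accuracy":  accuracy_score(y_true, y_pred),
    "ROC-AUC":   roc_auc_score(y_true, y_prob),      # uses probabilities, not hard labels
    "Precision": precision_score(y_true, y_pred),
    "Recall":    recall_score(y_true, y_pred),
    "F1 Score":  f1_score(y_true, y_pred),
    "MCC":       matthews_corrcoef(y_true, y_pred),  # native range [-1, 1]
}

# Scale by 100 so the output is in percent, as reported in Table 2.
for name, value in metrics.items():
    print(f"{name}: {value * 100:.2f}%")
```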