Table 4 Description of the performance evaluation metrics

From: Classifying and fact-checking health-related information about COVID-19 on Twitter/X using machine learning and deep learning models

Metric

Formulation

Definition

Accuracy

\(\:\frac{TP+TN}{TP+TN+FP+FN}\)

Accuracy is the proportion of correct predictions (both positive and negative) among all predictions made by the model.

Precision

\(\:\frac{TP}{TP+FP}\)

Precision is the proportion of predicted positive cases that are actually positive.

Recall

\(\:\frac{TP}{TP+FN}\)

Recall is the number of correctly predicted positive outcomes divided by the number of all actual positive samples (including positives the model missed).

F1-score

\(\:2\times\frac{\text{Precision}\times\text{Recall}}{\text{Precision}+\text{Recall}}=\frac{TP}{TP+\frac{1}{2}\left(FP+FN\right)}\)

The F1-score is the harmonic mean of precision and recall.

  1. TN = True Negatives
  2. TP = True Positives
  3. FP = False Positives
  4. FN = False Negatives
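The four metrics in the table can be computed directly from the confusion-matrix counts. A minimal sketch in Python (the counts in the usage example are hypothetical, chosen only for illustration):

```python
def classification_metrics(tp, tn, fp, fn):
    """Return accuracy, precision, recall, and F1-score as a dict,
    given true/false positive/negative counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # Harmonic mean of precision and recall;
    # equivalently tp / (tp + 0.5 * (fp + fn)).
    f1 = 2 * (precision * recall) / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical counts for a 100-sample evaluation:
metrics = classification_metrics(tp=40, tn=45, fp=5, fn=10)
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```

Both forms of the F1 formula in the table give the same value, since substituting the precision and recall fractions into the harmonic mean and simplifying yields \(TP/(TP+\frac{1}{2}(FP+FN))\).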