Application of deep learning in wound size measurement using fingernail as the reference

Abstract

Objective

Most current wound size measurement devices or applications require manual wound tracing and reference markers. Chronic wound care usually relies on patients or caregivers who might have difficulties using these devices. Considering a more human-centered design, we propose an automatic wound size measurement system by combining three deep learning (DL) models and using fingernails as a reference.

Materials and methods

DL models (Mask R-CNN, Yolov5, U-net) were trained and tested using photographs of chronic wounds and fingernails. Nail width was obtained through using Mask R-CNN, Yolov5 to crop the wound from the background, and U-net to calculate the wound area. The system’s effectiveness and accuracy were evaluated with 248 images, and users’ experience analysis was conducted with 30 participants.

Results

Individual model training achieved a 0.939 Pearson correlation coefficient (PCC) for nail-width measurement. Yolov5 had the highest mean average precision (0.822) at an Intersection-over-Union threshold of 0.5. U-net achieved a mean pixel accuracy of 0.9523. The proposed system recognized 100% of fingernails and 97.76% of wounds in the test datasets. PCCs for wound-size measurement using measured and default nail widths were 0.875 and 0.759, respectively. Most inexperienced caregivers considered convenience the most important factor when using a size-measuring tool. Our proposed system achieved 90% satisfaction for convenience as well as in the overall evaluation.

Conclusion

The proposed system performs fast, easy-to-use wound size measurement with acceptable precision. Its novelty not only offers convenience and easy accessibility in homecare settings and for inexperienced caregivers, but also facilitates clinical treatment and documentation and supports telemedicine.

Highlights

  • Using fingernails forms the core of this novel wound measurement system.

  • The system was trained and tested with thousands of clinical wound images.

  • Combination of three deep learning models enables automatic measurement.

  • This system provides a fast and convenient measurement of the wound area.

  • Inexperienced caregivers can easily use the system in homecare settings.

Introduction

Management of chronic wounds such as pressure ulcers (PUs) or diabetic foot ulcers (DFUs) is challenging and burdensome to healthcare systems. The temporal change in wound size is a good predictor of the healing process and can help physicians with treatment planning [1]. If the wound is unresponsive, the applied therapy should be reevaluated and adjusted [2]. Wound-size monitoring can enhance the quality of patient care, standardize treatment assessment, and facilitate efficient communication between medical professionals.

The most accurate and reliable parameter is the surface area, which is widely used in clinical wound care. Wound areas have traditionally been estimated as rectangles, with length and width measured with a ruler, or as ellipses for better approximation. Alternatively, the wound contour is traced on a scaled transparent film and the squares within the wound area are counted. This method is time-consuming but more accurate than the others, and has thus become the gold standard for clinical and research purposes.

Recently, photographic wound assessment has gained popularity because it is cost-effective and requires only ubiquitous digital cameras and smartphones [1]. However, digital imaging requires manual tracing of the wound boundaries with graphics software and further calculation against a marker of known length. To simplify this process, some researchers use machine learning (ML) or deep learning (DL) for wound and reference-marker segmentation. The wound area is then automatically calculated by planimetry [3].

Although improved imaging processes and artificial intelligence (AI) have facilitated digital wound-area measurements, they still require a reference marker. Especially in homecare settings, a ruler or other reference marker is not always available and can be difficult to fix on the peri-wound region while taking a photograph.

In this paper, we introduce a novel framework for wound-size measurement that addresses the limitations of existing methods by using fingernails as a reference. Inspired by the estimation of burn-wound areas using the palm area, our approach fundamentally differs from previous machine learning-based techniques that rely on external reference markers such as rulers or stickers. By leveraging fingernails, a readily available natural feature, we eliminate the need for additional markers, making the system device-independent and more convenient for home use with a smartphone.

Our contributions are: (1) We propose the first wound-size measurement tool that does not require any extra reference markers or specific hardware, setting it apart from traditional methods that involve some level of manual calibration. (2) We integrate three deep learning models to fully automate the measurement process, reducing user effort and enhancing ease of use. (3) Experimental results demonstrate the potential of our system for facilitating remote wound treatment follow-up, enabling reliable wound monitoring at home. These contributions collectively establish the novelty of our approach and its advantages over existing wound measurement techniques.

Methodology

Proposed methods and materials

Our image wound-size measuring framework uses a fingernail as a reference object. It consists of three tasks: nail key-points localization (NKL), wound localization (WL), and wound segmentation (WS). These tasks are performed by Mask R-CNN, Yolov5, and U-net DL models, respectively. The Mask R-CNN is a robust model designed for segmentation and can be extended for key-point detection [4, 5]. In wound size measurement, key-point detection can be crucial for marking specific landmarks on the wound. Compared to simpler object detection models like YOLO or Faster R-CNN, Mask R-CNN provides finer granularity at the pixel level. YOLOv5 is a popular model for object detection and excels in localization tasks due to its speed and accuracy [6, 7]. In this wound size measurement, localizing the wound area within an image is a critical first step. U-Net is a widely used architecture for semantic segmentation tasks [8,9,10]. The major benefit of using U-Net is pixel-level segmentation, making it perfect for delineating the wound boundaries with high precision. U-Net performs well even with relatively small medical datasets, which is often the case in wound measurement studies. U-Net’s architecture, with its contracting and expanding paths, helps retain fine details of the wound’s boundary, facilitating accurate wound area measurement.

In the NKL task, the Mask R-CNN model localizes the leftmost and rightmost points of the nail in each image. The Euclidean distance between these points is computed in pixels, and the real width of the user’s nail (in centimeters) is entered into the system. By comparing the two, the area in square centimeters (cm2) of each pixel on the image is determined. The WL task utilizes the Yolov5 model to localize the wound on the input image. The wound, along with the surrounding skin or background, is cropped and padded to a size of 512 × 512 for the subsequent WS task, which uses the U-net model. It assigns each pixel on the image to either the wound or background class. The number of pixels belonging to the wound class is combined with the pixel area (in cm2) determined in the NKL task, yielding the wound size in cm2 as the final output. The overall system workflow is illustrated in Fig. 1 and comprises three phases: preprocessing, model training, and wound size measurement. The details of these phases are described in Additional file 1.
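The arithmetic linking the three tasks is straightforward. The following is a minimal Python sketch of the conversion from model outputs to a wound area in cm2; the function name and argument layout are ours for illustration, not the authors’ implementation:

```python
import numpy as np

def wound_area_cm2(nail_keypoints, nail_width_cm, wound_mask):
    """Convert a segmented wound mask into cm^2 using the nail as reference.

    nail_keypoints: ((x1, y1), (x2, y2)), the leftmost and rightmost nail
                    key-points from the NKL task, in pixel coordinates
    nail_width_cm:  the user's real nail width in cm, entered once
    wound_mask:     2-D boolean array from the WS task, True where wound
    """
    (x1, y1), (x2, y2) = nail_keypoints
    nail_width_px = np.hypot(x2 - x1, y2 - y1)   # Euclidean distance in pixels
    cm_per_px = nail_width_cm / nail_width_px    # linear scale of one pixel
    pixel_area_cm2 = cm_per_px ** 2              # area covered by one pixel
    return wound_mask.sum() * pixel_area_cm2     # wound pixels x pixel area
```

Note that this conversion assumes the nail and the wound lie roughly in the same plane relative to the camera, which is why the fingernail is placed on the wound region during photography.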

Clinical wound images of PUs and DFUs were retrieved from the digital database of the Plastic Surgery Department at the Far Eastern Memorial Hospital (FEMH), and the nail images were derived from volunteers. The study was approved by the Research Ethics Review Committee of FEMH (protocol code 110295-E, date of approval: 2020/11/25). The wounds were annotated by a plastic surgeon.

Experiments were conducted on four datasets: “Nail with key-point”, “Wound with the bounding box”, “Wound with mask”, and “Wound with nail” (Table 1). The “Nail with key-point” dataset consists of 135 images with 1 to 10 fingernails in each image. Each nail is annotated with the leftmost and rightmost key-points. This dataset was used to train the Mask R-CNN DL model for the NKL task. The “Wound with the bounding box” dataset contains 732 PU and DFU wound images with annotated bounding boxes. The “Wound with mask” dataset includes 721 wound images with labeled masks for the WS task. The “Wound with nail” dataset was collected for the final evaluation of the wound-size measuring task. It consists of 88 PUs and 160 DFUs (248 images in total) with wounds, fingernails, and a reference ruler. The ImageJ software (NIH) was used to calculate the wound area and nail width, providing a gold standard for evaluating the wound size measurement outcomes. In all, the first three datasets were used for training and evaluating individual DL models for each task, while the final dataset was used to evaluate the combined performance of all tasks in the proposed system.

Fig. 1 Workflow of the present experiments, including the data preprocessing, model training, and wound size measurement phases

Table 1 Datasets deployed in the present study at two levels: (1) individual models and (2) the proposed framework

User experience analysis and evaluation

To evaluate the efficiency and effectiveness of the proposed system from the users’ perspective, we built an internet platform with a trial version of our system (http://140.138.148.125:8345/ - demo username: doctor, demo password: doctor) (Fig. 2). Clinical users were enrolled to photograph wounds with their cell phones through the proposed web-based system and operate the nail-referenced wound size measurement.

After using the system for one week, the participants completed a survey about wound care and their satisfaction with the system (see Additional file 2). The survey was rated on a 5-point Likert scale (from very satisfied: 5 to very unsatisfied: 1) in terms of convenience, layout, speed, accuracy, and overall rating of the system.

Fig. 2 Demonstration of the web-based wound size measurement system showing the wound size measured by our system

Results

Experiment and results of the individual models

The nail keypoints task

This task was performed using the Mask R-CNN model from the PyTorch library without changing the hyperparameters. Instead, we tuned the model by adjusting the confidence-score threshold, which indicates the likelihood that the predicted bounding box contains the object of interest. The confidence score ranges from 0 (no object) to 1 (fully contained object). As the threshold increases, the bounding boxes become tighter, affecting the mAP, PCC, and mean Euclidean distance (mED) (Table 2).

Table 2 PCCs and mean Euclidean distances obtained by Mask R-CNN with different confidence thresholds

Setting the threshold to 0.85 reduces the mAP and slightly decreases the PCC, while the mED increases. To balance these metrics, we chose a confidence threshold of 0.80. The correlation between the predicted nail width and the ground truth is shown in Fig. 3.
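In practice, thresholding amounts to discarding detections whose confidence score falls below the chosen value. A minimal sketch of this filtering step is shown below, using torchvision’s pretrained Keypoint R-CNN (a Mask R-CNN-family detector) as an illustrative stand-in for the fine-tuned two-key-point nail model:

```python
import torch
from torchvision.models.detection import keypointrcnn_resnet50_fpn

# Illustrative stand-in: a pretrained Keypoint R-CNN rather than the
# paper's fine-tuned nail key-point model.
model = keypointrcnn_resnet50_fpn(weights="DEFAULT").eval()

CONF_THRESHOLD = 0.80  # the balance point chosen from Table 2

@torch.no_grad()
def detect_keypoints(image):
    """Return key-points of detections scoring above the threshold.

    image: a (3, H, W) float tensor with values in [0, 1].
    """
    pred = model([image])[0]                  # single-image batch
    keep = pred["scores"] >= CONF_THRESHOLD   # drop low-confidence boxes
    return pred["keypoints"][keep]            # (N, K, 3): x, y, visibility
```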

Fig. 3 Correlation plot of predicted nail width versus ground truth (Mask R-CNN DL model with a confidence threshold of 0.8)

The wound localization task

In the wound localization task, we used YOLOv5 with default hyperparameters and optimized performance on the testing set by adjusting the confidence threshold from 0.1 to 0.9 in increments of 0.1 (Table 3).

Table 3 Mean average precisions obtained by Yolov5 with different confidence thresholds

As in the nail key-point task, the confidence threshold in wound localization indicates the likelihood that the predicted bounding box contains the object. The highest mAP@0.50, mAP@0.75, and mAP@0.5:0.95 scores (0.822, 0.806, and 0.725, respectively) were achieved at a threshold of 0.3, which also produced the best PR curve at an IoU threshold of 0.5. The results at the optimal threshold are shown in Fig. 4.
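For reference, the mAP@t notation counts a predicted box as correct when its Intersection-over-Union with a ground-truth box reaches the threshold t. A minimal sketch of the IoU computation, using the usual (x1, y1, x2, y2) corner convention:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes.

    A prediction counts as a true positive for mAP@0.5 when its IoU with
    a ground-truth box is at least 0.5 (0.75 for mAP@0.75, and so on).
    """
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```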

Fig. 4 PR curves of Yolov5 with different confidence thresholds. The curve with the optimized mAP@0.5 is highlighted in bold; the remaining curves with lower values are blurred

The wound segmentation task

Model fine-tuning in this task involves two choices: the loss function and the optimization function. The U-net segmentation model was trained with different pairs of loss and optimization functions (Table 4). The mDSC, mPA, and mPP were maximized (0.9250, 0.9523, and 0.9068, respectively) when the Dice loss function was paired with the Adam optimizer, whereas the mPS was maximized (0.9455) when the binary cross-entropy loss function was paired with Adam. The predicted results of the segmentation task with the optimal parameters are displayed in Fig. 5.
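For readers who want to reproduce the best-performing pairing, the sketch below shows one common formulation of the soft Dice loss combined with Adam in PyTorch; the exact loss variant and learning rate used in the paper are not specified, so both are assumptions:

```python
import torch
import torch.nn as nn

class DiceLoss(nn.Module):
    """Soft Dice loss for binary (wound vs. background) segmentation."""

    def __init__(self, eps=1e-6):
        super().__init__()
        self.eps = eps  # avoids division by zero on empty masks

    def forward(self, logits, targets):
        probs = torch.sigmoid(logits)                 # (N, 1, H, W) in [0, 1]
        inter = (probs * targets).sum(dim=(1, 2, 3))
        total = probs.sum(dim=(1, 2, 3)) + targets.sum(dim=(1, 2, 3))
        dice = (2 * inter + self.eps) / (total + self.eps)
        return 1 - dice.mean()                        # minimize 1 - Dice

# Pairing that maximized mDSC/mPA/mPP in Table 4 (Dice loss + Adam);
# `unet` stands for any U-net emitting one logit per pixel, and the
# learning rate is an assumed value.
# criterion = DiceLoss()
# optimizer = torch.optim.Adam(unet.parameters(), lr=1e-4)
```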

Table 4 Experimental results of the U-net model with different pairs of loss and optimization functions

We evaluated the model at the end of each phase on the prepared dataset during the training process. This approach allowed us to closely monitor the model’s performance and select the best phase based on the highest evaluation metrics for each task, effectively identifying the optimal model parameters without cross-validation. Our testing set consists entirely of new data from real-world settings, which serves as a more reliable benchmark for evaluating model performance. Since the testing set is independent of the training data, it allows a clear assessment of the model’s generalization capability. This strategy focuses on ensuring that the model performs well on unseen data, rather than on repeatedly partitioning the existing data through cross-validation.

Fig. 5 Demonstrations of the individual models on the testing set for different tasks (A: the key-points predicted by Mask R-CNN and the corresponding nail width; B: the wound cropped by Yolov5; C: the wound segmented by U-net)

To assess the real-time performance of our proposed system, we conducted experiments on a server equipped with an Intel® Core™ i9-9900K CPU @ 3.60 GHz, 32 GB RAM at 2400 MHz, and an Nvidia GeForce RTX 2080 Ti GPU with 11 GB of memory. The tests involved 248 images captured with various standard smartphones, with an average file size of 6.35 MB (± 3.13 MB). The images had an average height of 2883.70 pixels (± 846.53) and width of 3597.70 pixels (± 1012.43), showing a considerable range of image dimensions. The average processing time from image upload to result generation was 12.11 s (± 4.20 s) per image. The system facilitates quick wound assessment and treatment, enabling doctors and patients to obtain timely feedback. An average processing time of approximately 12.11 s per image is manageable for both parties, as doctors are not always available to log in and monitor patient status continuously. In addition, the system includes an alert function that notifies doctors of severe cases by displaying a notification on the home screen, highlighting patients requiring urgent assistance.

The proposed framework

The individually trained models were combined into the proposed framework, which implements the NKL task to calculate the size of a single pixel (in cm2) and the WS task to infer the number of pixels covered by the wound. The predicted wound sizes were aggregated, with the input nail width determined from each individual image. For cases of missing nail-width input in a future application, we also set a default nail width of 1.09 cm, calculated by averaging the nail widths of all participants.
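The fallback behaves like a simple default parameter; a one-line sketch (names ours, for illustration only):

```python
DEFAULT_NAIL_WIDTH_CM = 1.09  # average nail width across all participants

def effective_nail_width(measured_cm=None):
    """Prefer the user's measured nail width; otherwise fall back to the
    population default so the framework can still estimate the area."""
    return measured_cm if measured_cm is not None else DEFAULT_NAIL_WIDTH_CM
```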

The Mask R-CNN model localized the fingernails in all 248 images of the “Wound with nail” testing dataset. The YoloV5 model localized 156 of the 160 DFU wounds and 86 of the 88 PU wounds, i.e., 97.76% of all wounds. Figure A-1 in Additional file 3 shows examples of undetected images. The proposed framework yielded better PCC and RMSE results on individual images when the nail size was manually input than when it was set to the default value. Moreover, among DFU wound images, both types of nail width yielded excellent PCC and RMSE results (0.979 and 2.551, respectively, for the manual nail inputs; 0.875 and 8.081, respectively, for the default nail inputs; see Table 5). Figure A-2 in Additional file 3 correlates the predicted nail sizes with the ground truth for different ways of splitting the target testing dataset.

Table 5 The results of the deployed datasets in this study

User evaluation of the proposed system

From January 1st to March 31st, 2023, thirty participants were included in the user experience evaluation: 7 doctors, 8 nurses, 7 patients, 6 family members, and 2 caregivers. Among them, 7 had no previous experience in wound care while the other 23 had varied experience (range 1–28 years). The demographics and profiles of the participants are listed in Table A-5 in Additional file 3.

Before using our wound measurement system, the participants answered some background questions. 90% (27/30) of the participants thought wound size measurement is essential in wound care, while only 40% (12/30) had real experience of measuring wound size with certain tools. Manual measurement with a ruler was the most common method (11/12, 91.7%), followed by visual estimation (8/12, 66.7%) and other AI tools (2/12, 16.7%). Participants were asked to choose the single factor of greatest concern in a wound size measurement tool: 50% chose “accuracy”, 46.7% chose “convenience”, and only 3.3% chose “cost”. However, the answers differed between experienced and inexperienced wound carers: most inexperienced wound carers (71.4%) chose “convenience”, whereas 56.5% of experienced ones chose “accuracy” (Fig. 6-A). After using our proposed system, 90% of the participants were very satisfied or satisfied with the system’s convenience, and 80% were very satisfied or satisfied with its accuracy. The overall satisfaction rate was 90%, and 86.7% of the participants would choose our system as their wound measurement tool in the future. When comparing satisfaction levels among different users, the nurses tended to give higher scores in nearly every aspect, whereas the patients and caregivers gave relatively low satisfaction scores, especially for the layout; however, these differences were not statistically significant (Fig. 6-B).

Fig. 6 The user evaluation report (5: very satisfied, 4: satisfied, 3: neutral, 2: unsatisfied, 1: very unsatisfied)

Case report

A 70-year-old diabetic male patient had peripheral artery disease with the left 1st to 5th toes amputated. He presented with a left foot plantar DFU of 3 months’ duration and underwent artificial dermis graft (Integra LifeSciences, US) implantation to promote wound healing. Postoperatively, the wound condition was recorded by a caregiver using our proposed system. Figure 7 shows the serial wound images with the caregiver’s fingernail as the reference. The calculated wound sizes decreased over time and were consistent with clinical observation regardless of the photo-shooting angles or distances. The wound was confirmed to be healed at postoperative week 12, and the system did not register any wound in that image.

Fig. 7 Process demonstration and outputs of the proposed system in a 70-year-old patient with a diabetic foot ulcer treated by artificial dermis grafting

Discussion

Recently, wound-size measurements have been automated using commercial devices with an embedded digital camera and image-processing software, or using smartphone applications with or without add-on sensors. Commercial devices such as Visitrak (Smith & Nephew, London, UK), the Silhouette Mobile system (Aranz Medical, Christchurch, New Zealand), and InSight (EKare, USA) require manual wound-edge tracing and are expensive and not easily accessible. Meanwhile, smartphone applications can extract measurements through various techniques, including depth from focus, inertial sensors, and an original pinch/zoom method [11, 12]. However, these methods are less accurate than measurements based on a reference object.

Using a ruler for wound measurement is not always practical or available in certain situations, such as when photographing a sacral pressure sore in a lateral position or during single-person operations. Disposable rulers are often used due to disinfection concerns, but they are not suitable for long-term wound care. The proposed method provides a simple and effective way to measure wound size, making it suitable for clinical applications and home care services. Fingernails are easily measured body parts that can be placed on the wound during photography. In our user experience analysis, convenience was prioritized by inexperienced caregivers, and our proposed system fulfilled their needs, achieving high satisfaction in the convenience category. The nail width can be measured once and entered into the system for subsequent image processing. In the absence of a nail-width input, a default value can be used because the variation in fingernail width among the population is small (standard deviation = 1 mm in a Korean study [13]).

DL has diverse applications in healthcare, including medical data analysis, signal processing, and image analysis [14,15,16,17,18,19,20,21]. Automatic wound measurement involves wound segmentation using image processing, ML, or DL techniques, along with a scaled reference for calculating the wound area [22,23,24,25]. For instance, Kompalliy et al. designed a web-based tool to extract the ulcer and a ruler through a combination of image-processing algorithms and manual operation [26]. Similar to our method, Carrión et al. used two DL techniques (YOLO and U-net) to measure wound size in a mouse wound model by detecting a ring-shaped splint surrounding the wound as a reference object of known size [27]. We built on the advantages of these previous works and utilized the most convenient reference marker together with the key-point detection method; this became the core and novel concept of this study.

Our proposed framework trains three DL models individually on different datasets, simplifying the training process and allowing for easy tuning. Each model is specialized for a specific task: Mask R-CNN excels in key-point detection, YoloV5 performs well in object detection with a fast response time, and U-net is a reliable segmentation model. To optimize segmentation, we crop the wound from the original image before feeding it into the segmentation model. As our system is intended for homecare usage, the wound photographs are presumably taken by informal caregivers rather than professional wound recorders, so wound photographs of diverse quality against complex backgrounds are expected. Li et al. proposed a composite model for wound detection and segmentation [28]. Other studies applied image-preprocessing techniques, such as chrominance channels of the HSV, YCbCr, and normalized RGB color spaces, color correction, and thresholding, to improve system performance [29].

The user evaluation results demonstrated high satisfaction and acceptance of this newly developed system; nevertheless, the users’ feedback points to future improvements. The nurses gave higher scores in every aspect, which may reflect their high tolerance of clinical challenges. Doctors usually had busy daily schedules and were therefore less satisfied with the system speed. Some doctors who specialized in wound care cared more about accuracy and suggested improving the system performance for clinical documentation or research purposes. Surprisingly, patients and caregivers had the lowest satisfaction, especially with the layout. Most patients in this study were elderly, and some caregivers were foreign workers (e.g., Indonesian, Filipino) who could not operate the system well, whether because of language gaps or unfamiliarity with new technology. These findings indicate that socio-environmental factors affect medical AI system performance and the patient/user experience; a similar result was reported by the Google Health group when deploying deep learning detection for diabetic retinopathy in Thailand [30].

In comparison with previously published DL models, our proposed method is novel; to the best of our knowledge, no existing study employs an approach exactly like ours. While some prior studies have applied deep learning to wound measurement, our method introduces unique aspects in terms of data sources and workflow that set it apart from existing work. Compared with traditional wound measurement tools such as rulers or wound-tracing sheets, our proposed method is currently less precise and accurate, primarily because it relies on analyzing images rather than directly measuring the physical wound. Other factors limiting the system include variations in image quality, lighting conditions, and camera angles, all of which can affect the accuracy of the measurements.

Each DL model in the framework has its own limitations as well. Nail key-point detection may be affected by finger placement errors, such as finger inclination or capturing the wrong fingers. YoloV5 may slightly over-crop detected wounds, leading to incomplete segmentation by U-net. Consequently, the derived area in cm2 may slightly differ from the actual area. Another limitation lies in the dataset sizes: 135 images for nail key-point localization, 732 for wound localization, and 721 for wound segmentation. These are relatively small for training DL models, and no data augmentation techniques were used. These factors increase the risk of overfitting and may limit the models’ ability to generalize to diverse wound types and imaging conditions in real-world scenarios. To improve generalization and robustness, future work should expand the datasets and incorporate data augmentation. Applying transfer learning by leveraging pre-trained models can also be beneficial, as it reduces the need for large datasets, shortens training time, and helps improve accuracy while mitigating overfitting, even with limited medical data. Moreover, educational workshops or tutorial videos could standardize users’ operations and correct errors.

Despite these limitations, our approach offers significant advantages, such as ease of use through an online system. It allows doctors to monitor wounds remotely, reducing the need for patients to visit the hospital frequently. Additionally, it minimizes the requirement for constant supervision by healthcare professionals and can be used independently by patients, with medical intervention needed only in severe cases. We believe our proposed system can find practical use, particularly in homecare settings, with the potential for further improvements in performance. By combining the wound size measurement with other wound information (e.g., tissue classification, discharge amount, surrounding skin condition), obtained either from AI detection or manual input, we can design a clinical decision support system that gives treatment advice or referral suggestions. With a cloud platform to store the uploaded images, doctors can monitor patients’ wound conditions through these objective parameters. We firmly believe this system will facilitate telemedical wound care, lower medical expenses, and even reduce carbon emissions.

Conclusion

In conclusion, we proposed a novel and convenient method of wound-size measurement using fingernails in a framework composed of three DL models. The framework will help clinicians and caregivers to monitor their patients’ wound conditions with simple equipment.

Data availability

The participants of this study did not give written consent for their data to be shared publicly; therefore, owing to the sensitive nature of the research, the supporting data are not available.

Abbreviations

AI: Artificial intelligence

CNN: Convolutional neural network

DFUs: Diabetic foot ulcers

DL: Deep learning

FEMH: Far Eastern Memorial Hospital

NKL: Nail key-points localization

mAP: Mean average precision

mDSC: Mean Dice similarity coefficient

mED: Mean Euclidean distance

mPA: Mean pixel accuracy

mPP: Mean pixel precision

PR: Precision-Recall

RMSE: Root mean squared error

TBSA: Total body surface area

WS: Wound segmentation

WL: Wound localization

YOLO: You Only Look Once

References

  1. Berle JO, et al. Actigraphic registration of motor activity reveals a more structured behavioural pattern in schizophrenia than in major depression. BMC Res Notes. 2010;3:1–7.

  2. Adadi A, Berrada M. Explainable AI for healthcare: from black box to interpretable models. In: Embedded systems and artificial intelligence: proceedings of ESAI 2019, Fez, Morocco. Springer; 2020.

  3. Chang CW, et al. Deep learning–assisted burn wound diagnosis: diagnostic model development study. JMIR Med Inform. 2021;9(12):e22798.

  4. Jiao Z, et al. Deep learning for automatic detection of cephalometric landmarks on lateral cephalometric radiographs using the Mask Region-based Convolutional Neural Network: a pilot study. Oral Surg Oral Med Oral Pathol Oral Radiol. 2024;137(5):554–62.

  5. Lang Y, et al. Localization of craniomaxillofacial landmarks on CBCT images using 3D mask R-CNN and local dependency learning. IEEE Trans Med Imaging. 2022;41(10):2856–66.

  6. Ragab MG, Abdulkadir SJ, Muneer A, Alqushaibi A, Sumiea EH, Qureshi R, et al. A comprehensive systematic review of YOLO for medical object detection (2018 to 2023). IEEE Access. 2024;12:57815–36.

  7. Yeerjiang A, et al. YOLOv1 to YOLOv10: a comprehensive review of YOLO variants and their application in medical image detection. J Artif Intell Pract. 2024;7(3):112–22.

  8. Azad R, Aghdam EK, Rauland A, Jia Y, Avval AH, Bozorgpour A, et al. Medical image segmentation review: the success of U-net. IEEE Trans Pattern Anal Mach Intell. 2024;46(12):10076–95.

  9. Shao J, et al. Application of U-Net and optimized clustering in medical image segmentation: a review. CMES-Comput Model Eng Sci. 2023;136(3).

  10. Siddique N, Paheding S, Elkin CP, Devabhaktuni V. U-Net and its variants for medical image segmentation: a review of theory and applications. IEEE Access. 2021;9:82031–57.

  11. Lucas Y, et al. Wound size imaging: ready for smart assessment and monitoring. Adv Wound Care. 2021;10(11):641–61.

  12. Wang SC, et al. Point-of-care wound visioning technology: reproducibility and accuracy of a wound measurement app. PLoS ONE. 2017;12(8):e0183139.

  13. Jung JW, et al. Fingernail configuration. Arch Plast Surg. 2015;42(06):753–60.

  14. Li A-HA, et al. A deep learning approach to lung nodule growth prediction using CT image combined with demographic and image features. In: Proceedings of the 2023 7th International Conference on Medical and Health Informatics. 2023.

  15. Miotto R, et al. Deep learning for healthcare: review, opportunities and challenges. Brief Bioinform. 2018;19(6):1236–46.

  16. Mittal S, Hasija Y. Applications of deep learning in healthcare and biomedicine. In: Deep learning techniques for biomedical and health informatics. 2020. p. 57–77.

  17. Nguyen D-K, et al. Deep stacked generalization ensemble learning models in early diagnosis of depression illness from wearable devices data. In: Proceedings of the 5th International Conference on Medical and Health Informatics. 2021.

  18. Nguyen D-K, et al. Decision support system for the differentiation of schizophrenia and mood disorders using multiple deep learning models on wearable devices data. Health Inform J. 2022;28(4):14604582221137537.

  19. Nguyen D-K, Lan C-H, Chan C-L. Deep ensemble learning approaches in healthcare to enhance the prediction and diagnosing performance: the workflows, deployments, and surveys on the statistical, image-based, and sequential datasets. Int J Environ Res Public Health. 2021;18(20):10811.

  20. Nguyen T-T-D, Nguyen D-K, Ou Y-Y. Addressing data imbalance problems in ligand-binding site prediction using a variational autoencoder and a convolutional neural network. Brief Bioinform. 2021;22(6):bbab277.

  21. Phan D-V, Chan C-L, Nguyen D-K. Applying deep learning for prediction sleep quality from wearable data. In: Proceedings of the 4th International Conference on Medical and Health Informatics. 2020.

  22. Pasero E, Castagneri C. Application of an automatic ulcer segmentation algorithm. In: 2017 IEEE 3rd International Forum on Research and Technologies for Society and Industry (RTSI). IEEE; 2017.

  23. Papazoglou ES, et al. Image analysis of chronic wounds for determining the surface area. Wound Repair Regen. 2010;18(4):349–58.

  24. Zahia S, et al. Pressure injury image analysis with machine learning techniques: a systematic review on previous and possible future methods. Artif Intell Med. 2020;102:101742.

  25. Rao KN, et al. Sobel edge detection method to identify and quantify the risk factors for diabetic foot ulcers. Int J Comput Sci Inform Technol. 2013;5(1):39.

  26. Kompalliy S, Bakarajuy V, Gogia SB. Cloud-driven application for measurement of wound size. In: MEDINFO 2019: Health and Wellbeing e-Networks for All. IOS; 2019. p. 1639–40.

  27. Carrión H, et al. Automatic wound detection and size estimation using deep learning algorithms. PLoS Comput Biol. 2022;18(3):e1009852.

  28. Li F, et al. A composite model of wound segmentation based on traditional methods and deep neural networks. Comput Intell Neurosci. 2018;2018(1):4149103.

  29. Ferreira F, et al. A systematic investigation of models for color image processing in wound size estimation. Computers. 2021;10(4):43.

  30. Beede E, et al. A human-centered evaluation of a deep learning system deployed in clinics for the detection of diabetic retinopathy. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 2020.

Acknowledgements

We thank Enago for professional language editing services.

Funding

This work was supported by Far Eastern Memorial Hospital and Yuan Ze University (Grant number: FEMH-YZU-2023-015).

Author information

Contributions

DHC and DKN wrote the main manuscript text. DHC collected clinical pictures and conducted the image labeling. DKN and TNN conducted the AI model training and system development. DHC and DKN prepared Figs. 1, 2, 3, 4, 5 and 6. CLC performed the data analysis and interpretation and revised the manuscript. All authors reviewed the manuscript.

Corresponding author

Correspondence to Chien-Lung Chan.

Ethics declarations

Ethics approval and consent to participate

The study was approved by the Research Ethics Review Committee of Far Eastern Memorial Hospital (protocol code 110295-E, date of approval: 2020/11/25). Participants provided written informed consent before participating in the study and using the proposed system.

Consent for publication

Not Applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Supplementary Material 3

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

Cite this article

Chang, DH., Nguyen, DK., Nguyen, TN. et al. Application of deep learning in wound size measurement using fingernail as the reference. BMC Med Inform Decis Mak 24, 390 (2024). https://doi.org/10.1186/s12911-024-02778-8
