Comparison between machine learning and humans to identify and classify hypospadias.
Nicolas Fernandez, M.D., PhD1, Armando Lorenzo, M.D.2, Joao Luiz Pippi Salle, M.D., PhD3, Luis Braga, M.D.4, Jaime Perez, M.D.1, Sam Raisbeck, BS2, Yui Lo, BS2, Clyde Matava, M.D.2.
1Hospital Universitario San Ignacio, Pontificia Universidad Javeriana, Bogota, Colombia, 2The Hospital for Sick Children (SickKids), Toronto, ON, Canada, 3Division of Urology, Department of Surgery, Sidra Medical and Research Center, Doha, Qatar, 4McMaster University, Hamilton, ON, Canada.
Introduction: Hypospadias is the most common congenital anomaly affecting the penis. Anatomical variables such as the location of the meatus, the quality of the urethral plate, glans size, and ventral curvature have been identified as predictors of post-operative surgical outcomes. Although classification systems are becoming more objective, significant subjectivity and variability between evaluators remain. We propose the use of machine learning/image recognition to increase the objectivity of hypospadias recognition and classification.

Methods: A total of 1169 anonymized images from the authors' image database (837 distal and 332 proximal) were used. All pictures followed the same format, showing the ventral aspect of the penis including the glans, shaft, and scrotum, with a clear view of the meatus (Figure 1). These images were classified by surgeons as distal or proximal and presented to the computer for training using TensorFlow® (Figure 2). Training data were output to TensorBoard to assess the loss function. The model was then run on a set of 29 arbitrarily selected "test" images. The same set of images was distributed among expert clinicians in pediatric urology. Inter- and intra-rater analyses were performed using Fleiss' kappa statistics.

Results: After the training phase, image recognition of the test images showed that training with 627 images (440 distal and 187 proximal) yielded a detection accuracy of 60%. When training was increased to 1169 images, accuracy rose to 90%. Overall inter-rater agreement among expert pediatric urologists was κ = 0.86, and intra-rater agreement was 0.74.

Conclusion: After training, the image recognition model achieved a detection accuracy of 90%, approaching the almost-perfect inter-rater agreement between experts.
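The inter- and intra-rater figures above rely on Fleiss' kappa, which compares observed agreement among multiple raters against the agreement expected by chance. A minimal self-contained sketch of the statistic is below; the rating matrix in the usage example is hypothetical, not the study's actual data.

```python
def fleiss_kappa(ratings):
    """Compute Fleiss' kappa.

    ratings: one row per subject (image); each row holds the count of
    raters who assigned the subject to each category (e.g. [distal,
    proximal]). Every row must sum to the same number of raters.
    """
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    n_categories = len(ratings[0])

    # Proportion of all ratings falling in each category
    p_j = [sum(row[j] for row in ratings) / (n_subjects * n_raters)
           for j in range(n_categories)]

    # Per-subject observed agreement
    p_i = [(sum(c * c for c in row) - n_raters)
           / (n_raters * (n_raters - 1)) for row in ratings]

    p_bar = sum(p_i) / n_subjects   # mean observed agreement
    p_e = sum(p * p for p in p_j)   # chance agreement
    return (p_bar - p_e) / (1 - p_e)


# Hypothetical example: 5 raters classifying 5 images as distal/proximal.
example = [[5, 0], [4, 1], [0, 5], [5, 0], [2, 3]]
kappa = fleiss_kappa(example)
```

A kappa of 1.0 indicates perfect agreement, 0 indicates chance-level agreement, and values above roughly 0.8 (as in the study's κ = 0.86) are conventionally read as almost-perfect agreement.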
Future applications of this technology may be used as a predictive tool for surgical outcomes and to identify image properties to better define difficult variables such as the quality of the urethral plate.