Spinal Health Diagnostics Gets Deep Learning Automation

An advanced deep-learning model that automates X-ray analysis for faster and more accurate assessments could transform spinal health diagnostics. Capable of handling even complex cases, the model promises to help doctors save time, reduce diagnostic errors, and improve treatment plans for patients with spinal conditions like scoliosis and kyphosis.

“Although spinopelvic alignment analysis offers promising insights, current research relies on relatively small patient cohorts. Automated annotation could enable the analysis of larger cohorts, leading to a better understanding and clearer identification of existing trends. AI-based approaches can complement human raters for better consistency in assessments,” said study co-author Moritz Jokeit, a PhD candidate at the Institute for Biomechanics at ETH Zurich.

Reimagining spinal diagnostics

Scoliosis is the most common spinal condition, diagnosed in about 7 million people in the US and roughly 3% of the population globally. Scoliosis and other spinal misalignment issues often cause pain, limit mobility, and lead to health complications such as respiratory problems, reducing a person’s quality of life.

Accurate diagnostics and monitoring are key to treating patients effectively. However, traditional methods, such as manual X-ray measurements and visual assessments that rely on clinical expertise, can be labor-intensive, slow, and inconsistent.

Existing AI models struggle with complex spinal misalignment cases in patients with atypical anatomy, which can result from congenital conditions, surgery, degeneration, or trauma.

Mapping the spine with AI

The study, published in Spine Deformity, addresses these limitations with a modified U-Net architecture that segments the radiographs and identifies key spinal structures. The encoder-decoder design combines fine spatial detail with contextual information about anatomical relationships, which the model learns through training on expert-annotated datasets.
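To make the segmentation approach more concrete, the sketch below builds a generic U-Net-style encoder-decoder in TensorFlow/Keras. The input size, layer widths, and number of anatomical classes are illustrative assumptions, not the specific modifications described in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions, each followed by batch norm and ReLU
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    return x

def build_unet(input_shape=(512, 512, 1), num_classes=5):
    """Minimal U-Net-style segmentation network (illustrative only)."""
    inputs = layers.Input(shape=input_shape)

    # Encoder: downsample while storing skip connections
    skips, x = [], inputs
    for filters in (32, 64, 128):
        x = conv_block(x, filters)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)

    # Bottleneck
    x = conv_block(x, 256)

    # Decoder: upsample and fuse encoder features via skip connections
    for filters, skip in zip((128, 64, 32), reversed(skips)):
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, filters)

    # Per-pixel class probabilities for the segmented anatomical structures
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(x)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```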

The model analyzes radiographs (X-rays) taken from front to back and from the side, giving a comprehensive, multiview picture of a patient’s spinal curvature and alignment. It locates the anatomical features key to predicting spinal alignment, such as the vertebrae, pelvis, hip joints, and sacral region, and delineates their boundaries and shapes.
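Once the relevant structures are segmented, spinopelvic parameters such as sacral slope and pelvic tilt can be derived geometrically from landmark coordinates. The sketch below uses the standard clinical definitions with hypothetical pixel coordinates; the landmark values and helper functions are assumptions for illustration, not the study’s pipeline.

```python
import numpy as np

def angle_with_horizontal(p1, p2):
    # Angle (degrees) between the line p1 -> p2 and the horizontal axis
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return abs(np.degrees(np.arctan2(dy, dx)))

def sacral_slope(s1_left, s1_right):
    # Sacral slope: inclination of the S1 endplate relative to the horizontal
    return angle_with_horizontal(s1_left, s1_right)

def pelvic_tilt(femoral_head_center, s1_midpoint):
    # Pelvic tilt: angle between the vertical and the line joining the
    # femoral head center to the S1 endplate midpoint (image y grows downward)
    dx = abs(s1_midpoint[0] - femoral_head_center[0])
    dy = abs(s1_midpoint[1] - femoral_head_center[1])
    return np.degrees(np.arctan2(dx, dy))

# Hypothetical landmark coordinates (pixels) taken from segmentation masks
s1_left, s1_right = np.array([210.0, 480.0]), np.array([290.0, 455.0])
hip_center = np.array([245.0, 560.0])

print(f"Sacral slope: {sacral_slope(s1_left, s1_right):.1f} deg")
print(f"Pelvic tilt: {pelvic_tilt(hip_center, (s1_left + s1_right) / 2):.1f} deg")
```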

Figure 1. An overview of the automatic pipeline for spinal predictions

The researchers trained the model using a dataset of 555 radiographs manually annotated by medical experts, with 455 images used for training and 100 for testing. During inference, model initialization took about four seconds, while image prediction took less than one second.  
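As a simple illustration of the data handling, the following sketch splits 555 annotated radiographs into 455 training and 100 test images by index; the random seed and index-based bookkeeping are assumptions, not the authors’ exact protocol.

```python
import numpy as np

# Hypothetical split of 555 annotated radiographs into 455 training / 100 test
rng = np.random.default_rng(seed=42)
indices = rng.permutation(555)
train_idx, test_idx = indices[:455], indices[455:]
print(len(train_idx), "training images,", len(test_idx), "test images")
```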

A single NVIDIA RTX A6000 GPU, running the cuDNN-accelerated TensorFlow deep learning framework, powered the processing of the high-resolution images and accelerated model training. The team received the GPU as awardees of the NVIDIA Academic Grant Program, which aims to advance academic research by providing researchers with world-class computing access and resources.
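For readers reproducing a similar setup, the sketch below shows how a TensorFlow training script can confirm that the GPU is visible and enable on-demand memory allocation and mixed precision; the mixed-precision policy is an assumption and is not mentioned in the study.

```python
import tensorflow as tf

# Confirm the GPU (for example, an RTX A6000) is visible to TensorFlow
gpus = tf.config.list_physical_devices("GPU")
print("GPUs available:", gpus)

# Allocate GPU memory on demand instead of reserving it all up front
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

# Assumption: mixed precision to use the GPU's Tensor Cores during training
tf.keras.mixed_precision.set_global_policy("mixed_float16")
```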

The future of care

The researchers found that the model predicts spinal alignment measurements accurately, even in challenging cases involving abnormalities, and it does so across different age groups and spinal regions, indicating that it generalizes to a wide range of clinical scenarios.

Delivering results comparable to those of expert raters, the AI model achieved an 88% reliability score for predicting spinal curvature. It also performed strongly on other spinopelvic measurements, such as pelvic tilt and sacral slope, with predictions differing from manual measurements by an average of just 3.3 degrees.
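The 3.3-degree figure describes how far predictions differ on average from manual measurements; a mean absolute difference of that kind can be computed as in the sketch below, where the sample values are hypothetical.

```python
import numpy as np

def mean_absolute_difference(auto_deg, manual_deg):
    # Average absolute disagreement (degrees) between automated and manual readings
    auto_deg, manual_deg = np.asarray(auto_deg, float), np.asarray(manual_deg, float)
    return np.mean(np.abs(auto_deg - manual_deg))

# Hypothetical pelvic tilt measurements (degrees): model vs. human rater
model_pt  = [14.2, 22.8, 9.5, 31.0]
manual_pt = [12.0, 25.1, 11.3, 28.4]
print(f"Mean absolute difference: {mean_absolute_difference(model_pt, manual_pt):.1f} deg")
```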

Overall, the system successfully analyzed spinal health data in 61% of cases, with some measurements scoring near-perfect reliability of up to 99%. 

The study highlights the potential of AI to streamline clinical workflows, save doctors time by analyzing large volumes of radiographs quickly, and help with diagnosing challenging cases.

However, according to Jokeit, the model requires further development. Bright artifacts on X-rays can compromise segmentation accuracy in patients with medical implants, while reduced image quality in obese patients makes it more difficult to distinguish between anatomical structures.

The researchers plan to explore how other pretrained model architectures, such as Keypoint R-CNN or transformer-based models, could extend the approach to different types of X-rays. They are also focused on gathering more training data, especially for challenging anatomies and patients with implants.

Contact the corresponding author to request the code used in the research.

Read the research, Anatomical landmark detection on bi-planar radiographs for predicting spinopelvic parameters.

Applications for the NVIDIA Academic Grant Program are now open to full-time faculty members at accredited academic institutions using NVIDIA technology to process large-scale datasets, train graph neural networks, and accelerate projects in data analytics, robotics, 6G, federated learning, and smart spaces.

Published by: Michelle Horton