Deep-learning method using ultrasound predicts fetal birth weight

A novel deep-learning technique using ultrasound videos predicts fetal birth weight, according to a study from the University of Amsterdam published on 20 October in Computers in Biology and Medicine.

A team led by Szymon Płotka found that their method, which combines tabular clinical data with features extracted from fetal ultrasound video scans, outperforms current algorithms and performs comparably to clinicians.

“Our method has the potential to be applied in the clinical environment to assist in the selection of the safest type of delivery for both the mother and the child,” Płotka and co-authors wrote.

While ultrasound is the gold standard for assessing fetal growth and development, the modality is subject to operator dependence, which can cause variability in fetal biometric measurements. Accurate measurement of head circumference, biparietal diameter, abdominal circumference, and femur length to predict fetal birth weight is important for proper pregnancy and delivery management.

Płotka and colleagues previously described BabyNet, an automated fetal birth weight prediction method that uses multimodal data and visual data processing. BabyNet is a hybrid model that combines convolutional neural networks and transformers, extending the 3D ResNet-18 architecture with a residual transformer module.
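The paper's code is not reproduced here, but a minimal PyTorch sketch can make the idea of such a hybrid backbone concrete: a 3D ResNet-18 extracts spatiotemporal feature maps from the video, and a residual self-attention block refines them before a regression head predicts a weight. All layer sizes, the placement of the attention block, and the class names below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a hybrid 3D-CNN/transformer video regressor
# (assumed architecture for illustration; not the BabyNet source code).
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18


class ResidualTransformerBlock(nn.Module):
    """Self-attention over flattened spatiotemporal tokens, added residually."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, T, H, W) -> tokens: (batch, T*H*W, channels)
        b, c, t, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)
        q = self.norm(tokens)
        attended, _ = self.attn(q, q, q)
        tokens = tokens + attended                      # residual connection
        return tokens.transpose(1, 2).reshape(b, c, t, h, w)


class HybridVideoRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = r3d_18(weights=None)                 # 3D ResNet-18 backbone
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.transformer = ResidualTransformerBlock(dim=512)
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.head = nn.Linear(512, 1)                   # birth weight in grams

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        x = self.features(video)                        # (B, 512, T', H', W')
        x = self.transformer(x)
        return self.head(self.pool(x).flatten(1))


# Example: a batch of two 16-frame clips at 112 x 112 pixels.
pred = HybridVideoRegressor()(torch.randn(2, 3, 16, 112, 112))
```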

For the current study, the researchers tested the performance of a refined version of BabyNet that adds a dynamic affine feature map transform module, which uses tabular clinical data to improve fetal birth weight estimation. They fed the model fetal ultrasound video scans conducted within 24 hours before delivery, along with relevant clinical indicators, and used the actual birth weight after delivery as the ground truth.
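To illustrate how tabular clinical data can modulate video features in this way, the sketch below shows a generic affine-conditioning module: a small network maps the clinical indicators to per-channel scale and shift values that are applied to the video feature maps. The module name, sizes, and conditioning inputs are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of dynamic affine feature-map conditioning on tabular
# clinical data (assumed design; not the authors' DAFT implementation).
import torch
import torch.nn as nn


class DynamicAffineTransform(nn.Module):
    """Predicts per-channel scale and shift from tabular clinical features."""

    def __init__(self, channels: int, tabular_dim: int, hidden: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(tabular_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * channels),             # scale and shift per channel
        )

    def forward(self, feat: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, T, H, W) video feature maps; tabular: (B, tabular_dim)
        scale, shift = self.mlp(tabular).chunk(2, dim=1)
        scale = scale[:, :, None, None, None]            # broadcast over T, H, W
        shift = shift[:, :, None, None, None]
        return feat * (1 + scale) + shift


# Example: condition 512-channel feature maps on 7 clinical indicators.
daft = DynamicAffineTransform(channels=512, tabular_dim=7)
out = daft(torch.randn(2, 512, 2, 7, 7), torch.randn(2, 7))
```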

The team developed and evaluated the latest version of BabyNet on a clinical dataset of 582 2D fetal ultrasound videos, acquired less than 24 hours before delivery, together with the clinical records of 194 pregnant patients.

In fivefold cross-validation, the team found that the refined BabyNet showed superior performance to both clinicians and the original version of BabyNet, with lower error rates across all three reported metrics:



| Metric | BabyNet (original) | Clinicians | BabyNet (refined) |
| --- | --- | --- | --- |
| Mean absolute error (grams) | 285 | 188 | 179 |
| Root mean squared error (grams) | 374 | 238 | 203 |
| Mean absolute percentage error | 8.5% | 5.4% | 5.1% |
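For reference, the three error metrics in the table are computed from predicted and actual birth weights as sketched below; the numbers in the example are illustrative only, not study data.

```python
# How the three reported error metrics are computed (illustrative values).
import numpy as np

predicted = np.array([3210.0, 2890.0, 3555.0, 3020.0])   # model output, grams
actual = np.array([3150.0, 3005.0, 3480.0, 2950.0])      # delivery weight, grams

mae = np.mean(np.abs(predicted - actual))                      # mean absolute error
rmse = np.sqrt(np.mean((predicted - actual) ** 2))             # root mean squared error
mape = 100.0 * np.mean(np.abs(predicted - actual) / actual)    # mean absolute percentage error

print(f"MAE = {mae:.0f} g, RMSE = {rmse:.0f} g, MAPE = {mape:.1f}%")
```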

Furthermore, the refined version of BabyNet outperformed six other fetal birth weight prediction algorithms.

In a multicenter analysis comparing BabyNet's performance with that of clinicians at four centers, the researchers found that BabyNet performed significantly better at all but one center (p = 0.09 at that center).

Finally, the team reported that BabyNet showed the highest accuracy when measuring fetal abdominal ultrasound video scans, achieving a mean absolute error of 175 grams, a root mean squared error of 200 grams, and a mean absolute percentage error of 5%.

The study authors wrote that video analysis in fetal ultrasound exams has "several" advantages over static images, including the use of 2D spatiotemporal feature representations to improve performance.

“The utilization of short video sequences in our method might obviate the need for expert knowledge and skills for the precise estimation of fetal birth weight, as it did not fully depend on a reference standard plane,” they wrote.

The team called for future studies to have larger datasets with more operators from different skill groups and more scans per operator.

The full study can be found here.
