Categories: Photography

Revolutionizing Alzheimer’s Detection: The Intersection of Color Fundus Photography and Deep Learning Techniques


This page was generated programmatically; to view the article at its original source, visit the link below:
https://pubmed.ncbi.nlm.nih.gov/39748801/
If you wish to have this article removed from our site, please contact us.



Objective:

This report describes the development and performance of two distinct deep learning models trained on retinal color fundus photographs to identify Alzheimer disease (AD).


Patients and methods:

Two independent datasets (the UK Biobank and our tertiary academic institution) containing high-quality retinal fundus photographs from individuals diagnosed with AD and from control participants were used to develop the two deep learning models between April 1, 2021, and January 30, 2024. ADVAS employs a U-Net-based architecture that incorporates retinal vessel segmentation. ADRET uses a self-supervised convolutional neural network modeled after bidirectional encoder representations from transformers, pretrained on a large set of retinal color photographs from the UK Biobank. The models' ability to differentiate AD from non-AD was evaluated with mean accuracy, sensitivity, specificity, and receiver operating characteristic curves. The resulting attention heatmaps were examined for distinctive characteristics.
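The evaluation metrics named above can be computed directly from a model's binary AD/non-AD predictions. The sketch below is illustrative only (not the authors' code): it derives accuracy, sensitivity, and specificity from confusion-matrix counts, using hypothetical labels where 1 = AD and 0 = control.

```python
def binary_metrics(y_true, y_pred):
    """Return (accuracy, sensitivity, specificity) for 0/1 label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    # Sensitivity: fraction of true AD cases the model flags.
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    # Specificity: fraction of controls the model correctly clears.
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return accuracy, sensitivity, specificity

# Hypothetical example: 1 = AD, 0 = control
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
acc, sens, spec = binary_metrics(y_true, y_pred)
```

Varying the classification threshold over a model's output scores and recomputing sensitivity against (1 − specificity) at each threshold traces out the receiver operating characteristic curve used in the study.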


Results:

The self-supervised ADRET model achieved higher accuracy than ADVAS in both the UK Biobank (98.27% vs 77.20%; P<.001) and our institutional test set (98.90% vs 94.17%; P=.04). No significant differences were observed between original and binary vessel-segmentation inputs, or between models using both eyes and single-eye models. Attention heatmaps from patients with AD highlighted the areas surrounding small vascular branches as most relevant to the model's decision-making.


Conclusion:

A self-supervised convolutional neural network, modeled on bidirectional encoder representations from transformers and pretrained on a large cohort of retinal color photographs, can effectively screen for symptomatic AD with high accuracy, surpassing U-Net-pretrained models. Before this approach can be applied in clinical settings, it requires additional validation in larger and more diverse populations, along with integrated methods to standardize fundus images and mitigate imaging-related noise.



Published by fooshya