
Revolutionizing Alzheimer’s Detection: The Intersection of Color Fundus Photography and Deep Learning Techniques


Source: https://pubmed.ncbi.nlm.nih.gov/39748801/



Objective:

This report describes the development and performance of 2 novel deep learning models trained on retinal color fundus images to detect Alzheimer disease (AD).


Patients and methods:

Two separate datasets (the UK Biobank and our tertiary academic institution) containing high-quality retinal images from individuals diagnosed with AD and from control subjects were used to develop the 2 deep learning models between April 1, 2021, and January 30, 2024. ADVAS employs a U-Net-based framework that incorporates retinal vessel segmentation. ADRET uses a self-supervised convolutional neural network, modeled after bidirectional encoder representations from transformers (BERT), pretrained on a large dataset of retinal color photographs from the UK Biobank. The models' ability to differentiate AD from non-AD was evaluated by calculating mean accuracy, sensitivity, specificity, and receiver operating characteristic curves. The resulting attention heatmaps were examined for distinguishing features.
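
For illustration, the metrics named above can be computed from a classifier's predicted probabilities as in the brief sketch below. This is a minimal example using scikit-learn, not the authors' code; y_true and y_score are hypothetical placeholders for test labels and model-predicted AD probabilities.

```python
# Minimal sketch of the evaluation described above: accuracy, sensitivity,
# specificity, and ROC curve/AUC for a binary AD vs. non-AD classifier.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_curve, roc_auc_score

y_true = np.array([0, 1, 1, 0, 1, 0])                 # 1 = AD, 0 = control (toy labels)
y_score = np.array([0.1, 0.9, 0.8, 0.3, 0.7, 0.2])    # model-predicted AD probability
y_pred = (y_score >= 0.5).astype(int)                  # threshold predictions at 0.5

accuracy = accuracy_score(y_true, y_pred)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                           # true-positive rate
specificity = tn / (tn + fp)                           # true-negative rate
fpr, tpr, thresholds = roc_curve(y_true, y_score)      # points on the ROC curve
auc = roc_auc_score(y_true, y_score)

print(f"accuracy={accuracy:.3f} sensitivity={sensitivity:.3f} "
      f"specificity={specificity:.3f} AUC={auc:.3f}")
```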


Results:

The self-supervised ADRET model demonstrated greater accuracy than ADVAS in both the UK Biobank (98.27% vs 77.20%; P<.001) and our institutional (98.90% vs 94.17%; P=.04) testing datasets. No significant differences were observed between original images and binary vessel segmentations, or between models using both eyes and those using a single eye. Attention heatmaps from patients with AD highlighted regions around small vascular branches as the most relevant to the model's decision.
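
The abstract does not state how the attention heatmaps were produced; one widely used technique for visualizing which image regions drive a convolutional classifier's prediction is Grad-CAM, sketched below under the assumption of a PyTorch model. The names model, target_layer, and fundus_tensor are hypothetical, and this is an illustration of the general technique rather than the authors' implementation.

```python
# Hedged illustration: Grad-CAM heatmap for a CNN classifier on a fundus image.
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, target_class):
    """Return a heatmap (H x W, values in [0, 1]) for one input image tensor (C, H, W)."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["value"] = output.detach()          # feature maps at target_layer

    def bwd_hook(_, grad_in, grad_out):
        gradients["value"] = grad_out[0].detach()       # gradients w.r.t. those maps

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    model.eval()
    logits = model(image.unsqueeze(0))                  # shape: (1, num_classes)
    model.zero_grad()
    logits[0, target_class].backward()                  # backprop the chosen class score

    h1.remove()
    h2.remove()

    acts = activations["value"][0]                      # (C, h, w)
    grads = gradients["value"][0]                       # (C, h, w)
    weights = grads.mean(dim=(1, 2))                    # per-channel importance
    cam = F.relu((weights[:, None, None] * acts).sum(dim=0))
    cam = F.interpolate(cam[None, None], size=image.shape[-2:],
                        mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Usage (hypothetical): heatmap = grad_cam(model, model.layer4, fundus_tensor, target_class=1)
```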


Conclusion:

A self-supervised convolutional neural network, modeled after bidirectional encoder representations from transformers and pretrained on a large cohort of retinal color photographs, can screen for symptomatic AD with high accuracy, outperforming U-Net-based models. Before clinical application, this approach requires further validation in larger and more diverse populations, together with integrated methods to harmonize fundus images and mitigate imaging-related noise.
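
As one illustration of the kind of image harmonization the conclusion calls for, the sketch below applies a standard preprocessing step (CLAHE contrast normalization and resizing with OpenCV) to a fundus photograph. This is a generic technique commonly used for fundus images, not a method described in the paper.

```python
# Sketch of a common fundus-preprocessing step: resize to a fixed resolution and
# apply CLAHE to the lightness channel to reduce illumination variability.
import cv2
import numpy as np

def preprocess_fundus(path, size=512):
    img = cv2.imread(path)                               # BGR fundus photograph
    img = cv2.resize(img, (size, size), interpolation=cv2.INTER_AREA)
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)           # work in Lab color space
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l = clahe.apply(l)                                    # equalize lightness only
    img = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_Lab2BGR)
    return img.astype(np.float32) / 255.0                 # scale to [0, 1]
```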


