Revolutionizing Alzheimer’s Detection: The Intersection of Color Fundus Photography and Deep Learning Techniques


This page was generated programmatically; to view the article at its original source, you can visit the link below:
https://pubmed.ncbi.nlm.nih.gov/39748801/
If you wish to have this article removed from our site, please contact us.



Objective:

This report describes the development and performance of two distinct deep learning models trained on retinal color fundus photographs to detect Alzheimer disease (AD).


Patients and methods:

Two separate datasets (the UK Biobank and our tertiary academic institution), each containing high-quality retinal fundus images from individuals diagnosed with AD and from control participants, were used to develop the two deep learning models between April 1, 2021, and January 30, 2024. ADVAS employs a U-Net-based architecture that incorporates retinal vessel segmentation. ADRET uses a self-supervised convolutional neural network modeled on bidirectional encoder representations from transformers (BERT), pretrained on a large dataset of retinal color photographs from the UK Biobank. The models' ability to distinguish AD from non-AD was evaluated with mean accuracy, sensitivity, specificity, and receiver operating characteristic curves, and the resulting attention heatmaps were examined for distinguishing features.
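The evaluation metrics named above (accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve) can be sketched in plain Python. This is an illustrative implementation only, not the study's evaluation code, and the sample data are invented for demonstration.

```python
from typing import List, Tuple


def confusion_counts(y_true: List[int], y_pred: List[int]) -> Tuple[int, int, int, int]:
    """Return (TP, TN, FP, FN) for binary labels (1 = AD, 0 = control)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn


def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    return (tp + tn) / (tp + tn + fp + fn)


def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: fraction of AD cases correctly flagged."""
    return tp / (tp + fn)


def specificity(tn: int, fp: int) -> float:
    """True-negative rate: fraction of controls correctly cleared."""
    return tn / (tn + fp)


def roc_auc(y_true: List[int], scores: List[float]) -> float:
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the probability that a random AD case scores above a random control,
    counting ties as half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


# Illustrative labels and model scores (not study data).
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]
y_pred = [1 if s >= 0.5 else 0 for s in scores]  # threshold at 0.5
tp, tn, fp, fn = confusion_counts(y_true, y_pred)
```

Thresholded predictions yield the accuracy/sensitivity/specificity triple, while the AUC uses the raw scores and is threshold-free, which is why abstracts typically report both.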


Results:

The self-supervised ADRET model achieved higher accuracy than ADVAS in both the UK Biobank (98.27% vs 77.20%; P<.001) and our institutional test datasets (98.90% vs 94.17%; P=.04). No significant differences were observed between the original and binarized vessel-segmentation inputs, or between models using both eyes and single-eye models. Attention heatmaps from patients with AD highlighted the regions around small vascular branches as most relevant to the model's decision.
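A reported accuracy gap like 98.27% vs 77.20% can be checked for significance with a standard two-proportion z-test. The abstract does not report the test-set sizes, so the counts below are illustrative assumptions (n = 500 per model), not figures from the study.

```python
from math import sqrt
from statistics import NormalDist


def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> tuple:
    """Two-sided two-proportion z-test using the pooled standard error.

    x1/n1 and x2/n2 are counts of correct classifications over test-set sizes.
    Returns (z statistic, two-sided p-value under the normal approximation).
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)               # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided tail area
    return z, p_value


# Hypothetical test-set sizes: 491/500 ≈ 98.2% vs 386/500 = 77.2% correct.
z, p = two_proportion_z_test(491, 500, 386, 500)
```

With any plausible test-set size in the hundreds, a gap this large gives P well below .001, consistent with the reported result; the P=.04 comparison on the smaller institutional dataset is correspondingly more fragile.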


Conclusion:

A self-supervised convolutional neural network, modeled on bidirectional encoder representations from transformers and pretrained on a large cohort of retinal color photographs, can effectively screen for symptomatic AD with high accuracy, outperforming U-Net-based models. Clinical application will require further validation in larger and more diverse populations, along with integrated pipelines to standardize fundus images and mitigate imaging-related noise.
