
Revolutionizing Alzheimer’s Detection: The Intersection of Color Fundus Photography and Deep Learning Techniques


This page was generated programmatically; to view the article at its original source, you can visit the link below:
https://pubmed.ncbi.nlm.nih.gov/39748801/
If you wish to have this article removed from our site, please reach out to us.



Objective:

To report the development and performance of 2 distinct deep learning models trained on retinal color fundus photographs to detect Alzheimer disease (AD).


Patients and methods:

Two separate datasets (the UK Biobank and our tertiary academic institution) containing high-quality retinal fundus images from individuals diagnosed with AD and from control subjects were used to develop the 2 deep learning models between April 1, 2021, and January 30, 2024. ADVAS employs a U-Net-based architecture that incorporates retinal vessel segmentation. ADRET uses a self-supervised convolutional neural network modeled on bidirectional encoder representations from transformers (BERT), pretrained on a large dataset of retinal color photographs from the UK Biobank. The models' ability to differentiate AD from non-AD was evaluated by calculating mean accuracy, sensitivity, specificity, and receiver operating characteristic curves. The resulting attention heatmaps were examined for distinctive features.
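The evaluation metrics named above (accuracy, sensitivity, specificity) can be sketched in a few lines. The following is an illustrative implementation with made-up binary labels, not the study's code or data:

```python
def classification_metrics(y_true, y_pred):
    """Return (accuracy, sensitivity, specificity) for binary labels (1 = AD)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # true-positive rate
    specificity = tn / (tn + fp) if (tn + fp) else 0.0  # true-negative rate
    return accuracy, sensitivity, specificity

# Toy example: 8 subjects, 1 = AD, 0 = control.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
print(classification_metrics(y_true, y_pred))  # (0.75, 0.75, 0.75)
```

In practice these quantities (plus the ROC curve) would be computed from the model's predictions on the held-out test sets described above.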


Results:

The self-supervised ADRET model demonstrated greater accuracy than ADVAS in both the UK Biobank (98.27% vs 77.20%; P<.001) and our institutional testing datasets (98.90% vs 94.17%; P=.04). No significant differences were observed between the original and binary vessel segmentations, or between models using both eyes and single-eye models. Attention heatmaps from patients with AD highlighted the areas surrounding small vascular branches as the most relevant to the model's decision-making.
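The abstract does not state which statistical test produced these P values; as one plausible illustration, a standard pooled two-proportion z-test on hypothetical test-set counts of roughly this size yields a P value well below .001 for the UK Biobank comparison:

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts; the abstract reports only the percentages
# (98.27% vs 77.20%), not the test-set sizes.
z, p = two_proportion_ztest(983, 1000, 772, 1000)
print(round(z, 2), p < 0.001)
```

With accuracies this far apart, essentially any reasonable sample size produces a significant result, which is consistent with the reported P<.001.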


Conclusion:

A self-supervised convolutional neural network, modeled on bidirectional encoder representations from transformers and pretrained on a large cohort of retinal color photographs, can effectively screen for symptomatic AD with high accuracy, surpassing U-Net-based models. Before clinical application, this approach requires further validation in larger and more diverse populations, along with integrated methods to standardize fundus images and mitigate imaging-related noise.


