This page was created programmatically; to read the article in its original location you can visit the link below:
https://www.nature.com/articles/s41566-025-01718-w
and if you wish to remove this article from our website please contact us.
Ultra-thin full-colour 3D holographic waveguide display
Our architecture is designed to provide high-quality full-colour 3D images with large étendue in a compact device form factor to support synthetic aperture holography based on steered waveguide illumination, as shown in Fig. 1. Our waveguide can effectively increase the size of the beam without scrambling the wavefront, unlike diffusers or lens arrays27,45. Moreover, it enables a minimal footprint for beam steering using a MEMS mirror on the input side. For these reasons, waveguide-based steered illumination has been proposed for holographic displays19,46. Existing architectures, however, suffer from two problems: world-side light leakage from the waveguide and chromatic dispersion of the eyepiece lens. Here, we overcome these conventional limitations with two new optical components: the angle-encoded holographic waveguide and the apochromatic holographic eyepiece lens.

a, An illustration of the synthetic aperture waveguide holography principle. The illumination module consists of a collimated fibre-coupled laser, a MEMS mirror that steers the input light angle, and a holographic waveguide. Together, these components operate as a partially coherent backlight for the SLM. The SLM is synchronized with the MEMS mirror and creates a holographic light field, which is focused towards the user's eye by an eyepiece lens. Our design achieves an ultra-thin form factor because it consists only of flat optical elements and does not require optical path length to form an image. The steered illumination mechanism produces a synthetic aperture, which supports a two-orders-of-magnitude larger étendue than that intrinsic to the SLM. Our angle-encoded holographic waveguide and apochromatic holographic lens design solve the bidirectional diffraction noise and chromatic dispersion issues, respectively. b, An exploded view of the schematic and captured photographs of the holographic MR display prototype. The use of thin holographic optics achieves a total optical stack thickness of less than 3 mm (panel to lens). Top- and side-view photographs of the prototype are shown.
Our holographic waveguide shares many characteristics with conventional surface relief grating (SRG)-type waveguides. Specifically, our design uses the same pupil replication principle and architecture, which consists of an in-coupler, an exit-pupil expanding grating and an out-coupler grating. By contrast, our couplers are constructed of uniquely designed volume Bragg gratings (VBGs) instead of SRGs. SRGs used in conventional waveguides47 have a degeneracy that supports multiple modes of diffraction. This causes a light leakage problem, as light is out-coupled bidirectionally, to both the world side and the viewer's eye side. When the waveguide is used for illumination, the leakage enters the eyepiece without being modulated by the SLM, resulting in d.c. noise throughout the entire field of view. This d.c. noise significantly degrades the contrast of the displayed image19, and it is challenging to filter out because it shares the optical path with the signal. By contrast, VBGs exhibit a type of diffraction referred to as Bragg diffraction48,49, where only light that satisfies a specific incident angle within a narrow spectral bandwidth is diffracted, to a single diffraction order with high efficiency. This single-direction diffraction greatly suppresses stray light and ghost images compared with conventional waveguides. In addition, we use an angle-encoded multiplexing strategy to cover the larger steering range, where each grating supports only a portion of the target angular bandwidth at a specific narrow-band wavelength with high efficiency. The multiplexing is implemented by overlapping volume gratings with the same surface pitch but with different slant angles (see Supplementary Note 1 for additional details). This allows us to jointly cover the target angular bandwidth with high efficiency, while supporting three wavelengths of red (638 nm), green (520 nm) and blue (460 nm) without crosstalk.
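The angular and spectral selectivity described above follows the textbook volume-grating Bragg condition, λ = 2nΛ sin θ, with θ measured from the grating planes inside a medium of index n. The sketch below uses purely illustrative parameter values (not the paper's actual grating specifications) to show how detuning the incidence angle shifts the diffracted wavelength, which is the mechanism behind the angle-encoded multiplexing:

```python
import math

def bragg_wavelength(pitch_nm: float, n: float, theta_deg: float) -> float:
    """Wavelength (nm) satisfying Bragg's law (lambda = 2 n Lambda sin theta)
    for a volume grating of period `pitch_nm` in a medium of index `n`,
    with light hitting the grating planes at internal angle `theta_deg`."""
    return 2.0 * n * pitch_nm * math.sin(math.radians(theta_deg))

# Illustrative values only: solve for the grating period that puts the
# green design wavelength (520 nm) on Bragg at a 20-degree internal angle.
n, theta = 1.5, 20.0
pitch = 520.0 / (2.0 * n * math.sin(math.radians(theta)))

# A detuned incidence angle maps to a different Bragg wavelength, so light
# outside the designed angle/wavelength pair is not diffracted efficiently.
on_bragg = bragg_wavelength(pitch, n, theta)
detuned = bragg_wavelength(pitch, n, theta + 5.0)
```

A multiplexed coupler stacks several such gratings, each tuned so that one slice of the steering range satisfies its own Bragg pair.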
Our display architecture achieves an ultra-thin form factor of only 3 mm thickness from the SLM to the eyepiece lens, including a 0.6-mm waveguide and a 2-mm holographic lens. The MEMS mirror in our display steers the illumination angles incident on the SLM, which creates a synthetic aperture (that is, eyebox) of size 9 × 8 mm2, supporting an eyebox volume two orders of magnitude larger than that intrinsic to the SLM. The diagonal field of view of our display is 38° (34.2° horizontal and 20.2° vertical). An in-depth discussion of the full system and additional components is found in the Methods and Supplementary Note 1.
Partially coherent implicit neural waveguide model
Our synthetic aperture waveguide holographic display is partially coherent because its synthetic aperture consists of a scanned set of mutually incoherent apertures. In addition, the instantaneous eyebox exhibits partial coherence due to factors such as millimetre-scale optical path length variations generated by the pupil replication process in the waveguide, mode instability of diode lasers and imperfect polarization management. Neither existing AI-based wave propagation models for coherent wavefronts21,33,34 nor recent waveguide propagation models20 adequately describe the complex behaviour of partially coherent light50,51. In this section, we first develop a partially coherent waveguide model that accurately represents physical optical systems based on implicit neural representations52. Then, we adapt this model to our synthetic aperture waveguide holography setting. Our model can be trained automatically using camera feedback, it generalizes well to unseen spatial frequencies and it is used to produce the high-quality experimental results demonstrated in later sections.
A partially coherent wavefront can be represented by its MI or its Fourier transform, the Wigner distribution function53,54. With our waveguide model, we aim to model the MI of the waveguide, \(J({\bf{r}}_1,{\bf{r}}_2)\), where r1 and r2 are spatial coordinates at the SLM plane. This MI depends on the steered illumination angle, so it must be characterized for each steering angle individually. Representing such a high-resolution, four-dimensional function for many apertures within the synthetic aperture, however, is computationally intractable. Consider a discretized MI for 1,920 × 1,080 spatial SLM coordinates for each of 10 × 10 steering angles: the corresponding ensemble of MIs would require more than 100 terabytes of memory to store. Moreover, it is unclear how to smoothly interpolate these MIs between aperture positions, as they are characterized at discrete positions.
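The intractability claim is easy to check with back-of-the-envelope arithmetic (the single-precision complex storage format assumed here is ours, not stated in the text):

```python
# Discretized mutual intensity J(r1, r2): one complex entry per *pair* of
# SLM pixel coordinates, stored once per steering angle.
pixels = 1920 * 1080            # spatial SLM coordinates
angles = 10 * 10                # steering angles
bytes_per_entry = 8             # complex64: 4-byte real + 4-byte imaginary

total_bytes = pixels**2 * angles * bytes_per_entry
print(f"{total_bytes / 1e12:.0f} TB")   # far beyond 100 TB
```

Even a single steering angle already exceeds 30 TB at this resolution, which is what motivates the compressed, queryable representation introduced next.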
To address these challenges, we introduce a low-rank implicit neural representation for the ensemble of steering-angle-dependent MIs of the waveguide. Specifically, for each steering angle um, m = 1, …, M, the MI is approximated by K coherent spatial modes that are incoherently summed55 as \(J({\bf{r}}_1,{\bf{r}}_2,{\bf{u}})\approx \sum_{k=1}^{K}f_{\rm{WG}}^{\,k}({\bf{r}}_1,{\bf{u}})\,f_{\rm{WG}}^{\,k}({\bf{r}}_2,{\bf{u}})^{\rm{H}}\), where \(f_{\rm{WG}}^{\,k}({\bf{r}}_1,{\bf{u}})\) is a single coherent mode at spatial frequency u, k = 1, …, K indicates the mode index and H denotes the Hermitian transpose. Note that coherent waveguide models are a special case of this partially coherent model, namely, those of rank 1. This low-rank MI representation is inspired by coherence retrieval methods31. Rather than representing the ensemble of low-rank MIs explicitly, however, we introduce a novel waveguide representation based on emerging neural implicit representations37. This neural-network-parameterized representation is more memory efficient than a discretized representation and it is continuous, so it can be queried at any spatial or spatial frequency (that is, aperture) coordinate. Specifically, our implicit neural representation is a multilayer perceptron (MLP) network architecture, \(f_{\rm{WG}}({\bf{r}},{\bf{u}};\varPsi):{\mathbb{R}}^4\to {\mathbb{C}}^K\), that represents the MI ensemble of the waveguide using network parameters Ψ. Once trained, the MLP can be queried with spatial (r) and frequency (u) coordinates as input, and it outputs the K modes of the corresponding MI.
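As a concrete sketch of this representation (the layer sizes and ReLU activation below are our assumptions; the paper's exact MLP architecture is not reproduced here), a network mapping a 4D coordinate (r, u) to K complex mode values can be written in plain NumPy:

```python
import numpy as np

K = 3                                    # number of coherent modes (assumed)
rng = np.random.default_rng(0)

def mlp_params(sizes):
    """Randomly initialized weights for a small fully connected network."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

# Input: (rx, ry, ux, uy) in R^4.  Output: 2K reals = K complex mode values.
params = mlp_params([4, 64, 64, 2 * K])

def f_wg(r, u, params):
    """Query the implicit waveguide model at SLM position r and
    steering-angle (spatial-frequency) coordinate u."""
    h = np.concatenate([r, u])
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)   # ReLU hidden layers
    W, b = params[-1]
    out = h @ W + b
    return out[:K] + 1j * out[K:]        # K complex coherent modes

modes = f_wg(np.array([0.1, -0.2]), np.array([0.3, 0.0]), params)
```

Because the network is continuous in (r, u), the same parameters answer queries at aperture coordinates never seen during training, which is what a discretized MI table cannot do.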
The main benefits of this implicit neural representation over a conventional discrete representation are its memory efficiency (a few tens of megabytes versus terabytes) and, as we demonstrate in the following experiments, the fact that it generalizes better to aperture positions that were not part of the training data. Remarkably, even in a low-étendue setting, our implicit model achieves faster convergence and better accuracy than state-of-the-art CNN-based models built on explicit representations (Fig. 2d), so the implicit model can serve as a drop-in replacement for existing holographic displays. In a large-étendue setting, our implicit neural model is the only one achieving high-quality results.

a, Using our prototype, we capture training and validation datasets consisting of sets of an SLM phase pattern together with the corresponding aperture position and intensity image. The aperture positions are uniformly distributed across the synthetic aperture, enabling model training with large étendue. b, The captured dataset is used to train our implicit neural waveguide model. The parameters of our model are learned using backpropagation (dashed grey line) to predict the experimentally captured intensity images. c, A visualization of two waveguide modes of the trained model, including amplitude and phase, at two different aperture positions. Our model faithfully reconstructs the wavefront emerging from the waveguide, exhibiting the patch-wise wavefront shapes expected from its pupil-replicating nature. d, Evaluation of wave propagation models with varying training dataset sizes for a single aperture (that is, the low-étendue setting). Our model achieves better quality using a dataset size one order of magnitude smaller than state-of-the-art models34. e, Experimentally captured image quality for different wave propagation models in the low-étendue setting. Our model outperforms the baselines, including the ASM60 and the time-multiplexed neural holography model (TMNH)34, by a large margin.
Our implicit waveguide model describes the MI of input light arriving at the SLM plane. The phase of each coherent mode of the MI is then modulated by the corresponding SLM phase pattern ϕm. Each of the modes continues to propagate in free space, by distance z, to the image plane in the scene. We use an off-axis angular spectrum method (ASM)35,56 to implement this free-space wave propagation operator for each coherent mode, shifting the bandwidth across the large étendue. Finally, all propagated modes are incoherently summed over the full synthetic aperture to form the desired intensity, Iz, at distance z.
$$f^{\,k}({\bf{x}},{\bf{u}}^{m},{\phi}^{m})={\mathcal{P}}_{z}\left(f_{\rm{WG}}^{\,k}\left({\bf{r}},{\bf{u}}^{m}\right)e^{i{\phi}^{m}};\,{\bf{u}}^{m}\right),$$
(1)
$$I_{z}({\bf{x}})={\left\langle{\left\vert\,f^{\,k}\left({\bf{x}},{\bf{u}}^{m},{\phi}^{m}\right)\right\vert}^{2}\right\rangle}_{k,m},$$
(2)
where \({\mathcal{P}}_{z}(\cdot\,;{\bf{u}})\) is the coherent wave propagation operator with propagation distance z through the aperture centred at u, and \(\langle\cdot\rangle\) is the mean operator denoting the incoherent sum of modes. As detailed in the Methods, to optimize experimental image quality, the wave propagation operator \({\mathcal{P}}_{z}\) also includes several learned components, including pupil aberrations and the diffraction efficiency of the SLM.
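On a toy grid, and with a standard on-axis angular spectrum transfer function standing in for the paper's off-axis, aberration-aware operator, equations (1) and (2) reduce to a few lines of NumPy (all parameter values below are illustrative):

```python
import numpy as np

N, pitch, wl, z = 64, 8e-6, 520e-9, 0.05    # toy grid; illustrative values

def asm_propagate(field, pitch, wl, z):
    """Plain angular spectrum method: FFT, multiply by the free-space
    transfer function, inverse FFT (evanescent components dropped)."""
    fx = np.fft.fftfreq(field.shape[0], d=pitch)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wl**2 - FX**2 - FY**2
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(1)
K, M = 3, 4                                  # modes, steering angles
phi = rng.uniform(0, 2 * np.pi, (M, N, N))   # SLM phase per steering angle
# Stand-in waveguide modes f_WG^k(r, u^m); in the real system these come
# from the trained implicit model.
f_wg = (rng.standard_normal((M, K, N, N))
        + 1j * rng.standard_normal((M, K, N, N)))

# Eq. (1): modulate each mode by the SLM phase and propagate by distance z.
# Eq. (2): incoherent average of propagated mode intensities over k and m.
I_z = np.mean(
    [np.abs(asm_propagate(f_wg[m, k] * np.exp(1j * phi[m]), pitch, wl, z))**2
     for m in range(M) for k in range(K)], axis=0)
```

The key structural point is that the SLM phase multiplies every coherent mode identically, while the squared magnitudes, not the fields, are averaged, which is what makes the image formation partially coherent.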
Using these equations, we can map phase patterns ϕm shown on the SLM with steering angle um to the image that a user would observe. To assess our partially coherent waveguide model against state-of-the-art holographic wave propagation models, we capture a dataset from our display prototype consisting of triplets, each containing an SLM phase pattern, an aperture position and the corresponding intensity at the image plane (Fig. 2a). We then train models to predict the experimentally captured intensity images, given the known input phase patterns and aperture positions.
To evaluate and compare waveguide models in a large-étendue setting, we train ours along with baseline models on a dataset captured at 72 aperture positions with our prototype. We evaluate the generalization capabilities of these models on a test set containing 9 aperture positions that were not part of the training set. Quantitative results are presented in Table 1. Previously proposed explicit CNN models generalize poorly to the unseen aperture positions. Importantly, they take more than 2 days of training to converge, which is considerably impractical. Our implicit waveguide model is smaller in size and shows better generalization capabilities than explicit models, while also converging much faster. Moreover, when used in a partially coherent configuration, the accuracy of our model is drastically improved over all coherent models, as seen by the high quality achieved on both training and test sets when used with three or six modes.
Figure 2a illustrates the data collection aspect of our approach: we display a sequence of phase patterns, ϕm, on the SLM and capture corresponding intensity images, Im, with a camera focused on the image plane. The implicit neural model is trained on these data, as shown in Fig. 2b. For this purpose, we simulate the forward image formation given the phase patterns ϕm and a random set of initialized model parameters. A loss function \({\mathcal{L}}\) measures the mean-squared error between simulated and captured intensity images. The backpropagation algorithm is applied to learn all model parameters, including the partially coherent waveguide model. This model comprises a set of coherent modes \(f_{\rm{WG}}^{\,k}\), each being a complex-valued image at the SLM plane with amplitude and phase, that can be queried at the spatial frequencies corresponding to the MEMS steering angles um (Fig. 2c). In a low-étendue setting, that is, when operating with a fixed steering angle um, this implicit neural model achieves higher quality with significantly less training data than state-of-the-art explicit CNN models34 (Fig. 2d). Therefore, our model serves as a drop-in replacement for existing wave propagation models with strictly better performance. Even though the gain in peak signal-to-noise ratio of the implicit model over the explicit model on the validation set at convergence is only a few decibels (Fig. 2d), the generalization capabilities of our implicit model are far superior. This is observed in Fig. 2e, where we show experimental results comparing a model-free free-space wave propagation operator35, the coherent explicit model34 and our partially coherent implicit model. The images used here are representative of a test set unseen during training and show that our model outperforms both baselines by a large margin.
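The calibration objective described above is an ordinary mean-squared-error loss between predicted and captured intensities; in practice the scalar is minimized with backpropagation in an autograd framework. A minimal sketch of just the loss evaluation, on stand-in data:

```python
import numpy as np

def calibration_loss(simulated, captured):
    """Mean-squared error between the model's predicted intensity images
    and the camera captures, averaged over the batch.  Gradient descent on
    this scalar (via autograd) is what trains the waveguide model."""
    simulated = np.asarray(simulated, dtype=float)
    captured = np.asarray(captured, dtype=float)
    return np.mean((simulated - captured) ** 2)

rng = np.random.default_rng(2)
captured = rng.uniform(0, 1, (4, 32, 32))    # stand-in camera captures I^m
simulated = captured + 0.01 * rng.standard_normal(captured.shape)
loss = calibration_loss(simulated, captured)
```

Because the forward model (implicit waveguide query, SLM modulation, ASM propagation, incoherent sum) is differentiable end to end, the same gradients also reach the learned pupil-aberration and efficiency terms mentioned earlier.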
The 3D eyebox describes the volume within which a hologram is visible to the observer's eye. A large eyebox is crucial for ensuring high image quality when the eye moves and for making any display accessible to a diverse set of users with different binocular characteristics. Figure 3a illustrates the supported eyebox volume of our system (green volume), which is two orders of magnitude larger than that intrinsic to the SLM without steered illumination (orange volume). Moreover, our CGH framework fully uses the bandwidth corresponding to an eyebox size of 9 × 8 mm, unlike conventional holographic displays16,44,57, which use only a small fraction of the available bandwidth supported by the SLM (red volume). Figure 3b visualizes the phase aberrations of the waveguide system learned by our model at various positions within the supported eyebox. These aberrations vary drastically over the eyebox, making them challenging to learn with approximately shift-invariant CNN-based models. We further validate our model's ability to produce high-quality imagery across an extended étendue and compare its performance with that of conventional holography in Fig. 3c,d. Figure 3c shows experimentally captured light fields of a two-dimensional resolution chart image placed at optical infinity over the extended eyebox in the top row, while the insets at the bottom present the results at specific positions, indicated by corresponding colours. As expected, conventional holography provides a very small eyebox, which restricts the image to being visible only from the centre position. Our approach supports a significantly larger eyebox with high uniformity for transversely moving eye positions. Figure 3d demonstrates longitudinal eyebox expansion.
Conventional holography suffers from vignetting, and naive pupil-steered holography25 does not account for the axial shift of the eye, resulting in inconsistent image overlap. In addition, variations in the waveguide output and aberrations across different steering states significantly degrade image quality, even when the issue of inconsistent image overlap is addressed, as shown in the third row of Fig. 3d. Our method generates robust, high-quality imagery as the eye moves along the optical axis. Note that the pupil position denoted with a blue marker was not used during model training, yet our model achieves comparable image quality and consistent interpolation of parameters, demonstrating its generalization capabilities.

a, Visualizations of the supported étendue (3D eyebox). Conventional holography, such as smooth phase methods (red volume at the origin), supports only a portion of the eyebox that the SLM can produce (intrinsic étendue, orange volume). Our system supports a two-orders-of-magnitude larger eyebox than conventional holographic display systems (green volume). b, Visualization of our learned model across the expanded étendue. For better visualization and interpolation capabilities, refer to the Supplementary Video. c, Experimentally captured resolution chart images with expanded eyebox. Our system supports an eyebox size of 9 × 8 mm, which is significantly larger than the display-limited eyebox of conventional holographic displays. Using our implicit model, we can efficiently calibrate the system, achieving high image quality uniformly across the extended eyebox, as shown in the insets. d, Longitudinal eyebox expansion results. Markers show the corresponding pupil positions in the octahedron shown in a.
CGH framework for synthetic aperture waveguide holography
At runtime, a phase-retrieval-like CGH algorithm computes one or several phase patterns that are displayed on the SLM to create a desired intensity image, volume or light field. In the large-étendue setting, conventional 3D content representations, such as point clouds, multilayer images or polygons40,41, are infeasible because the large synthetic aperture requires view-dependent effects to be modelled. For this reason, a light field L is a natural way to represent the target content in this setting. Existing light-field-based CGH algorithms aim to minimize the error between each ray of the target light field and the hologram converted into a light field representation34,43,44, typically using variants of the short-time Fourier transform. Supervising the optimized SLM phase on light rays, however, is not physically meaningful, because the notion of a ray does not exist in physical optics at the scale of the wavelength of visible light.
To address this issue, we propose a new light-field-based CGH framework. Unlike existing algorithms that rely on ray-based representations for light-field holograms or use random pupils in the Fourier domain58, we supervise our optimization in a way that reflects the wave nature of light more accurately, by incorporating partial coherence and phase continuity at the target field. For this purpose, we formulate the physically correct partially coherent image formation of our hologram to simulate an image passing through the user's pupil, Iholo, and compare it with the corresponding image simulated incoherently from the light field, Ilf, as
$$I_{\rm{lf}}^{\,\mu}\left({\bf{x}}\right)={\int}_{\!{\bf{u}}}L\left({\bf{x}},{\bf{u}}\right)\,{\bf{1}}_{\mu}\left({\bf{u}}\right)d{\bf{u}},$$
(3)
$$I_{\rm{holo}}^{\,\mu}\left({\bf{x}}\right)={\left\langle{\left\vert{\mathcal{F}}^{-1}\left(\,{\mathcal{F}}\left(f^{\,k}({\bf{x}},{\bf{u}}^{m},{\phi}^{m})\right){\bf{1}}_{\mu}\left({\bf{u}}\right)\right)\right\vert}^{2}\right\rangle}_{k,m},$$
(4)
$$\mathop{\min}\limits_{{\phi}^{m=1\ldots M}}\sum_{\forall\mu\subset{\bf{U}}}{\mathcal{L}}\left(I_{\rm{lf}}^{\,\mu}\left({\bf{x}}\right),I_{\rm{holo}}^{\,\mu}\left({\bf{x}}\right)\right),$$
(5)
where 1μ is the indicator function for a set of subapertures μ (a subaperture is a small aperture state spanning the pupil positions closest to the corresponding light field view), and U is the set of all possible subaperture combinations within our synthetic aperture. We thus aim to optimize phase patterns for any possible pupil position, diameter and shape of the user simultaneously. For this purpose, we randomly sample a batch of subaperture configurations in each iteration of our optimization routine, with the loss function \({\mathcal{L}}\) measuring the mean-squared error for each subset of the apertures within the batch (see the Methods and Supplementary Note 5 for additional details).
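The stochastic subaperture supervision of equations (3) to (5) can be sketched as follows. Disc-shaped Fourier-domain masks, a random hologram field and random light-field targets stand in for the real system, and only a single loss evaluation is shown; the phase update itself would come from autograd in practice:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 32
# Stand-in hologram field f^k at the pupil-conjugate plane (one mode shown).
field = np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))

def subaperture_mask(center, radius, N):
    """Indicator 1_mu over Fourier-domain coordinates: a disc of pupil
    samples around `center`, modelling one candidate eye-pupil state."""
    fy, fx = np.meshgrid(np.arange(N) - N // 2, np.arange(N) - N // 2,
                         indexing="ij")
    return ((fx - center[0])**2 + (fy - center[1])**2 <= radius**2).astype(float)

def seen_through_pupil(field, mask):
    """Eq. (4) for a single coherent mode: filter the field's spectrum by
    the subaperture indicator and take the intensity of the result."""
    F = np.fft.fftshift(np.fft.fft2(field))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F * mask)))**2

# One iteration's loss: average MSE over a randomly sampled batch of
# subaperture configurations, each compared with its light-field target.
targets = {r: rng.uniform(0, 1, (N, N)) for r in (4, 6, 8)}  # stand-in I_lf^mu
batch = [((rng.integers(-8, 9), rng.integers(-8, 9)), r) for r in (4, 6, 8)]
loss = np.mean([np.mean((seen_through_pupil(field, subaperture_mask(c, r, N))
                         - targets[r])**2) for c, r in batch])
```

Sampling the pupil position and radius anew at every iteration is what lets a single set of phase patterns serve any pupil position, diameter and shape at once.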
In Fig. 4, we show experimentally captured 3D holograms. Figure 4a shows photographs captured at focal distances of 0 D (∞), 1.5 D (0.67 m) and 2.5 D (0.4 m) for two different camera positions within the eyebox (that is, shifted laterally by 4.5 mm). Moreover, we compare a conventional 3D CGH algorithm57, the state-of-the-art light-field-based CGH algorithm34 and our CGH framework for all of these settings. Our results achieve the highest quality, exhibiting the best contrast and sharpness, and they demonstrate clear 3D refocusing capabilities as well as view-dependent parallax. In Fig. 4b, we capture a scene from a single camera position and a fixed focus with a varying camera pupil diameter. As expected, the in-focus part of the 3D scene remains sharply focused without the conventional resolution degradation, while the out-of-focus blur, that is, the depth-of-field effect, gets stronger with increasing pupil diameter. The SLM phase patterns do not have to be recomputed for different pupil positions, diameters or shapes, as all possible configurations are intrinsically accounted for by our unique CGH framework. All diffraction orders created by the SLM are jointly optimized to reduce artefacts59.

a, Comparison of holographic rendering methods using experimentally captured results with different focus states (far: 0 D, middle: 1.5 D, near: 2.5 D) and pupil positions. b, Experimental results with various pupil sizes demonstrate robust image quality in the focused object (left insets), with correctly represented depth of field (right insets) according to the pupil size. RGB channel images were captured separately at the same wavelength and merged to enhance the visual perception of the 3D effect and image quality (pseudo-colour).
