The Naked Eye 3D LED display lets people watch a large 3D LED screen without wearing glasses. It is a new type of LED screen; compared with a traditional LED screen, its defining difference is the realistic 3D visual experience. Customized 3D video integrates the LED display seamlessly into your building, breaking through the limitation of using LED displays only for advertising and instead helping to build a new public multimedia space.
By displaying realistic high-definition 3D videos, these glasses-free 3D screens attract large crowds, and viewers proactively share the footage on social platforms. That brings wider dissemination on the Internet, which in turn draws more and more people to come and watch.
For example, South Korea's SM TOWN 3D wave screen is a well-known glasses-free 3D LED display. It became an outstanding example of LED display application in 2020 and has been widely shared.
Gradually, well-known building owners and famous companies began to cooperate with leading LED display manufacturers to commission custom Naked Eye 3D LED displays.
A 900 sqm LianTronics L-shaped curved video wall in China has gone viral on social media, as it appears to show a ‘3D’ spaceship emerging from the screen. Writing across multiple social media accounts, immersive art entrepreneur and LinkedIn influencer Dorothy Di Stefano, who was featured in AV Magazine in July, wrote: “The effect of depth, which gives the illusion that the spaceship is flying out of the building, is achieved without special devices and glasses but can only be viewed from a certain angle.”
Principle of 3D display: below, we briefly introduce how it works. The human brain is an extremely complex nervous system, and everything the human eye sees is actually three-dimensional.
A person's two eyes actually see two slightly different pictures, a subtle difference produced by the small distance between the two eyeballs. This difference allows the brain to calculate the spatial positions of objects, so we can judge their distance and size; this is the sense of 3D space.
The ordinary 3D display exploits this principle of showing different images to the left and right eyes. In a 3D movie, for example, glasses separate the content seen by the viewer's left and right eyes.
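The geometry behind this binocular depth sense can be sketched numerically. Below is a toy stereo-triangulation model (the function name and the specific numbers are ours, purely for illustration): depth follows from similar triangles over the interocular baseline and the disparity between the two eyes' images.

```python
def depth_from_disparity(focal_mm: float, baseline_mm: float, disparity_mm: float) -> float:
    """Distance to the object, from similar triangles: z = f * b / d."""
    if disparity_mm <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_mm * baseline_mm / disparity_mm

# A nearer object produces a larger disparity, hence a smaller depth estimate.
# (17 mm eye focal length and 65 mm interocular distance are typical textbook values.)
near = depth_from_disparity(focal_mm=17, baseline_mm=65, disparity_mm=2.0)
far = depth_from_disparity(focal_mm=17, baseline_mm=65, disparity_mm=0.5)
```

This inverse relationship between disparity and distance is exactly the cue that both 3D glasses and glasses-free screens manipulate.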
One key is to make good use of reference objects. The screen constructs a 3D effect with the help of the distance, size, shadow, and perspective relationships of the reference objects, just as painters can use pencils to draw 3D images on paper, as in the sketch below.
Below, we divide an ordinary picture into several layers with white lines and then let the animated part “break through” the white lines, covering elements on other layers; the parallax of the eyes then completes the 3D illusion.
The 3D wave screen of the SM building uses the shadow of the background as a static 3D reference line, so that the moving waves have a feeling of breaking through the screen.
This alone is not enough. Have you noticed that the recently popular 3D screens are all angled, curved screens composed of two faces? That is, they use the two walls at a corner: the display folds 90° and plays video material that conforms to the principles of perspective. The left screen displays the left view of the image, and the right screen displays the main (front) view. When people stand in front of the corner, they see the side and front of the object at the same time, producing a realistic 3D effect. The following simplified animation of 3D ocean waves shows the principle.
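The folded-corner idea can be sketched as simple ray geometry. In this top-down toy model (all coordinates are made up for illustration), the left wall lies along the z-axis (plane x = 0), the right wall along the x-axis (plane z = 0), and the corner is at the origin. Each wall shows the point where the viewer-to-object sight line crosses that wall, which is why the two faces must carry two different perspective renderings of the same virtual object.

```python
def project_to_wall(viewer, obj, wall):
    """Intersect the viewer->obj sight line with the wall plane (x = 0 or z = 0)."""
    vx, vz = viewer
    ox, oz = obj
    if wall == "left":            # plane x = 0
        t = vx / (vx - ox)
    else:                         # plane z = 0
        t = vz / (vz - oz)
    return (vx + t * (ox - vx), vz + t * (oz - vz))

viewer = (3.0, 3.0)               # standing in front of the corner
# Two points of a virtual object "inside" the building, one visible on each wall:
on_left_wall = project_to_wall(viewer, (-1.0, 0.5), "left")
on_right_wall = project_to_wall(viewer, (0.5, -1.0), "right")
# on_left_wall lands on x = 0 and on_right_wall on z = 0, as expected.
```

Because the projection depends on the viewer's position, the illusion only holds near the intended standpoint, which is why these screens have a single best viewing direction.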
The most important factor in creating a glasses-free 3D LED screen is the 3D video content. Do you know how important the video material is? Even a flat LED display can produce a good 3D effect with the right content. Please refer to the following video: you can feel the 3D visual effect even on this very common flat outdoor LED display.
3. Leading the new direction of technology
The 3D LED display is a new breakthrough in the field of outdoor display, and the interactive 3D display is also the direction of future screen development.
3D content production costs are high
First of all, the production cost of creative content ranges from 500 US dollars per second to several thousand US dollars per second. Not all customers can afford such high costs.
Each screen requires customized 3D content
For curved glasses-free 3D screens, there is only one direction with the best viewing angle. In addition, 3D content must be adapted to the screen's shape, size, viewing angle, and other parameters, so it is difficult to reuse content across different locations and screens. Each additional large LED screen therefore requires additional time and cost for 3D content production.
Avoid visual fatigue
To keep viewers' enthusiasm for creative content fresh and avoid visual fatigue, the actual broadcast cycle of naked-eye 3D advertisements is usually very short. That makes them unsuitable for the daily commercial placements of outdoor LED media.
1. Enough pixels to create rich details
Outdoor giant screens larger than 500 square meters, which have enough pixels, are very suitable for achieving naked-eye 3D effects.
2. Higher contrast and HDR (high dynamic range)
A naked-eye 3D LED display with a high refresh rate, high grayscale, and high contrast can render the details of dark and bright areas clearly and vividly, and can easily display realistic 3D content. The realistic effect gives the audience an immersive experience.
8. Excellent video material
Excellent playback material impresses people with unexpected visual effects and delivers a highly immersive visual experience. For example, you can tailor video content to the season: play 3D waves and marine life in summer to give people a cool feeling, or feature characters and cartoons everyone knows to immediately arouse people's enthusiasm.
2. Park
The naked-eye 3D screen is installed in a municipal park. The creative focus of the project is the flamingo, which symbolizes love. The screen creates a natural habitat for flamingos and provides an excellent 3D scene: a huge flamingo is attracted by a butterfly flying off the screen, then rushes out of the screen to follow the butterfly into the garden. The creative LED screen on the other side is shaped like a huge wine glass, flowing with crystal-clear liquid.
4. Media facade
The building facade has an original design. One of the most important requirements for a building media facade is that it must not affect the original aesthetic design of the wall, preserving the uniqueness of the building itself. During the day, the pixel density and brightness are sufficient to keep the video content on the screen clearly visible and colorful. In the evening, the naked-eye 3D screen astonishes pedestrians passing by.
5. Crossroads / street corner
A 3D LED display installed on buildings at a crossroads or street corner is an excellent platform for brands and advertising. Pedestrians, drivers, and passengers passing by on the road are the main audience.
(PS: We can only provide you with LED displays that are compatible with 3D video; custom videos need to be made by local companies that create visual effects!)
3D TVs are effectively dead, and consequently, so is the race to deliver glasses-free 3D sets at home. But that doesn't mean the technology is entirely useless. Sony's new Spatial Reality Display (or SR Display), for example, uses eye-tracking technology to render believable 3D objects without the need to wear 3D glasses or put on a VR headset. It's something CG and VR artists could use to preview their work easily. And no, it's not meant for consumers; not at its $5,000 price, anyway.
Sony first previewed the SR Display at CES this year, where it was called the "Eye-Sensing Light Field Display." It's made up of a 15.6-inch 4K LCD; a high-speed vision sensor that tracks eye movement, as well as your position as you walk around the display; and a micro-optical lens laid over the LCD that divides the screen for your left and right eyes to create a stereoscopic image. The SR Display requires a beefy PC, with at least a modern Intel Core i7 CPU and NVIDIA's RTX 2070 Super GPU, to process its complex real-time rendering algorithm. That makes sense, since it's constantly producing two separate 2K images to match your eye movement.
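The "dividing the screen for your left and right eyes" step can be illustrated with a column-interleaving sketch. The function name and the simple even/odd alternation below are our assumptions for illustration (Sony's actual optical mapping is proprietary): the micro-optical lens sends alternating pixel columns to each eye, so the renderer produces one left-eye and one right-eye frame and weaves them together; eye tracking would then re-render both frames as the viewer's head moves.

```python
def interleave_columns(left_frame, right_frame):
    """Weave two equally sized frames: even columns -> left eye, odd -> right eye."""
    assert len(left_frame) == len(right_frame)
    woven = []
    for row_l, row_r in zip(left_frame, right_frame):
        woven.append([
            (row_l if col % 2 == 0 else row_r)[col]
            for col in range(len(row_l))
        ])
    return woven

# One-row toy frames; labels show which eye's image each pixel came from.
left = [["L0", "L1", "L2", "L3"]]
right = [["R0", "R1", "R2", "R3"]]
print(interleave_columns(left, right))  # [['L0', 'R1', 'L2', 'R3']]
```

Halving the horizontal resolution per eye is also why a 4K panel ends up delivering two 2K images, as the article notes.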
For obvious reasons, we couldn't see the Spatial Reality Display in action ahead of this announcement. But Engadget UK Bureau Chief Mat Smith, who previewed it at CES, describes the screen as something like a "potted hologram demo" that was "small, fuzzy and unremarkable." While Sony is talking about it as a glasses-free 3D screen, promotional videos make it seem like a small stage for holographic objects. (It seems vaguely reminiscent of Sega's '90s-era holographic arcade cabinet for Time Traveler, which relied on mirrors to create a holographic optical illusion.)
I"ll reserve full judgement until we see a final version of the Spatial Reality Display in action. It’s already being used by a few companies: Volkswagen says it"s found "considerable usefulness and multiple applications" for the display during its ideation and design process. Sony Pictures Entertainment also used while filming Ghostbusters: Afterlife for pre-visualization for scenes and previewing 3D models. It could be particularly useful for VFX-heavy movies, since it allows filmmakers to get a glimpse of CG effects and 3D models from multiple angles.
Given how difficult it is for creatives to collaborate today, the Spatial Reality Display could also be a useful way to view complex 3D models and scenes remotely. That ties into the company's "3R Technology," its new focus on "reality, real-time and remote" solutions during the pandemic era. That'll include things like volumetric capture technology, as well as its new intelligent image sensors with AI processing, based on a description of Sony's presentation during the CEATEC conference next week.
Glasses-free three-dimensional (3D) displays are one of the game-changing technologies that will redefine the display industry in portable electronic devices. However, because of the limited resolution of state-of-the-art display panels, current 3D displays suffer from a critical trade-off among the spatial resolution, angular resolution, and viewing angle. Inspired by the so-called spatially variant resolution imaging found in vertebrate eyes, we propose a 3D display with spatially variant information density. Stereoscopic experiences with smooth motion parallax are maintained at the central view, while the viewing angle is enlarged at the periphery. This is enabled by a large-scale 2D-metagrating complex that manipulates dot/linear/rectangular hybrid-shaped views. Furthermore, a video rate full-color 3D display with an unprecedented 160° horizontal viewing angle is demonstrated. With a thin and light form factor, the proposed 3D system can be integrated with off-the-shelf flat panels, making it promising for applications in portable electronics.
Inspired by vertebrate eyes, we propose a general approach to 3D display in which spatially variant information is projected based on the frequency of observation. Densely packed views are arranged at the center, while sparsely arranged views are distributed at the periphery. Packing views at a gradient density is conceptually straightforward but nontrivial to realize. First, the angular separation of the views needs to be varied. Second, the irradiance pattern of each view has to be tailored to eliminate overlap between views and thus avoid crosstalk. Third, gaps between views must be avoided to ensure a smooth transition within the field of view (FOV). As a result, views with hybrid dot, line, or rectangle distributions are desirable to achieve gradient density. However, 3D displays based on geometric optics, such as lenticular lenses, microlens arrays, or pinhole arrays, can neither manipulate a gradient view distribution nor expand the FOV.
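The three constraints above (varied angular separation, no overlap, no gaps) can be sketched with a small generator that lays out view centers edge to edge, fine at the center and coarse at the periphery. The specific step sizes below are made-up numbers for illustration, not the paper's design.

```python
def gradient_view_centers(center_step, edge_step, n_half):
    """View-center angles on one side of 0°, with the step growing linearly.

    Because each view spans exactly the step to its neighbor, adjacent views
    tile the FOV edge to edge: no overlap (crosstalk) and no gaps.
    """
    angles, angle = [], 0.0
    for i in range(n_half):
        step = center_step + (edge_step - center_step) * i / max(n_half - 1, 1)
        angle += step
        angles.append(angle)
    return angles

right_half = gradient_view_centers(center_step=2.0, edge_step=14.0, n_half=8)
views = sorted(-a for a in right_half) + [0.0] + right_half  # mirror to the left side
total_fov = 2 * right_half[-1]
```

With the same number of views, stretching the peripheral steps widens the total FOV while the central region keeps its fine angular resolution, which is exactly the trade the foveated layout makes.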
To manipulate the view distribution over a large scale, we design and propose a feasible strategy based on the two-dimensional (2D)-metagrating complex (2DMC). The 2DMCs are proposed to individually control both the propagation direction and the irradiance distribution of the light emerging from each 2D metagrating. As a result, the 3D display system provides high spatial and angular resolution at the central viewing zone, i.e., the most comfortable observing region. Since the periphery viewing zone is less used on most occasions, we suppress the redundant depth information and broaden the FOV to a range comparable to that of a 2D display panel. Furthermore, a homemade flexible interference lithography (IL) system is developed to enable the fabrication of a view modulator with >1,000,000 2D metagratings over a size of >9 inches. With total display information of <4K, a static or video rate full-color 3D display with an unprecedented FOV of 160° is demonstrated. The proposed 3D display system has a thin form factor for potential applications in portable electronic devices.
Generally, the spatial resolution (number of multiview display pixels, Nmul) and the angular resolution (angular separation ∆θ) determine the visual experience provided by a multiview 3D display; together they define the information density, ID. A higher information density provides a higher spatial resolution with more fluid motion parallax. In prior studies, constant information density was provided within the viewing angle by views with the same distribution pattern (Fig. 1a). In contrast, we propose a 3D display with spatially variant information density by precisely manipulating the view distribution into hybrid dot/line/rectangle shapes (Fig. 1b).
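The defining equation for ID is missing from this extract. A plausible form consistent with the surrounding definitions (our hedged reconstruction, not the paper's verbatim formula) is:

```latex
ID = \frac{N_{\mathrm{mul}}}{\Delta\theta}
```

That is, information density, measured in points per degree (PPD), grows with the number of multiview pixels and shrinks as the angular separation between views widens, which matches the claim that a higher ID yields both higher spatial resolution and smoother motion parallax.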
a State-of-the-art glasses-free 3D display with uniformly distributed information. The irradiance distribution pattern of each view is a dot or a line for current 3D displays based on microlens or cylindrical lens array. b The proposed glasses-free 3D display with variant distributed information. The irradiance distribution pattern of each view consists of dots, lines, or rectangles. To make a fair comparison, the number of views (16 views) is consistent with a. c Schematic of a foveated glasses-free 3D display. An LCD panel matches the view modulator pixel by pixel. For convenience, two voxels are shown on the view modulator. Each voxel contains 3 × 3 pixelated 2D metagratings to generate View 1–View 9
3D imaging characteristics | View modulator in Fig. 3e | View modulator in Fig. 5a | View modulator in Fig. 5b
Screen size | 12 cm × 9 cm | 5.4 cm × 5.4 cm | 20.6 cm × 12.9 cm
a Variation of the scaling factor for the periods of the 2D metagratings. The blue dashed line marks an area containing 3 × 3 voxels; the red dashed line marks a voxel. b Microscopic image of the 2DMCs, captured by a laser confocal microscope (OLYMPUS, OLS4100). The red dashed line also marks a voxel. c The irradiance of the view distribution and the intensity distribution along the white dashed line of the views. d The variant information density distribution (blue solid line) and its comparison with two cases of uniformly distributed information. In case A, the angular separation between views is set to 10° with a decreased FOV (green dashed line). In case B, the FOV is kept at 160°, but the information density is greatly reduced (red dashed line). e Images of the numbers “1–9” observed from left to right views. A dinosaur toy adhered to the left corner of the view modulator serves as a reference for the viewing angle. See additional 3D images in Figs. S5 and S7
A shadow mask with hybrid images of numbers is adopted to match the 9-view modulator pixel by pixel. When light from a collimated light-emitting diode (LED) illuminates the prototype, we record the numbers “1–9” projected to each view, as shown in Fig. 3e. The horizontal FOV is 160°, and the vertical FOV is 50° (Visualization 1). The information density is modulated to 80 PPD at the central region and 26.7 PPD at the periphery (Fig. 3d).
For video rate full-color 3D displays, we successively stack a liquid crystal display (LCD) panel, color filter, and view modulator together to keep the system thin and compatible (Fig. 4a). Since most LCD panels are already integrated with a color filter, system integration can be achieved simply by pixel-to-pixel alignment of the 2D-metagrating film with the LCD panel via one-step bonding assembly. The layout of 2DMCs on the view modulator is designed according to the off-the-shelf purchased LCD panel (P9, HUAWEI) (Fig. 4b). To minimize the thickness of the prototype, 2D metagratings are nanoimprinted on a flexible polyethylene terephthalate (PET) film with a thickness of 200 µm (Fig. 4c), resulting in a total thickness of <2 mm for the whole system (Fig. 4d).
a Schematic of the full-color video rate 3D displays that contain an LCD panel, a color filter, and a view modulator. b The microscopic image of the RGB 2DMCs on the view modulator. The red dashed line marks a voxel containing 3 × 3 full-color pixels, and the blue dashed line marks a full-color pixel containing three subpixels for R (650 nm), G (530 nm), and B (450 nm). c Photo of the nanoimprinted flexible view modulator with a thickness of 200 µm. d A full-color, video rate prototype of the proposed 3D display. The backlight, battery, and driving circuit are extracted
a Images of “Albert Einstein” and b “whales” and “lotus leaves” observed from various views with natural motion parallax and color mixing. The number shown in the lower left corner represents the viewing angle of the image. See other 3D images in Fig. S6
Here we achieve a full-color 3D display with significantly suppressed color dispersion in several ways. First, since 2D metagratings are wavelength sensitive, the structure of the 2DMC is designed pixel by pixel according to the wavelength. Second, from the system point of view, the introduction of the color filter significantly filters out the influence of color dispersion. Third, we pre-calibrate the white balance of the prototype, and the displayed images can be pre-processed to further reduce the color dispersion.
Facilitated by the rapid advancement of nano-optics, we have presented a general design strategy for glasses-free 3D display from the view modulation perspective. The view modulator for multiple view projection is no longer bound to a simple conjugate relation between image and object. We proved that the 2DMC can be designed to precisely tailor the view distribution for a gradient view arrangement and an enlarged viewing angle. The view modulator with 2DMCs can be further designed to eliminate crosstalk or increase viewing depth. Moreover, by combining views with a fan-shaped irradiance pattern, a tabletop 3D display system with variant information density can be realized.
In summary, we propose a facile and robust approach to spatially variant information density 3D display with a large-scale 2DMC serving as a view modulator. A homemade flexible IL system is developed to enable the nanopatterning of a view modulator with increased complexity for portable electronic devices. As a result, high angular resolution is preserved in the central region, while a wide viewing angle is maintained. The display information is arranged nonuniformly based on human observing habits. Hence, we demonstrate a full-color, video rate 3D display with a thin form factor. The viewing angle sets a record of 160° for glasses-free 3D displays.
The demonstrated spatially variant information density 3D display opens a new avenue for glasses-free 3D displays by tackling the critical trade-off among the spatial resolution, angular resolution, and viewing angle. We anticipate the ultrawide-FOV foveated 3D display to be used in commercial applications, such as consumer electronic devices.
3D simulations were performed using the finite-difference time-domain (FDTD) method, conducted with Lumerical's FDTD solver. The refractive index of the photoresist was set to 1.476. We used a plane wave source with an incident angle of 30° and a wavelength of 540 nm. We used Bloch and perfectly matched layer boundary conditions for the transverse and longitudinal directions, respectively. The practical 2DMCs were replaced with spatial-multiplexing gratings with multiple periods, ranging from 600 to 1400 nm. The mesh accuracy was chosen as a compromise among accuracy, memory requirements, and simulation time.
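The steering behavior of the simulated gratings can be sanity-checked against the classical grating equation, sin(θm) = sin(θi) + mλ/Λ, using the settings quoted above (540 nm source, 30° incidence, periods from 600 to 1400 nm). This is a back-of-the-envelope check, not a substitute for the full FDTD simulation.

```python
import math

def first_order_angle_deg(period_nm, wavelength_nm=540.0, incidence_deg=30.0, m=-1):
    """Diffraction angle of order m from the grating equation; None if evanescent."""
    s = math.sin(math.radians(incidence_deg)) + m * wavelength_nm / period_nm
    if abs(s) > 1:
        return None  # no propagating order for this period
    return math.degrees(math.asin(s))

# A shorter period steers the -1 order to a steeper angle; sweeping the period
# across the quoted 600-1400 nm range fans views across a wide field of view.
steep = first_order_angle_deg(600.0)     # about -23.6 degrees
shallow = first_order_angle_deg(1400.0)  # about +6.6 degrees
```

This period-to-angle mapping is the lever a metagrating complex uses to aim each view, pixel by pixel.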
1. Nam D, et al. Flat panel light-field 3-D display: concept, design, rendering, and calibration. Proc. IEEE. 2017;105:876–891. doi: 10.1109/JPROC.2017.2686445. [CrossRef]
8. Ni LX, et al. 360-degree large-scale multiprojection light-field 3D display system. Appl. Opt. 2018;57:1817–1823. doi: 10.1364/AO.57.001817. [PubMed] [CrossRef]
9. Lee JH, et al. Optimal projector configuration design for 300-Mpixel multi-projection 3D display. Opt. Express. 2013;21:26820–26835. doi: 10.1364/OE.21.026820. [PubMed] [CrossRef]
11. Yang SW, et al. 162-inch 3D light field display based on aspheric lens array and holographic functional screen. Opt. Express. 2018;26:33013–33021. doi: 10.1364/OE.26.033013. [PubMed] [CrossRef]
12. Okaichi N, et al. Integral 3D display using multiple LCD panels and multi-image combining optical system. Opt. Express. 2017;25:2805–2817. doi: 10.1364/OE.25.002805. [PubMed] [CrossRef]
14. Krebs P, et al. Homogeneous free-form directional backlight for 3D display. Opt. Commun. 2017;397:112–117. doi: 10.1016/j.optcom.2017.04.002. [CrossRef]
15. Wan WQ, et al. Multiview holographic 3D dynamic display by combining a nano-grating patterned phase plate and LCD. Opt. Express. 2017;25:1114–1122. doi: 10.1364/OE.25.001114. [PubMed] [CrossRef]
16. Zhou F, et al. Pixelated blazed gratings for high brightness multiview holographic 3D display. IEEE Photonics Technol. Lett. 2020;32:283–286. doi: 10.1109/LPT.2020.2971147. [CrossRef]
17. Watanabe H, et al. Pixel-density and viewing-angle enhanced integral 3D display with parallel projection of multiple UHD elemental images. Opt. Express. 2020;28:24731–24746. doi: 10.1364/OE.397647. [PubMed] [CrossRef]
18. Zhao ZF, et al. Bionic-compound-eye structure for realizing a compact integral imaging 3D display in a cell phone with enhanced performance. Opt. Lett. 2020;45:1491–1494. doi: 10.1364/OL.384182. [PubMed] [CrossRef]
20. Ting CH, et al. Multi-user 3D film on a time-multiplexed side-emission backlight system. Appl. Opt. 2016;55:7922–7928. doi: 10.1364/AO.55.007922. [PubMed] [CrossRef]
31. Lv GJ, et al. Autostereoscopic 3D display with high brightness and low crosstalk. Appl. Opt. 2017;56:2792–2795. doi: 10.1364/AO.56.002792. [PubMed] [CrossRef]
33. Yang L, et al. Demonstration of a large-size horizontal light-field display based on the LED panel and the micro-pinhole unit array. Opt. Commun. 2018;414:140–145. doi: 10.1016/j.optcom.2017.12.069. [CrossRef]
40. Hu YQ, et al. 3D-Integrated metasurfaces for full-colour holography. Light Sci. Appl. 2019;8:86. doi: 10.1038/s41377-019-0198-y. [PubMed] [CrossRef]
51. Wan WQ, et al. Efficient fabrication method of nano-grating for 3D holographic display with full parallax views. Opt. Express. 2016;24:6203–6212. doi: 10.1364/OE.24.006203. [PubMed] [CrossRef]
It is important to note that this technology does not allow for vertical parallax, meaning the stereoscopic effect is only created along the horizontal axis, from left to right. And while the 3D effect is certainly perceivable while standing still, it is much more effective when the user can move and truly appreciate the parallax.
Within the 3D scene being visualized, content positioned too close to or too far from the viewer may cause image ghosting. This results in a “sweet spot” for content somewhere in the middle that gives the best parallax effect while reducing artifacts.
Facial tracking displays like Sony’s SRD take advantage of the micro lenses mentioned above in addition to a facial tracking camera embedded within the display’s housing. This allows the display to track the position of the user in space and show a view of the 3D subject that would correspond to where they are with respect to the screen. This allows for both horizontal and vertical parallax, giving users more freedom to examine the subject from various angles.
With any of the displays below, one thing is exceedingly clear: they are NOT regular screens. The content intended to be shown MUST be carefully curated and designed to maximize the 3D effect. Below are some principles and suggestions we noted to make 3D visuals more successful.
Construct the overall scene to give the viewer as many visual hints for depth as possible. Things like lighting, movement, and background selection give the brain additional cues that cause us to perceive depth. A high contrast object on a flat background, artificially scaling an object up as it moves toward the user in 3D space, and backgrounds that provide perspective or a horizon line will amplify perceived depth.
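One of the hints above, artificially scaling an object as it approaches, follows directly from perspective: on-screen size is inversely proportional to distance. A minimal sketch (the function name and the sizes/distances are made-up illustrative values):

```python
def apparent_scale(base_size: float, base_distance: float, distance: float) -> float:
    """Scale factor that makes an object appear to have moved to `distance`."""
    return base_size * base_distance / distance

# As the object "flies" from 10 m to 2 m away, scale it up 5x so the brain
# reads the growth as approach rather than as the object inflating in place.
sizes = [apparent_scale(1.0, 10.0, d) for d in (10.0, 5.0, 2.0)]
```

Animating this scale curve alongside lighting and occlusion cues is what sells the "coming out of the screen" moment on an otherwise flat panel.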
Since Charles Wheatstone first invented stereoscopy, the research interest in three-dimensional (3D) displays has extended for 150 years, and its history is as long as that of photography (Charles, 1838). As a more natural way to present virtual data, glasses-free 3D displays show great prospects in various fields including education, military, medical, entertainment, automobile, etc. According to a survey, people spend an average of 5 h every day watching display panel screens. The visualization of 3D images will have a huge impact on improving work efficiency. Therefore, glasses-free 3D displays are regarded as next-generation display technology.
Generally, we assign glasses-free 3D displays into three main categories: holographic 3D displays, volumetric 3D displays and autostereoscopic 3D displays (Geng, 2013). A holographic 3D display is a technology that records both the amplitude and phase information of a real object and reproduces it through specific mediums (e.g., photorefractive polymers) (Tay et al., 2008; Blanche et al., 2010). Furthermore, by using a spatial light modulator that directly modulates the coherent wave, computer-generated hologram systems can be implemented via numerical simulation (Hahn et al., 2008; Sasaki et al., 2014). Currently, powerful acceleration chips or video processors have enabled the reproduction of high-quality 3D holograms at video rates (An et al., 2020; Shi et al., 2021). In the future, real-time holographic 3D displays will have wide applications in mobile displays and AR displays (Peng et al., 2021; Lee et al., 2022). Volumetric 3D display is another technology that generates luminous image points (i.e., voxels) in space via special media, such as trapped particles and fluorescent screens. These image points form 3D graphics that can be observed within 360° (Kumagai et al., 2015; Kumagai et al., 2018; Smalley et al., 2018; Hirayama et al., 2019). Both the holographic 3D display and volumetric 3D display require a large amount of data to provide 3D content, which brings challenges to data processing and transportation.
In contrast, autostereoscopic 3D displays reduce computing costs by discretizing a continuously distributed light field of 3D objects into multiple “views”. The properly arranged perspective views can approximate the 3D images with motion parallax and stereo parallax. Moreover, by modulating the irradiance pattern of each view, only a small number of views are required to reconstruct the light field. A typical autostereoscopic 3D display only needs to integrate two components: an optical element and an off-the-shelf refreshable display panel (e.g., liquid crystal display, organic light-emitting diode display, light-emitting diode display) (Dodgson, 2005). With the advantages of a compact form factor, ease of integration with flat display devices, ease of modulation, and low cost, autostereoscopic 3D displays can be applied in portable electronics and redefine human-computer interfaces. The function of the optical element in an autostereoscopic 3D display is to manipulate the incident light and generate a finite number of views. To improve the display effect, the optical elements also need to modulate the views and the angular separation between views, and are therefore called the “view modulator” in this paper. View modulators represent a special class of optical elements that are used in glasses-free 3D displays for view modulation, such as parallax barriers, lenticular lens arrays, and metagratings.
One of the most critical issues in autostereoscopic 3D displays is how to design view modulators. When we design view modulators, several essential problems need to be considered that are directly related to 3D display performance (Figure 1): 1) To minimize crosstalk and ghost images, the view modulators should confine the emerging light within a well-defined region; 2) To address the vergence-accommodation conflict, the view modulators need to provide both correct vergence and accommodation cues. Vergence-accommodation conflict occurs when the depth of 3D images induced by binocular parallax lies in front of or behind the display screen, whereas the depth recognized by a single eye is fixed at the apparent location of the physical display panel because the image observed by a single eye is 2D (Zou et al., 2015; Koulieris et al., 2017); 3) To achieve a large field of view (FOV), the view modulators need to precisely manipulate light over a large steering angle; 4) For an energy-efficient system, the light efficiency of the view modulators needs to be adequate. In addition to these four important factors that affect the optical performance of 3D displays, there are some additional features that should be addressed in applications; 5) To maintain a thin form factor and be lightweight for portable electronics, the design of view modulators should be elegant with as few layers or components as possible; 6) To solve the tradeoff between spatial resolution, angular resolution, and FOV, the view modulators should manipulate the shape of view for variant information density. 7) In window display applications, the view modulators should be transparent to combine virtual 3D images with physical objects for glasses-free augmented reality display.
Depending on the types of adopted view modulators, autostereoscopic 3D displays can be divided into geometrical optics-based and planar optics-based systems. With regard to geometrical optics-based 3D displays, the most representative architectures are parallax barrier or lenticular lens array-based, microlens array-based and layer-based systems (Ma et al., 2019). The parallax barrier or lenticular lens array was first integrated with flat panels and applied in 3D mobile electronic devices because of the advantages of utilizing existing 2D screen fabrication infrastructure (Ives, 1902; Kim et al., 2016; Yoon et al., 2016; Lv et al., 2017; Huang et al., 2019). For improved display performance, aperture stops were inserted into the system to reduce the crosstalk by decreasing the aperture ratio; however, this strategy comes at the expense of light efficiency (Wang et al., 2010; Liang et al., 2014; Lv et al., 2014). Microlens array-based 3D display, i.e., integral imaging display generates stereoscopic images by recording and reproducing the rays from 3D objects (Lippmann, 1908; Martínez-Corral and Javidi, 2018; Javidi et al., 2020). It can present full motion parallax by adding light manipulating power in a different direction. Recently, a bionic compound eye structure was proposed to enhance the performance of integral imaging 3D display systems. With proper design based on geometric optics, the 3D display prototype can be used to obtain a 28° horizontal, 22° vertical viewing angle, approximately two times that of a normal integral imaging display (Zhao et al., 2020). In another work, an integral imaging 3D display system that can enhance both the pixel density and viewing angle was proposed, with parallel projection of ultrahigh-definition elemental images (Watanabe et al., 2020). 
This prototype display system reproduced 3D images with a horizontal pixel density of 63.5 ppi and viewing angles of 32.8° and 26.5° in the horizontal and vertical directions, respectively. Furthermore, with three groups of directional backlights and a fast-switching liquid crystal display (LCD) panel, a time-multiplexed integral imaging 3D display with a 120° wide viewing angle was demonstrated (Liu et al., 2019). The layer-based 3D display invented by Lanman and Wetzstein (Lanman et al., 2010; Lanman et al., 2011; Wetzstein et al., 2011; Wetzstein et al., 2012) used multiple stacked LCD layers to modulate the light field of 3D objects. This display can provide both vergence and accommodation cues, so viewers experience limited fatigue and dizziness (Maimone et al., 2013). Nevertheless, its FOV is limited by the effective size of the display panel. Moreover, layer-based 3D displays suffer from a trade-off between the depth of field and the complexity of the system (i.e., the number of display layers). In general, geometrical optics-based autostereoscopic 3D displays have the advantages of low cost and thin form factors compatible with 2D flat display panels. However, considerable challenges remain because of the tradeoffs among resolution, FOV, depth cues, depth of field, and form factor (Qiao et al., 2020). Alleviating these tradeoffs and improving image quality to provide more realistic stereoscopic vision is an intriguing avenue for developing next-generation 3D display technology.
Fast-growing planar optics have attracted wide attention in various fields because of their outstanding capability for light control (Genevet et al., 2017; Zhang and Fang, 2019; Chen and Segev, 2021; Tabiryan et al., 2021; Xiong and Wu, 2021). In the field of glasses-free 3D displays, planar optical elements, such as diffraction gratings, diffractive lenses, and metasurfaces, can be used to modulate the light field of 3D objects at the pixel level. With proper design, planar optical elements at the micro- or nanoscale provide superior manipulation of light intensity, phase, and polarization. Therefore, planar optics-based glasses-free 3D displays have several merits, such as reduced crosstalk, no vergence-accommodation conflict, enhanced light efficiency, and an enlarged FOV. Figure 2 shows the development trend of 3D display technologies with regard to the evolution of view modulators. Planar optics are becoming the basis of next-generation 3D display technology because of their outstanding view-modulation flexibility.
FIGURE 2. Schematic of the development of glasses-free 3D displays with regard to the evolution of view modulators. LLA: Lenticular lens array; MLA: Microlens array.
In this review, the critical challenges for glasses-free 3D displays are analyzed. Planar optics suggest a variety of solutions, which are reviewed in the section Glasses-Free 3D Display Based on Planar Optical Elements. As a specific application with appealing features, augmented reality (AR) 3D displays enabled by planar optics are comprehensively introduced in the section Glasses-Free Augmented Reality 3D Display Based on Planar Optical Elements. In addition to their design, the fabrication of view modulators is another challenge that hinders the development of 3D displays. Therefore, in the section Fabrication of Large-Scale Micro/Nanostructures on View Modulators for 3D Displays, we highlight multiple micro/nanofabrication methods for view modulators. Finally, in the section Conclusions and Outlook, the current status of glasses-free 3D displays and glasses-free AR 3D displays is summarized, and future directions and potential applications are suggested.
Diffraction gratings are unique components that can split incident light into many spatial directions simultaneously and have been widely used in light-steering and dispersive devices, such as spectrometers, optical waveguides, and laser resonators (Görrn et al., 2011; Zhang et al., 2019; Zola et al., 2019; Cao et al., 2020; Liu et al., 2020). Fattal et al. employed diffraction gratings in a 3D display and proposed a directional diffractive backlight to produce full-parallax views within a wide FOV (Fattal et al., 2013). The key elements in the backlight were pixelated grating patterns fabricated by electron-beam lithography. Both passive and active prototypes provided 64-view images within a FOV of 90°. The diffractive wide-angle backlight is regarded as a revolutionary 3D display technology (https://www.technologyreview.com/innovator/david-fattal). It has opened up rich opportunities for planar optics-based glasses-free 3D displays.
On this basis, a holographic sampling 3D display was proposed by combining a phase plate with a thin-film-transistor LCD panel (Wan et al., 2017) (Figure 3A). The phase plate modulates the phase information, while the LCD panel provides refreshable amplitude information for the light field. Notably, the period and orientation of the diffraction gratings in each pixel are calculated to form converged beams instead of the (semi)parallel beams of geometrical optics-based 3D displays. As a result, the angular divergence of the target viewpoints (1.02°) is confined close to the diffraction limit (0.94°), leading to significantly reduced crosstalk and ghost images (Figures 3B,C). The researchers further presented a holographic sampling 3D display based on metagratings and demonstrated a video-rate full-color 3D display prototype with sizes ranging from 5 to 32 inches (Figure 3D) (Wan et al., 2020). The metagratings on the view modulator were designed to operate at the R/G/B wavelengths to reconstruct the wavefront at the sampling viewpoints with correct white balance (Figure 3E). By combining the view modulator, an LCD panel, and a color filter, virtual 3D whales were presented, as shown in Figure 3F. To address the vergence-accommodation conflict in 3D displays, a super multiview display was also proposed based on pixelated gratings. Closely packed views with an angular separation of 0.9° provide a depth cue for the accommodation process of the human eye (Wan et al., 2020).
FIGURE 3. (A) Schematic of the proposed holographic sampling 3D display. (B) Photograph of 4 views and the light intensity distribution at the 4 views. (C) 3D images of a car running through trees. (D) Schematic of the full-color video-rate holographic sampling 3D display. (E) The radiation pattern measured from a 16-viewpoint view modulator. (F) 3D images of whales and logos. [(A–C) Reproduced from Wan et al. (2017). Copyright (2021) with permission from Optica Publishing Group. (D–F) Reproduced from Wan et al. (2020). Copyright (2021) with permission from Elsevier B.V.].
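The per-pixel grating calculation described above can be sketched as follows: for normally incident light, the first diffraction order must point from each pixel toward the target viewpoint, which fixes the local grating period and orientation through the grating equation. The geometry and wavelength below are illustrative assumptions, not parameters from the cited papers:

```python
import math

def pixel_grating(px_mm, py_mm, view_mm, wavelength_nm=532.0):
    """Period (nm) and orientation (deg) of the grating in one pixel so
    that the 1st diffraction order of normally incident light points at
    the viewpoint.  Grating equation at normal incidence:
    sin(theta) = lambda / period, with the grating vector along the
    transverse direction toward the viewpoint."""
    dx, dy, dz = (view_mm[0] - px_mm, view_mm[1] - py_mm, view_mm[2])
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    sin_theta = math.hypot(dx, dy) / r          # transverse component
    period_nm = wavelength_nm / sin_theta
    orient_deg = math.degrees(math.atan2(dy, dx))
    return period_nm, orient_deg

# Illustrative: a pixel 100 mm off-axis, viewpoint on-axis 500 mm away.
period, orient = pixel_grating(100.0, 0.0, (0.0, 0.0, 500.0))
print(f"period ~ {period:.0f} nm, orientation {orient:.0f} deg")
```

Because every pixel sits at a different position relative to the viewpoint, the resulting structure is aperiodic across the panel, which is what distinguishes these converging-beam modulators from a uniform grating sheet.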
To summarize, diffraction grating-based 3D displays have the advantages of minimal crosstalk, a reduced vergence-accommodation conflict, a tailorable view arrangement, continuous motion parallax, and a wide FOV. Nevertheless, the experimental diffraction efficiency of binary gratings is approximately 20%, leading to inevitably high power consumption. To overcome this limitation, diffractive lenses and metasurfaces have been employed for 3D displays.
Light efficiency is a crucial parameter in glasses-free 3D display systems. Diffractive lenses with blazed structures can concentrate light into a single diffraction order, thereby offering higher light efficiency in 3D displays than diffraction gratings. As shown in Figures 4A,B, pixelated blazed diffractive lenses are introduced in a 3D display to form four independent convergent views, while the amplitude plate provides the images at these views. The system has the following benefits. First, each structured pixel on the view modulator is calculated from the relative position of the pixel and the viewing points. These accurately calculated aperiodic structures improve the precision of light manipulation, thereby eliminating crosstalk and ghost images. Second, the 4-level blazed diffractive lens greatly increases the diffraction efficiency, from the 20% of the grating-based 3D display to 60% (Zhou et al., 2020). In another work, a view modulator covered with blazed diffractive lenticular lenses was proposed for a multiview holographic 3D display (Hua et al., 2020). This system redirected the diverging rays to shape four extended views with a vertical FOV of 17.8°. In addition, the diffraction efficiency of the view modulator was increased to 46.9% using the blazed phase structures. Most recently, a vector light field display with a large depth of focus was proposed based on an intertwined flat lens, as shown in Figures 4C,D. A grayscale achromatic diffractive lens was designed to extend the depth of focus by 1.8 × 10⁴ times. By integrating the intertwined diffractive lens with a liquid crystal display, a 3D display with crosstalk below 26% was realized over viewing distances ranging from 24 to 90 cm (Zhou et al., 2022).
FIGURE 4. (A) Schematic of a glasses-free 3D display based on a multilevel diffractive lens. (B) 3D images of letters and thoracic cages in the blazed diffractive lens-based 3D display. (C) Schematic of a vector light field display based on a grayscale achromatic diffractive lens. (D) Full-color 3D images of letters and the thoracic cage produced by the intertwined diffractive lens-based 3D display. [(A,B) Reproduced from Zhou et al. (2020). Copyright (2021), with permission from IEEE. (C,D) Reproduced from Zhou et al. (2022). Copyright (2022), with permission from Optica Publishing Group.].
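The efficiency improvement from multilevel blazing quoted above is consistent with scalar diffraction theory, in which an N-level staircase approximation of a blazed profile has a theoretical first-order efficiency of [sin(π/N)/(π/N)]². A quick check of these theoretical upper bounds (measured values, such as the 60% of the 4-level lens, sit below them, as expected for fabricated devices):

```python
import math

def multilevel_efficiency(n_levels):
    """Scalar-theory 1st-order efficiency of an N-level quantized blazed
    grating: eta = [sin(pi/N) / (pi/N)]**2."""
    x = math.pi / n_levels
    return (math.sin(x) / x) ** 2

for n in (2, 4, 8, 16):
    print(f"{n:2d} levels: {multilevel_efficiency(n) * 100:.1f} %")
```

The model predicts roughly 40.5% for a binary (2-level) profile and 81% for a 4-level profile, showing why moving from binary gratings to 4-level blazed structures roughly triples the usable light.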
In summary, coupled with various design approaches, optimized diffractive lenses enable high-quality, full-spectrum imaging (Peng et al., 2015; Heide et al., 2016; Peng et al., 2016; Peng et al., 2019). The design of diffractive lenses for 3D displays bears similarities to their design for imaging and solves the light-efficiency problem of diffraction grating-based 3D displays. An optimized lens features high light efficiency, a wide spectral response, and a large depth of focus, which benefit glasses-free 3D displays in terms of brightness, color fidelity, and viewing depth. However, the minimum feature size of diffractive lenses is generally larger than that of nanogratings due to fabrication limits, resulting in a reduced viewing angle.
Metasurfaces can also be used in 3D displays because of their unprecedented capability to manipulate light fields. In 2013, 3D computer-generated holography (CGH) image reconstruction was demonstrated in the visible and near-infrared range by a plasmonic metasurface composed of pixelated gold nanorods (Huang et al., 2013) (Figures 5A,B). The pixel size of the metasurface hologram was only 500 nm, much smaller than that of holograms generated by spatial light modulators or diffractive optical elements. As a result, a FOV as large as 40° was demonstrated. To correct the chromatic aberration in integral imaging 3D displays, a polarization-insensitive broadband achromatic metalens based on silicon nitride was proposed (Fan et al., 2019) (Figures 5C,D). Each achromatic metalens has a diameter of 14 µm and was fabricated via electron-beam lithography. The focusing efficiency was 47% on average. By arranging 60 × 60 metalenses in a rectangular lattice, a broadband achromatic integral imaging display was demonstrated under white-light illumination. To address the tradeoff between spatial resolution, angular resolution, and FOV, a general approach for foveated glasses-free 3D displays using a two-dimensional metagrating complex was proposed recently (Figure 5E) (Hua et al., 2021). The dot/linear/rectangular hybrid views, shaped by the two-dimensional metagrating complex, form a spatially variant information density. By combining the two-dimensional metagrating complex film with an LCD panel, a video-rate full-color foveated 3D display system with an unprecedented FOV of up to 160° was demonstrated (Figure 5F). Compared with prior work, the proposed system makes two breakthroughs: First, the irradiance pattern of each view can be carefully tailored to avoid both crosstalk and discontinuity between views. Second, the tradeoffs between angular resolution, spatial resolution, and FOV in 3D displays are alleviated.
FIGURE 5. (A) Schematic of a plasmonic metasurface for 3D CGH image reconstruction. (B) Experimental hologram images for different focusing positions along the z direction. (C) Schematic of the broadband achromatic metalens array for a white-light achromatic integral imaging display. (D) Reconstructed images for the cases in which “3” and “D” lie on the same depth plane or on different depth planes, respectively. Scale bar, 100 µm. (E) Schematic of a foveated glasses-free 3D display using the two-dimensional metagrating complex. (F) “Albert Einstein” images in the foveated 3D display system. [(A,B) Reproduced from Huang et al. (2013). Copyright (2021), with permission from Springer Nature. (C,D) Reproduced from Fan et al. (2019). Copyright (2021), with permission from Springer Nature. (E,F) Reproduced from Hua et al. (2021). Copyright (2021), with permission from Springer Nature.].
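Each metalens above imposes a lens phase sampled at subwavelength pitch. A minimal sketch of the standard hyperbolic phase profile, wrapped to 2π as realized by the meta-atoms (the 14 µm diameter matches the cited work, but the focal length and wavelength are assumed here for illustration):

```python
import math

def metalens_phase(r_um, focal_um, wavelength_um):
    """Hyperbolic lens phase phi(r) = (2*pi/lam) * (f - sqrt(r^2 + f^2)),
    wrapped to [0, 2*pi) as realized by discrete meta-atoms."""
    phi = (2 * math.pi / wavelength_um) * (focal_um - math.hypot(r_um, focal_um))
    return phi % (2 * math.pi)

# Illustrative: a 14 um diameter lens, assumed 20 um focal length, 532 nm light.
for r in (0.0, 3.5, 7.0):
    print(f"r = {r:4.1f} um -> phase {metalens_phase(r, 20.0, 0.532):.2f} rad")
```

Achromatic designs such as the cited silicon nitride metalens additionally engineer the dispersion of each meta-atom so that this profile holds across the visible band, which a single wrapped profile at one wavelength cannot do on its own.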
To summarize, metasurfaces provide a solution that maintains both a large FOV and reasonable light efficiency. Moreover, their superior light-manipulation capability enables an inspiring foveated glasses-free 3D display solution to the intrinsic tradeoff between resolution and viewing angle. As with other metamaterial-based photonic devices, however, mass fabrication is the major issue preventing the industrial application of metasurfaces.
Thus far, we have reviewed the research progress of planar optics-based glasses-free 3D displays in three categories: diffraction grating-based, diffractive lens-based, and metasurface-based. Compared with geometric optics-based 3D displays, these displays share common advantages, such as high-precision control at the pixel level, high degrees of design freedom, and compact form factors. On the other hand, they differ in light efficiency, FOV, viewing distance, and fabrication scalability, as listed in Table 1. The diffraction grating-based method offers both a large FOV with continuous motion parallax and good fabrication scalability. Although the bandwidth of a diffraction grating is limited, a full-color display can still be realized by integrating a color filter; as a result, the limited operating bandwidth is not a major obstacle in 3D displays. However, the low light efficiency of binary gratings can be problematic because of the increased power consumption, especially in portable electronics. The diffractive lens-based approach greatly improves the light efficiency. Moreover, through proper design, an intertwined diffractive lens can realize a large viewing distance and broadband spectrum manipulation. Nevertheless, the viewing angle of a diffractive lens-based 3D display is limited by the numerical aperture. The metasurface-based technique has the advantages of medium light efficiency, a large FOV, and a broadband spectral response. Therefore, metasurfaces can provide better 3D display performance in terms of color fidelity. Furthermore, the subwavelength dimensions of metasurfaces ensure flexibility for view manipulation. However, the complexity and difficulty of nanofabrication hinder the application of metasurfaces in large-scale displays.
Most recently, augmented reality (AR), an interactive display technology that fuses the virtual world with reality, has become an active research field attracting broad attention from researchers and investors (Chang et al., 2020; Xiong et al., 2021). Glasses-free AR 3D displays are of special interest because of the huge demand in many applications, such as head-up displays in vehicles, education, and exhibitions. Although near-eye AR displays based on wearable devices can be implemented by various methods, including free-form optics, holographic optical elements, surface relief gratings, and metasurfaces, the realization of glasses-free AR 3D displays is a much harder task because of the uncertain spatial relationship between the display screen and observers. Glasses-free AR 3D displays can be classified as either reflection-type or optical see-through displays. Li et al. adopted a mirror-based pinhole array to demonstrate a reflective AR 3D display system based on an integral imaging display (Li et al., 2019). Recently, they improved the performance of the reflection-type AR 3D system, achieving high definition and high brightness through the use of a reflective polarizer (Li et al., 2021). However, in a reflection-type AR 3D display, virtual images are fused with mirror images of the real scene rather than the real scene itself.
The optical see-through glasses-free AR 3D display allows people to perceive real scenes directly through a transparent optical combiner (Hong et al., 2016; Mu et al., 2020). It is the mainstream among AR 3D display technologies and can be realized using geometric optical elements, holographic optical elements (HOEs), and metagratings. In 2020, a lenticular lens-based light field 3D display system with continuous depth was proposed and integrated into AR head-up display optics (Lee et al., 2020). This integrated system can generate stereoscopic virtual images with a FOV of 10° × 5°.
The HOE is an optical component that produces holographic images via diffraction and is commonly used in transparent displays, 3D imaging, and certain scanning technologies. HOEs share the same optical functions as conventional optical elements, such as mirrors, microlenses, and lenticular lenses. On the other hand, they also have the unique advantages of high transparency and high diffraction efficiency. On this basis, integral imaging displays have been integrated with AR displays based on lenticular lens or microlens-array HOEs (Li et al., 2016; Wakunami et al., 2016). Moreover, an HOE can be recorded by wavelength multiplexing for full-color imaging (Hong et al., 2014; Deng et al., 2019) (Figure 6A). A high transmittance was achieved at all wavelengths (Figures 6B,C). A 2D/3D convertible AR 3D display was further proposed based on a lens-array HOE, a polymer-dispersed liquid crystal film, and a projector (Zhang et al., 2019). Controlled by voltage, the film can switch the display mode between a 2D display and an optical see-through 3D display.
FIGURE 6. (A) Working principle of a lens-array HOE used in an optical see-through AR 3D display system. (B) Transmittance and reflectance of the recorded lens-array HOE. (C) 3D virtual image of the lens-array HOE-based full-color AR 3D display system. (D) Schematic of spatially multiplexed metagratings for a full-color glasses-free AR 3D display. (E) Transmittance of the holographic combiner based on pixelated metagratings. (F) 3D virtual image of the metagrating-based glasses-free AR 3D display system. (G) Schematic of the pixelated multilevel blazed gratings for a glasses-free AR 3D display. (H) Principle of the pixelated multilevel blazed grating array that forms viewpoints in different focal planes. (I) 3D virtual image of the blazed grating-based glasses-free AR 3D display system. [(A–C) Reproduced from Hong et al. (2014). Copyright (2021), with permission from Optica Publishing Group. (D–F) Reproduced from Shi et al. (2020). Copyright (2021), with permission from De Gruyter. (G–I) Reproduced from Shi et al. (2021). Copyright (2021), with permission from MDPI.].
In fact, AR 3D displays based on lens arrays form self-repeating views; thus, both the motion parallax and FOV are limited. Moreover, false depth cues for 3D virtual images can be generated by the image flip effect. Correct depth cues are particularly important for AR 3D displays because virtual images are fused with natural objects. On this basis, a holographic combiner composed of spatially multiplexed metagratings was proposed to realize a 32-inch full-color glasses-free AR 3D display, as shown in Figure 6D (Shi et al., 2020). The irradiance pattern of each view is shaped as a super-Gaussian function to reduce crosstalk. A FOV as large as 47° was achieved in the horizontal direction. To maintain correct white balance, three layers of metagratings are stacked for spatial multiplexing. The whole system contains only two components: a projector and a metagrating-based holographic combiner. Moreover, the transmittance is higher than 75% over the visible spectrum (Figures 6E,F), but the light efficiency of the metagratings is relatively low (40% in theory and 12% in experiment). To improve the light efficiency, pixelated multilevel blazed gratings were introduced for glasses-free AR 3D displays in a 20-inch format (Figures 6G,H) (Shi et al., 2021). The measured diffraction efficiency was improved to ∼53%. The viewing distance for motion parallax was extended to more than 5 m, benefiting from multiorder diffraction according to harmonic diffraction theory (Figure 6I).
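The crosstalk benefit of super-Gaussian view shaping can be illustrated by comparing the residual intensity of one view at its neighbor's angular position. The view spacing and width below are assumed, illustrative values, not measurements from the cited system:

```python
import math

def view_profile(theta_deg, width_deg, order):
    """Normalized irradiance of one view: exp(-2*(theta/width)**(2*order)).
    order = 1 is an ordinary Gaussian; larger orders give a flat top
    with much steeper edges (super-Gaussian)."""
    return math.exp(-2.0 * (theta_deg / width_deg) ** (2 * order))

spacing = 1.0  # assumed angular spacing between adjacent views (deg)
for order, label in ((1, "Gaussian"), (4, "super-Gaussian")):
    leak = view_profile(spacing, 0.6, order)
    print(f"{label:15s}: intensity at neighboring view = {leak:.2e}")
```

At equal view width, the steep edges of the super-Gaussian suppress the light leaking into the adjacent view by many orders of magnitude, which is the crosstalk-reduction mechanism exploited above.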
Table 2 summarizes the various methods for realizing glasses-free AR 3D displays. The optical see-through combiner outperforms the reflection-type method by providing a more natural fusion with the physical world. Among optical see-through combiners, holographic optical element-based combiners have the advantages of high diffraction efficiency and high transparency; however, they suffer from a limited FOV and motion parallax. The metagrating-based combiner offers accurate depth cues over a large FOV. The multilevel blazed grating-based method further improves the light efficiency and viewing depth thanks to multiorder diffraction.
The development of high-throughput micro/nanofabrication methods is essential for large view modulators. To fabricate diffraction gratings or metagratings at high throughput, a flexible lithography system was proposed (Figure 7A) (Wan et al., 2016). The nanogratings in this system were fabricated pixel by pixel: in one exposure, a nanograting pixel tens of microns in size was formed. Therefore, the throughput can be much higher than that of an electron-beam lithography system, which works via a sequential writing process. In addition, the period tuning accuracy of the fabricated gratings can be better than 1 nm. Using the proposed lithography system, a 32-inch view modulator with a minimum feature size of 300 nm was successfully prepared for a glasses-free 3D display (Figures 7B,C). This view modulator contains a total of 24,883,200 pixelated metagratings.
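A back-of-the-envelope estimate shows the scale at which per-pixel exposure must operate. The pixel count is from the cited view modulator; the exposure rate is purely an assumed figure for illustration, not a reported specification:

```python
n_pixels = 24_883_200  # pixelated metagratings on the 32-inch view modulator
rate_hz = 100.0        # assumed exposures per second (illustrative only)

# Total writing time if one grating pixel is formed per exposure.
hours = n_pixels / rate_hz / 3600.0
print(f"{n_pixels:,} pixels at {rate_hz:.0f} exposures/s ~ {hours:.0f} h")
```

Because each exposure covers a whole tens-of-microns pixel rather than individual sub-100 nm lines, the same panel would take orders of magnitude longer with serial electron-beam writing, which is the throughput argument made above.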
In this paper, we mainly focused on the exciting achievements of planar optics-based glasses-free 3D displays and glasses-free AR 3D displays (as summarized in Figure 8). Planar optics open up the possibility of steering light pixel by pixel, rather than image by image as in microlens array-based architectures. Modulating individual pixels has several benefits. First, the views can be arranged freely: in a line for horizontal parallax, on a curve for table-top 3D displays, or in a matrix for full parallax. As a result, the views can be arranged according to the application. Second, when steering image by image, many pixels are wasted, especially at large viewing angles, so severe resolution degradation is a common criticism. In the pixel-by-pixel steering strategy, however, every pixel contributes to the virtual 3D image. Third, planar optics offer superior light-steering capability for a large FOV. Fourth, the light distribution of each view can be tuned from a Gaussian to a super-Gaussian distribution to minimize crosstalk and ghost images. Fifth, the view shape can be tuned to dot/linear/rectangular shapes for information density-variant 3D displays, alleviating the tradeoff between resolution and viewing angle. Sixth, a super multiview display can be realized with closely packed views to address the vergence-accommodation conflict. Seventh, multilevel structures, such as blazed gratings, diffractive lenses, and metasurfaces, offer solutions for high light efficiency and reduced chromatic aberration. Eighth, planar optics possess a thin form factor and light weight, which are compatible with portable electronics. Finally, a glasses-free AR 3D display can be achieved with a large FOV, enhanced light efficiency, and reduced crosstalk for window displays.
FIGURE 8. Schematic of the emerging planar optical elements applied in glasses-free 3D displays and glasses-free AR 3D displays. Planar optical elements have various merits compared with geometric optical elements. DG: Diffraction gratings.
To summarize, planar optics-based 3D displays have the advantages of a thin form factor, light weight, flexible design, and precise light manipulation. They hold great promise for tackling the critical challenges of geometric optics-based 3D displays, especially in portable electronics and transparent display applications.
Future research in planar optics-based 3D displays should focus on improving display performance and enhancing practicality. At the system level, several strategies can further improve display performance. First, a time-multiplexed strategy enabled by a high-refresh-rate panel can increase the resolution by exploiting redundant time information (Hwang et al., 2014; Ting et al., 2016; Liu et al., 2019). For example, a projector array and a liquid crystal-based steering screen have been used to implement a time-multiplexed multiview 3D display; the angular steering screen controls the light direction to generate more continuous viewpoints, thereby increasing the angular resolution (Xia et al., 2018). In another work, a time-sequential directional beam splitter array was introduced into a multiview 3D display to increase the spatial resolution (Feng et al., 2017). When equipped with an eye-tracking system, a time-multiplexed 3D display can provide both high spatial and angular resolution for single-user applications. Second, a foveated vision strategy can be utilized to compress the image-processing load and improve the optical performance of imaging systems and near-eye displays (Phillips et al., 2017; Chang et al., 2020). For instance, a multiresolution foveated display using two display panels and an optical combiner was proposed for virtual reality applications (Tan et al., 2018). The first display panel provides a wide FOV, and the second improves the spatial resolution in the central foveal region. This system effectively reduces the screen-door effect in near-eye displays. Moreover, a foveated glasses-free 3D display with spatially variant information density has also been demonstrated. This strategy offers potential solutions to the trade-off between resolution and FOV (Hua et al., 2021).
For foveated display systems, liquid crystal lens technology is also significant (Chen et al., 2015; Lin et al., 2017; Yuan et al., 2021). Under polarization control, liquid crystal lenses with tunable focal lengths provide active switching of the FOV. This technology was demonstrated in a foveated near-eye display to create multiresolution images with a single display module (Yoo et al., 2020). The system maintains both a wide FOV and high resolution with compressed data. Third, artificial intelligence algorithms can improve the optical performance of planar optical elements (Chang et al., 2018; Sitzmann et al., 2018; Tseng et al., 2021; Zeng et al., 2021). For example, an end-to-end optimization algorithm was introduced to design a diffractive achromatic lens; by jointly learning the lens and an image-recovery neural network, this method achieves high-fidelity imaging (Dun et al., 2020). Therefore, in planar optics-based 3D displays, algorithms such as deep learning can be incorporated with the hardware for aberration reduction and image precalibration.
In addition to the aforementioned improvements in display performance, several techniques need to be developed to promote the practical application of 3D displays. First, a directional backlight system with low divergence and high uniformity should be integrated into planar optics-based glasses-free 3D displays (Yoon et al., 2011; Fan et al., 2015; Teng and Tseng, 2015; Zhan et al., 2016; Krebs et al., 2017). The angular divergence of the illumination greatly affects the display performance in terms of crosstalk and ghost images. An edge-lit directional backlight based on a waveguide with pixelated nanogratings was proposed (Zhang et al., 2020); the module provides an angular divergence of ±6.17° and uniformities of 95.7 and 86.8% in the x- and y-directions, respectively, at a wavelength of 532 nm. In another work, a steering backlight was introduced into a slim-panel holographic video display with an overall system thickness of less than 10 cm (An et al., 2020). Nevertheless, the design and fabrication of directional backlights remain difficult. Second, several nanofabrication challenges should be overcome for planar optics-based 3D displays (Manfrinato et al., 2013; Manfrinato et al., 2014; Chen et al., 2015; Qiao et al., 2016; Wu et al., 2021). For example, the patterning of nanostructures over large areas, the fabrication of multilevel micro/nanostructures with high aspect ratios, and the realization of high-fidelity batch copies of micro/nanostructures all remain challenging. We believe that numerous micro/nanomanufacturing techniques and instruments will be developed to meet the specific needs of 3D displays. Last but not least, planar optics-based 3D displays will benefit from the rapid development of advanced display panels.
To enhance the brightness while ensuring low system power consumption, a spontaneous emission source can be introduced into planar optics-based 3D displays (Fang et al., 2006; Hoang et al., 2015; Pelton, 2015). By constructing plasmonic nanoantennas, large spontaneous emission enhancements were real