Besides 3D displays, we offer a state-of-the-art 3D content player and 2D-to-3D content conversion technology. Attractive, well-designed 3D content is crucial for generating the desired effect with autostereoscopic (glasses-free) 3D display technology. Our team of artists and partners is dedicated to producing impressive, customised 3D content. We can create your 3D spot from an initial idea or enhance your existing media by adding autostereoscopic 3D elements. Attractive, immersive content is the key to your success and the way to make your message hit its target. Need the best-suited 3D screening solution or special content for screening? For more information on available technology or on content creation, contact the YOCOMA team, who will be more than happy to assist.
Using the latest generation of auto-stereoscopic (or 'lenticular') LCD technology, Magnetic Enabl3D screens deliver incredible resolution and outstanding large-format 3D displays without the need for special 3D glasses, eyewear, headgear or projectors.
Providing the ultimate in eye-catching, crowd-stopping 3D displays for 3D media and digital signage, video images and content appear to fly out of the screen and float in mid-air!
• Glasses-free 3D screen technology
• Auto-stereoscopic, Full-HD 1080p LCD screens
• 9-point multi-viewing 3D zones
• 176° ultra-wide viewing angle
• 50,000 hours viewing time
• Durable and discreet design and build
• IRFM technology helps prevent 'screen burn'
• Configurable inputs/outputs
• Active ambient light sensor for energy-saving control
In this paper, an autostereoscopic display system based on a time-multiplexed directional backlight using a large aperture Fresnel lens is proposed. High-resolution stereoscopy for multiple viewers positioned at different distances from the screen is achieved in the proposed system by layering polymer dispersed liquid crystal screens behind the Fresnel lens. The screens with segmented electrodes are electrically controlled to change the position of light diffusion, while the time-multiplexed backlight is projected by a digital mirror device projector at a high refresh rate. The light is diffused at the conjugate focal points of the observers’ eyes to deliver directional light to each eye. The right-eye image and the left-eye image are alternated on the LCD panel in front of the lens to synchronize with the backlight.
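The time-multiplexed synchronization described above can be sketched as a simple sub-frame scheduler: for each viewer and each eye, one backlight sub-frame diffuses light at the plane conjugate to that eye while the LCD shows the matching left- or right-eye image. This is a hypothetical illustration, not the paper's implementation; the viewer IDs, segment labels, and the 240 Hz rate are all assumptions.

```python
from itertools import cycle

def backlight_schedule(viewers, refresh_hz=240):
    """Yield (time_s, viewer_id, eye, diffuser_segment) sub-frames.

    Each viewer needs two sub-frames (left/right eye) per stereo frame,
    so the per-viewer stereo rate is refresh_hz / (2 * len(viewers)).
    """
    subframe = 1.0 / refresh_hz
    slots = [(v, eye) for v in viewers for eye in ("L", "R")]
    for i, (viewer, eye) in enumerate(cycle(slots)):
        # The PDLC segment chosen is the one whose diffusion plane is
        # conjugate to this viewer's eye position behind the Fresnel lens
        # (segment naming is illustrative).
        segment = f"segment_for_{viewer}_{eye}"
        yield (i * subframe, viewer, eye, segment)

# Example: two viewers sharing a 240 Hz backlight -> 60 Hz stereo each.
sched = backlight_schedule(["near_viewer", "far_viewer"], refresh_hz=240)
first_four = [next(sched) for _ in range(4)]
```

Note how the stereo rate per viewer drops as viewers are added, which is why a high-refresh-rate DMD projector is needed on the backlight side.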
Most of the perceptual cues that humans use to visualize the world's 3D structure are available in 2D projections. This is why we can make sense of photographs and images on a television screen, at the cinema, or on a computer monitor. Such cues include occlusion, perspective, familiar size, and atmospheric haze. Four cues are missing from 2D media: stereo parallax (seeing a different image with each eye), movement parallax (seeing different images when we move our heads), accommodation (the eyes' lenses focus on the object of interest), and convergence (both eyes converge on the object of interest). All 3D display technologies (stereoscopic displays) provide at least stereo parallax. Autostereoscopic displays provide the 3D image without the viewer needing to wear any special viewing gear.
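The convergence cue just listed is easy to quantify: the angle between the two eyes' lines of sight shrinks with viewing distance. A short numeric illustration, assuming a typical adult interpupillary distance of 63 mm (an assumption, not a measured value from this text):

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Angle (degrees) at which the eyes converge on a point straight ahead."""
    return math.degrees(2.0 * math.atan(ipd_m / (2.0 * distance_m)))

near = vergence_angle_deg(0.5)   # about 7.2 degrees at arm's length
far = vergence_angle_deg(10.0)   # about 0.36 degrees at 10 m
```

The rapid fall-off with distance is why convergence (and accommodation) are strong depth cues only at close range.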
Autostereoscopy is any method of displaying stereoscopic images (adding binocular perception of 3D depth) without requiring the viewer to wear special headgear, glasses, or any other vision-affecting equipment. Because headgear is not required, it is also called "glasses-free 3D" or "glassesless 3D". There are two broad approaches currently used to accommodate motion parallax and wider viewing angles: eye tracking, and multiple views, so that the display does not need to sense where the viewer's eyes are located. Technologies in this category include the lenticular lens and the parallax barrier, and may include integral imaging, but notably do not include volumetric or holographic displays.
Many organizations have developed autostereoscopic 3D displays, ranging from experimental displays in university departments to commercial products, using a range of different technologies. Examples include the Heinrich Hertz Institute (HHI) in Berlin, whose display combined an eye-tracking system with seamless mechanical adjustment of the lenses, and Sega AM3's Floating Image System.
Eye tracking has been used in a variety of systems in order to limit the number of displayed views to just two, or to enlarge the stereoscopic sweet spot. However, as this limits the display to a single viewer, it is not favored for consumer products.
Currently, most flat-panel displays employ lenticular lenses or parallax barriers that redirect imagery to several viewing regions; however, this manipulation comes at the cost of per-view image resolution. When the viewer's head is in a certain position, a different image is seen with each eye, giving a convincing illusion of 3D. Such displays can have multiple viewing zones, allowing multiple users to view the image at the same time, though they may also exhibit dead zones in which only a non-stereoscopic or pseudoscopic image can be seen, if any.
A parallax barrier is a device placed in front of an image source, such as a liquid crystal display, to allow it to show a stereoscopic or multiscopic image without the need for the viewer to wear 3D glasses. The principle of the parallax barrier was independently invented by Auguste Berthier, who published first but produced no practical results, and by Frederic E. Ives, who made and exhibited the first known functional autostereoscopic image in 1901.
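The first-order geometry of a two-view parallax barrier follows from similar triangles: the pixel-to-barrier gap must steer alternate pixel columns to the two eyes, and the slit pitch must be slightly less than two pixel pitches so all viewing zones converge at the design distance. A minimal sketch under a thin-slit, on-axis approximation; the symbol names and the 65 mm eye separation are my assumptions, not figures from a specific product:

```python
def barrier_design(pixel_pitch_mm, view_distance_mm, eye_sep_mm=65.0):
    """Return (barrier_gap_mm, barrier_pitch_mm) for a two-view display.

    gap:   pixel-to-barrier spacing so each eye sees alternate columns.
    pitch: slit period, slightly under two pixel pitches so the viewing
           zones from every slit converge at the design distance.
    """
    gap = view_distance_mm * pixel_pitch_mm / eye_sep_mm
    pitch = 2.0 * pixel_pitch_mm * view_distance_mm / (view_distance_mm + gap)
    return gap, pitch

# Example: 0.1 mm pixel columns viewed from 500 mm.
gap, pitch = barrier_design(0.1, 500.0)
```

The slight pitch reduction relative to 2x the pixel pitch is the "viewpoint correction" that makes all slits address the same pair of eye positions.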
In the early 2000s, Sharp developed the electronic flat-panel application of this old technology to commercialization, briefly selling two laptops with the world's only 3D LCD screens. Fujifilm later released the FinePix Real 3D W1 digital camera, which features a built-in autostereoscopic LCD measuring 2.8 in (71 mm) diagonal. The Nintendo 3DS video game console family uses a parallax barrier for 3D imagery; on a newer revision, the New Nintendo 3DS, this is combined with an eye tracking system.
Philips solved a significant problem with electronic displays in the mid-1990s by slanting the cylindrical lenses with respect to the underlying pixel grid. Philips produced its WOWvx line until 2009, running up to 2160p (a resolution of 3840×2160 pixels) with 46 viewing angles. Lenny Lipton's company, StereoGraphics, produced displays based on the same idea, citing a much earlier patent for the slanted lenticulars. Magnetic3d and Zero Creative have also been involved.
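Slanting the lenses distributes the resolution loss across both axes by assigning each RGB subpixel, rather than each whole pixel, to a view. A sketch of the subpixel-to-view mapping in the spirit of the formula often attributed to van Berkel; the exact offsets, sign conventions, view count, and slant vary by panel, so every parameter below is an illustrative assumption:

```python
import math

def view_index(k, l, n_views=9, slant=math.atan(1.0 / 6.0), x_views=4.5):
    """Map subpixel column k and pixel row l to a view number in [0, n_views).

    x_views is the number of views spanned by one lens pitch, measured in
    subpixel columns; slant is the lens angle relative to the vertical.
    """
    phase = (k + 3.0 * l * math.tan(slant)) % x_views
    return int(n_views * phase / x_views) % n_views
```

Because the phase advances with the row index l, adjacent rows sample different views, which is what softens the vertical banding a non-slanted lenticular would show.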
With rapid advances in optical fabrication, digital processing power, and computational models for human perception, a new generation of display technology is emerging: compressive light field displays. These architectures explore the co-design of optical elements and compressive computation while taking particular characteristics of the human visual system into account. Compressive display designs include dual- and multilayer devices driven by algorithms such as computed tomography, non-negative matrix factorization, and non-negative tensor factorization.
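The factorization step at the heart of a dual-layer compressive display can be illustrated with a toy example: a target light field, indexed here by two view angles as a small nonnegative matrix, is approximated by the product of two stacked attenuation layers. Real displays use higher ranks and time multiplexing; this rank-1 alternating-update sketch (my construction, not a published implementation) only shows the core idea:

```python
def rank1_nmf(L, iters=50):
    """Factor a nonnegative matrix L ~= outer(f, g) by alternating updates.

    f models the front layer's transmittances, g the rear layer's; the
    displayed intensity along a ray is their product.
    """
    m, n = len(L), len(L[0])
    g = [1.0] * n
    f = [0.0] * m
    for _ in range(iters):
        gg = sum(x * x for x in g)
        f = [sum(L[i][j] * g[j] for j in range(n)) / gg for i in range(m)]
        ff = sum(x * x for x in f)
        g = [sum(L[i][j] * f[i] for i in range(m)) / ff for j in range(n)]
    return f, g

# A rank-1 target light field is reproduced exactly.
f, g = rank1_nmf([[1.0, 2.0], [2.0, 4.0]])
```

Since L is nonnegative, the alternating least-squares updates stay nonnegative here, matching the physical constraint that layers can only attenuate light.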
Dimension Technologies released a range of commercially available 2D/3D switchable LCDs in 2002, using a combination of parallax barriers and lenticular lenses. SeeReal Technologies has developed a holographic display based on eye tracking.
There are a variety of other autostereo systems as well, such as volumetric display, in which the reconstructed light field occupies a true volume of space, and integral imaging, which uses a fly's-eye lens array.
Sunny Ocean Studios, located in Singapore, has been credited with developing an automultiscopic screen that can display autostereo 3D images from 64 different reference points.
Many autostereoscopic displays are single-view displays and are thus not capable of reproducing the sense of movement parallax, except for a single viewer in systems capable of eye tracking.
There are many currently available approaches to realizing headset-type mixed reality information displays, just as there are multiple approaches to realizing unbounded mixed reality information displays.
Within the headset category, devices can be divided into subcategories described as fully immersive, optical see-through, and video see-through displays, as illustrated in Figure 8.
In general, fully immersive devices are used mostly for immersive virtual reality experiences. Their displays tend to be stereoscopic and are combined with sensors that track the user's head position and orientation. Optical components project left- and right-eye pixels to their respective eye locations depending on the head position and orientation. This projection can be realized by directly displaying synchronized image pairs with the desired disparity, creating the appropriate sensation of depth, on two separate near-to-eye displays, one for each eye; having two displays, however, tends to increase the cost. Alternatively, optical components can extract interlaced left and right pixels from a single display and project them to their respective eyes.
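The single-display variant above amounts to simple column interleaving: alternate pixel columns carry the left- and right-eye images, and the optics route each set to the matching eye. A minimal sketch on rows of pixel values (the layout is an assumption; some panels interleave by row or subpixel instead):

```python
def interleave_columns(left_row, right_row):
    """Merge two equal-length pixel rows so columns alternate L,R,L,R,..."""
    merged = []
    for l_px, r_px in zip(left_row, right_row):
        merged.extend([l_px, r_px])
    return merged

def deinterleave_columns(merged):
    """Split an interleaved row back into (left_row, right_row)."""
    return merged[0::2], merged[1::2]

row = interleave_columns(["L0", "L1"], ["R0", "R1"])
```

Each eye effectively receives half the panel's horizontal resolution, which is the trade-off against the two-display design.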
Optical see-through devices avoid the camera artifacts described above by not placing a camera in the viewer's optical path to the real world. Thus, in optical see-through devices, users see the actual real world around them. Sensors in the device then track head location and orientation in order to overlay correctly calibrated synthetic data onto this real world. There are multiple approaches to realizing these types of mixed reality devices; the proposed configuration illustrated in Figure 7 is one such binocular autostereoscopic example.
Unbounded mixed reality systems form a category that enables viewers to experience immersive sensations without wearing headsets or any other devices. These systems can serve a single user or multiple concurrent users, and are designed to provide autostereoscopic information display and sometimes interaction as well. To enable multiple simultaneous users to experience autostereoscopic 3D, some of these displays employ the concepts covered in Sections 2–2.3. All stereoscopic and autostereoscopic information displays, however, build on the human visual system concepts covered in Section 2, as these are the basis for human 3D and depth perception.
Wall mixed reality displays can consist of multiple flat-panel or curved displays tiled together to create an immersive experience. This experience can also take the form of autostereoscopic sensations using the various multi-view autostereoscopic approaches, including the lenticular and parallax barrier systems introduced in this chapter. Another approach to wall-type mixed reality displays is projection onto the walls; these can be front projections, rear projections, or both, depending on the application.
Cave mixed reality systems are, in general, multi-sided immersive environments that offer notably stronger sensations of immersion than one-sided wall systems. This sense of immersion is sometimes enhanced by surrounding the viewer with autostereoscopic 3D display walls that give a greater sense of depth. As with wall systems, flat-panel or curved displays can be used, as well as rear- and front-projection displays, to produce the cave system.
Domes are a variation of cave mixed reality systems in which the interior hemispherical surface enclosing a space serves as the image projection surface. This configuration creates a seamless 360-degree horizontal and 180-degree vertical immersive experience for viewers. Coupling these systems with autostereoscopic 3D display capability yields highly immersive and interactive mixed reality systems. Because these strongly immersive experiences can be enjoyed by multiple users simultaneously, domes are particularly popular across many industries and research fields.
To make the proposed approach possible, we present a method for displaying a 3D image by driving multiple LHE MEMS scanning units. Since we use a single-pixel beam as the basic unit of an LHE multi-view 3D display, the MEMS scanning unit acts as fast projection optics, producing multi-view homogeneous emitting beams over a horizontal and vertical projection angle. Figure 2 illustrates the concept of a MEMS scanning unit designed for the proposed LHE 3D display approach. Each unit consists of a light source, beam aligners, a reflection mirror and a biaxial MEMS scanning mirror (Fig. 2a). To produce color images, the light from three laser diodes is combined by a dichroic prism into a single coaxial full-color beam. The combined beam is reflected by the fixed mirror and relayed to the biaxial scanning mirror, where it is scanned two-dimensionally. The image signals are input via the laser diodes in the basic MEMS scanning unit.
(a) Concept of the LHE MEMS scanning unit; each unit consists of a light source, beam aligners, a reflection mirror and a biaxial MEMS scanning mirror. For color images, the beams from the diodes are aligned by the corresponding beam aligners into intense beams whose spread is proportional to the distance from the diodes. The color beams are combined into one beam by transmission through a dichroic mirror. The beam is relayed onto the biaxial MEMS scanning mirror, which reflects it in a Lissajous pattern in space. (b) LHE 3D display constructed from an arrangement of MEMS scanning units and the corresponding signal inputs. Each unit corresponds to one pixel shown on the 3D display.
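The Lissajous pattern traced by the biaxial mirror is simply two sinusoidal deflections at different frequencies. A point sample of that trajectory can be sketched as follows; the frequencies, amplitudes, and phase are illustrative assumptions, not the device's actual drive parameters:

```python
import math

def lissajous_point(t, fx=1000.0, fy=740.0, ax=1.0, ay=1.0, phase=math.pi / 2):
    """Normalized deflection (x, y) of a biaxial mirror at time t (seconds).

    fx, fy are the two resonant scan frequencies in Hz; the frequency
    ratio and phase determine how densely the pattern fills the frame.
    """
    x = ax * math.sin(2.0 * math.pi * fx * t)
    y = ay * math.sin(2.0 * math.pi * fy * t + phase)
    return x, y

x0, y0 = lissajous_point(0.0)
```

Choosing fx and fy with a suitable ratio makes the pattern cover the projection area densely before repeating, which is what lets a single beam paint a full sub-image.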
The MEMS scanning units are used as elemental pixels to construct the LHE 3D display (Fig. 2b). To precisely project the images in the directions required for generating a long-viewing-distance 3D image, the projection directions of each MEMS scanning unit are calibrated before use. We use a computer-assisted geometric correction approach (see Supplementary Figures 1 and 2) through a serial input. After calibration of the LHE MEMS scanning 3D display, we can input the revised elemental images to each MEMS scanning array so that the impinging light beams are reflected to a predetermined area in space and the 3D image can be spatially formed.
For this study, we fabricated a prototype LHE 3D display for horizontal-parallax-only 3D images and conducted a set of experiments to evaluate its feasibility (Fig. 3a). Multiple MEMS scanning units are aligned horizontally. This display technique can readily be extended to a full-parallax autostereoscopic display by adding a vertical arrangement of MEMS scanning units. Each scanning unit is reconstructed and modified based on a laser projector (see Supplementary Figure 3 and Supplementary Video 1).
(a) Manufactured LHE 3D display with 16 MEMS scanning units; (b) LHE 3D display system using a MEMS scanning array and mirror array for evaluation of long-viewing-distance 3D images.
To highlight the LHE 3D display's capability, we extended the image by transforming the 1D arrays into 2D arrays with a multi-mirror array (see Method 3). A zigzag-aligned multi-mirror array is used to reflect the projected light beams and realign the extended 3D images in 3D space (Fig. 3b). Thus, the display can show a 3D image with a horizontal resolution of 16 × N pixels (N is the number of mirrors). Furthermore, since the image depth is too long (several meters in this study) to be displayed at a short distance, we use a reflection mirror to reflect the projected images and direct the light beams to the multi-mirror array.
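The 16 × N resolution arithmetic can be made concrete with a hypothetical index mapping: scanning unit u of the 16-unit array, reflected by mirror m of the N-mirror zigzag array, lands at one of 16·N horizontal positions. The mapping convention below is my assumption, not the paper's calibration scheme:

```python
def horizontal_pixel(unit, mirror, units_per_array=16):
    """Map (scanning unit, mirror) to a horizontal pixel index in [0, 16*N)."""
    if not 0 <= unit < units_per_array:
        raise ValueError("unit index out of range")
    return mirror * units_per_array + unit

# With N >= 4 mirrors, the last unit on mirror 3 is the 64th pixel.
last = horizontal_pixel(15, 3)
```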
We evaluated the motion parallax of the 3D image by placing three letters at different positions in front of and behind the display plane. Figure 4a shows the prototype LHE 3D display and the displayed spatially formed 3D image. Images of "3", "-" and "D" were produced at positions 1.5 m in front of the screen, on the screen, and 1.5 m behind the screen, respectively. To compare the positions of the reconstructed images, we placed three markers at +1.5, 0 and −1.5 m (arrow symbols in Fig. 4a; due to the reflection of the mirror and multi-mirror array, all arrows were placed behind the device; a detailed schematic of the experiment is available in Supplementary Figure 5). Figure 4b and Supplementary Video 2 show the motion parallax of the reconstructed LHE 3D images, which remain fixed at the positions of the three markers.
(a) Manufactured MEMS scanning LHE 3D display device and the displayed spatially formed 3D images, with three markers at positions +1.5 m, 0 m and −1.5 m. (b) Spatially formed 3D images viewed from the left, center and right, with the camera zoom focused at +1.5 m. Moving seamlessly around the display allows observers to see the spatially formed 3D images from different perspectives with continuous motion parallax. (c) Experimental setup for evaluating the image depths of the LHE 3D display, with three arrows (red, green and blue) at positions +3 m, 0 m and −3.0 m, respectively. (d) Motion parallax of long-viewing-distance LHE 3D images taken from various directions with the camera zoom focused at −3.0 m (3 m inside the screen); red "circle", 3 m in front of the LHE 3D display; green "|", at the position of the LHE 3D display; blue "▴", 3 m inside the LHE 3D display.
Figure 4c shows another experiment evaluating the possible depths of the displayed 3D image. Since the width of the prototype LHE 3D display is only 19 cm, the viewing area for observing a long-distance 3D image is small. In this study, we therefore presented a 3D image with a depth of six meters to preserve the viewing area, although the image depth could be enlarged further. We placed three arrows: a red arrow 3 m in front of the screen, a green arrow at the screen, and a blue arrow 3 m inside the screen (−3.0 m). The results showed that the LHE 3D display produced a natural 3D image with a very long image depth of six meters (Fig. 4d and Supplementary Video 3), a remarkable result given the original screen width of only about 19 cm.
Since we use a laser for light-beam scanning, the LHE 3D display should theoretically support an even longer viewing distance. The image depth and viewing area of the generated 3D image could be enlarged with more LHE MEMS scanning units and more precise calibration of the light rays. Furthermore, galvano-mirror scanning could be used to increase the deflection angle of the light beam and thus create a 3D display with a larger viewing angle. Since the high-resolution images are carried as electrical image signals to the MEMS scanning units, the 3D display can be produced with relatively simple technology at lower cost.