Methods for Generating Addressable Focus Cues in Stereoscopic Displays

Persistent Link:
http://hdl.handle.net/10150/193859
Title:
Methods for Generating Addressable Focus Cues in Stereoscopic Displays
Author:
LIU, SHENG
Issue Date:
2010
Publisher:
The University of Arizona.
Rights:
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract:
Conventional stereoscopic displays present a pair of stereoscopic images on a single, fixed image plane decoupled from the viewer's vergence and accommodation responses. As a consequence, these displays cannot correctly render focus cues (i.e., accommodation and retinal blur) and may induce a discrepancy between accommodation and convergence. A number of visual artifacts associated with incorrect focus cues in stereoscopic displays have been reported, limiting the applicability of these displays for demanding applications and daily usage.

In this dissertation, methods and apparatus for generating addressable focus cues in conventional stereoscopic displays are proposed. Focus cues can be addressed throughout a volumetric space, either by dynamically varying the focal distance of the display with an active optical element or by multiplexing a stack of 2-D image planes. Optimal depth-weighted fusing functions are developed to fuse a number of discrete image planes into a seamless volumetric space with continuous and near-correct focus cues similar to their real-world counterparts.

The optical design, driving methodology, and prototype implementation of the addressable-focus displays are presented and discussed. Experimental results demonstrate continuously addressable focus cues from infinity to as close as the near-eye distance. Experiments further evaluating depth perception in the display prototype are conducted. Preliminary results suggest that the viewer's perceived distance and accommodative response match the addressable accommodation cues rendered by the display, approximating real-world viewing conditions.
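The idea of depth-weighted fusing can be illustrated with a minimal sketch: a point's luminance is split between the two focal planes that bracket its dioptric depth, so that the eye's accommodation settles at an intermediate distance. The linear blending rule below is a common baseline for multifocal displays, not the optimal fusing functions derived in the dissertation; the function name and interface are illustrative only.

```python
def fuse_weights(depth_d, planes_d):
    """Split a point's luminance between the two focal planes bracketing it.

    depth_d  -- depth of the rendered point, in diopters
    planes_d -- focal-plane distances in diopters, sorted ascending
                (smallest diopter value = farthest plane)
    Returns one weight per plane; the weights sum to 1.
    NOTE: linear blending is an illustrative baseline, not the
    dissertation's optimized depth-weighted fusing functions.
    """
    weights = [0.0] * len(planes_d)
    # Depths outside the covered range clamp to the nearest plane.
    if depth_d <= planes_d[0]:
        weights[0] = 1.0
        return weights
    if depth_d >= planes_d[-1]:
        weights[-1] = 1.0
        return weights
    # Find the bracketing pair and interpolate linearly in diopters.
    for i in range(len(planes_d) - 1):
        lo, hi = planes_d[i], planes_d[i + 1]
        if lo <= depth_d <= hi:
            t = (depth_d - lo) / (hi - lo)
            weights[i] = 1.0 - t      # closer to plane i -> more weight there
            weights[i + 1] = t
            return weights


# A point at 1.5 D, midway between the 1 D and 2 D planes,
# splits its luminance evenly between those two planes:
print(fuse_weights(1.5, [0.0, 1.0, 2.0, 3.0]))
```

Blending in diopters rather than meters matches how accommodation and retinal blur scale with distance, which is why multifocal-display literature typically spaces planes and interpolates on a dioptric axis.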
Type:
text; Electronic Dissertation
Keywords:
Accommodation; Convergence; Focus cues; Head-mounted display; Stereoscopic display; Three-dimensional display
Degree Name:
Ph.D.
Degree Level:
doctoral
Degree Program:
Optical Sciences; Graduate College
Degree Grantor:
University of Arizona
Committee Chair:
Hua, Hong

Full metadata record

DC Field: Value [Language]
dc.language.iso: en [en_US]
dc.title: Methods for Generating Addressable Focus Cues in Stereoscopic Displays [en_US]
dc.creator: LIU, SHENG [en_US]
dc.contributor.author: LIU, SHENG [en_US]
dc.date.issued: 2010 [en_US]
dc.publisher: The University of Arizona. [en_US]
dc.rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author. [en_US]
dc.description.abstract: Conventional stereoscopic displays present a pair of stereoscopic images on a single, fixed image plane decoupled from the viewer's vergence and accommodation responses. As a consequence, these displays cannot correctly render focus cues (i.e., accommodation and retinal blur) and may induce a discrepancy between accommodation and convergence. A number of visual artifacts associated with incorrect focus cues in stereoscopic displays have been reported, limiting the applicability of these displays for demanding applications and daily usage. In this dissertation, methods and apparatus for generating addressable focus cues in conventional stereoscopic displays are proposed. Focus cues can be addressed throughout a volumetric space, either by dynamically varying the focal distance of the display with an active optical element or by multiplexing a stack of 2-D image planes. Optimal depth-weighted fusing functions are developed to fuse a number of discrete image planes into a seamless volumetric space with continuous and near-correct focus cues similar to their real-world counterparts. The optical design, driving methodology, and prototype implementation of the addressable-focus displays are presented and discussed. Experimental results demonstrate continuously addressable focus cues from infinity to as close as the near-eye distance. Experiments further evaluating depth perception in the display prototype are conducted. Preliminary results suggest that the viewer's perceived distance and accommodative response match the addressable accommodation cues rendered by the display, approximating real-world viewing conditions. [en_US]
dc.type: text [en_US]
dc.type: Electronic Dissertation [en_US]
dc.subject: Accommodation [en_US]
dc.subject: Convergence [en_US]
dc.subject: Focus cues [en_US]
dc.subject: Head-mounted display [en_US]
dc.subject: Stereoscopic display [en_US]
dc.subject: Three-dimensional display [en_US]
thesis.degree.name: Ph.D. [en_US]
thesis.degree.level: doctoral [en_US]
thesis.degree.discipline: Optical Sciences [en_US]
thesis.degree.discipline: Graduate College [en_US]
thesis.degree.grantor: University of Arizona [en_US]
dc.contributor.chair: Hua, Hong [en_US]
dc.contributor.committeemember: Hua, Hong [en_US]
dc.contributor.committeemember: Dereniak, Eustace L. [en_US]
dc.contributor.committeemember: Schwiegerling, James T. [en_US]
dc.identifier.proquest: 11148 [en_US]
dc.identifier.oclc: 752261004 [en_US]