2nd International ICST Conference on Immersive Telecommunications

Research Article

Multi-view lenticular display for group teleconferencing

    @INPROCEEDINGS{10.4108/immerscom.2009.23,
        author={Peter Lincoln and Andrew Nashel and Adrian Ilie and Herman Towles and Gregory Welch and Henry Fuchs},
        title={Multi-view lenticular display for group teleconferencing},
        proceedings={2nd International ICST Conference on Immersive Telecommunications},
        publisher={ICST},
        proceedings_a={IMMERSCOM},
        year={2010},
        month={5},
        keywords={},
        doi={10.4108/immerscom.2009.23}
    }
Peter Lincoln¹*, Andrew Nashel¹*, Adrian Ilie¹*, Herman Towles¹*, Gregory Welch¹*, Henry Fuchs¹*
  • ¹: Dept. of Computer Science, UNC-Chapel Hill, Chapel Hill, NC 27599-3175
* Contact email: plincoln@cs.unc.edu, nashel@cs.unc.edu, adyilie@cs.unc.edu, herman@cs.unc.edu, welch@cs.unc.edu, fuchs@cs.unc.edu

Abstract

We present a prototype display system for group teleconferencing that delivers the proper view to each of multiple local viewers in different locations. While current state-of-the-art commercial teleconferencing systems can provide high-definition video and proper placement and scaling of remote participants, they cannot support correct eye gaze for multiple users. Generating a distinct and spatially appropriate image for each local participant requires both a display capable of presenting multiple simultaneous views, one to each local observer, and multiple aligned cameras. If each local participant observes the remote participants from the appropriate angle, viewers can properly identify where other participants are looking.

Our system provides view-appropriate imagery for multiple users by spatially multiplexing the output of the display. We achieve this by placing a lenticular sheet over the surface of the display, which directs light from different pixels in different directions. With knowledge of the subset of the display surface each participant can see, it is possible to combine the remote cameras' images into a single composite image. When viewed through the multiplexing layer, this image appears to each local participant as the appropriate camera image. The prototype system uses a camera-based display calibration technique that determines which pixels are visible from an arbitrary viewpoint without requiring a physical model of the display.
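
As a rough illustration of the compositing step described above (not part of the paper; the function composite_views and the view_map input are hypothetical), the following Python sketch interleaves one image per local viewer into a single display image using a per-pixel map of which viewer sees each display pixel, the kind of output a camera-based calibration like the one described could provide.

    import numpy as np

    def composite_views(camera_images, view_map):
        """Combine one image per local viewer into a single display image.

        camera_images : list of HxWx3 arrays, one remote-camera image per
                        viewer, already resampled to the display resolution.
        view_map      : HxW integer array; view_map[y, x] is the index of the
                        viewer who sees display pixel (x, y) through the
                        lenticular sheet (as a camera-based calibration might
                        determine).
        """
        h, w = view_map.shape
        composite = np.zeros((h, w, 3), dtype=camera_images[0].dtype)
        for viewer_idx, image in enumerate(camera_images):
            mask = view_map == viewer_idx      # pixels visible to this viewer
            composite[mask] = image[mask]      # copy that viewer's pixels
        return composite

    if __name__ == "__main__":
        # Three viewers, 1080p display, synthetic flat-color inputs.
        h, w, n_viewers = 1080, 1920, 3
        images = [np.full((h, w, 3), 80 * i, dtype=np.uint8)
                  for i in range(n_viewers)]
        # A real view_map comes from calibration; here display columns are
        # assigned round-robin only to mimic vertical lenticular interleaving.
        view_map = np.tile(np.arange(w) % n_viewers, (h, 1))
        display_image = composite_views(images, view_map)
        print(display_image.shape)  # (1080, 1920, 3)

In an actual system the view map would be regenerated whenever viewer positions or the lenticular alignment change; the round-robin column assignment here is purely a stand-in for the calibrated per-pixel visibility data.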