Collaborative Computing: Networking, Applications and Worksharing. 17th EAI International Conference, CollaborateCom 2021, Virtual Event, October 16-18, 2021, Proceedings, Part I

Research Article

A Novel Gaze-Point-Driven HRI Framework for Single-Person

BibTeX
    @INPROCEEDINGS{10.1007/978-3-030-92635-9_38,
        author={Wei Li and Pengfei Yi and Dongsheng Zhou and Qiang Zhang and Xiaopeng Wei and Rui Liu and Jing Dong},
        title={A Novel Gaze-Point-Driven HRI Framework for Single-Person},
        proceedings={Collaborative Computing: Networking, Applications and Worksharing. 17th EAI International Conference, CollaborateCom 2021, Virtual Event, October 16-18, 2021, Proceedings, Part I},
        proceedings_a={COLLABORATECOM},
        year={2022},
        month={1},
        keywords={Human-robot interaction; Gaze point; Grab},
        doi={10.1007/978-3-030-92635-9_38}
    }
Wei Li, Pengfei Yi, Dongsheng Zhou*, Qiang Zhang, Xiaopeng Wei, Rui Liu, Jing Dong
    *Contact email: zhouds@dlu.edu.cn

    Abstract

    Human-robot interaction (HRI) is an essential form of information exchange in the age of intelligent systems, and new human-robot collaboration modes are built on it. Most existing HRI strategies have limitations. First, limb-based HRI relies heavily on the user’s physical movements, making interaction impossible when physical activity is limited. Second, voice-based HRI is vulnerable to noise in the interaction environment. Finally, while gaze-based HRI reduces both the reliance on physical movements and the impact of environmental noise, external wearables make the interaction less convenient and natural and increase costs. This paper proposes a novel gaze-point-driven interaction framework that uses only RGB cameras, providing a more convenient and less restricted way of interacting. First, gaze points are estimated from images captured by the cameras. Then, targets are determined by matching these points against the positions of detected objects. Finally, the robot grabs the object the interactor is gazing at. Experiments on the Baxter robot under different lighting conditions, distances, and users demonstrate the robustness of the framework.
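
    A minimal Python sketch of the three-stage pipeline the abstract describes (gaze estimation, target matching, grabbing). Every function below, and the robot.grab call, is a hypothetical placeholder assumed for illustration; the paper's actual models and robot API are not specified here.

    import math

    def estimate_gaze_point(frame):
        """Stage 1: return the interactor's gaze point (x, y) in image
        coordinates. In the framework this would be a gaze estimator
        running on the RGB frame; here it is a placeholder."""
        raise NotImplementedError("plug in a gaze-estimation model")

    def detect_objects(frame):
        """Return a list of (label, (x, y)) object centers in the frame,
        the positions against which gaze points are matched."""
        raise NotImplementedError("plug in an object detector")

    def match_target(gaze_point, objects, max_dist=50.0):
        """Stage 2: pick the object whose center lies nearest the gaze
        point. Returns None when no object is within max_dist pixels,
        so the robot does not act on an ambiguous gaze."""
        best, best_dist = None, max_dist
        for label, center in objects:
            dist = math.hypot(center[0] - gaze_point[0],
                              center[1] - gaze_point[1])
            if dist < best_dist:
                best, best_dist = (label, center), dist
        return best

    def run_cycle(frame, robot):
        """One interaction cycle: gaze -> target -> grab."""
        gaze = estimate_gaze_point(frame)
        target = match_target(gaze, detect_objects(frame))
        if target is not None:
            label, center = target
            robot.grab(center)  # assumed robot interface with a grab() method

    The nearest-neighbor matching with a distance cutoff is one simple way to realize the "matching these points and positions of objects" step; the paper's own matching criterion may differ.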

    Keywords
    Human-robot interaction, Gaze point, Grab
    Published
    2022-01-01
    Appears in
    SpringerLink
    http://dx.doi.org/10.1007/978-3-030-92635-9_38
    Copyright © 2021–2025 ICST
