
Research Article

Advancing Robot Perception in Non-Spiral Environments through Camera-based Image Processing

@ARTICLE{10.4108/airo.3591,
    author={Hamid Hoorfar and Alireza Bagheri},
    title={Advancing Robot Perception in Non-Spiral Environments through Camera-based Image Processing},
    journal={EAI Endorsed Transactions on AI and Robotics},
    volume={2},
    number={1},
    publisher={EAI},
    journal_a={AIRO},
    year={2023},
    month={8},
    keywords={robot perception, visibility regions, optimal-time complexity, memory utilization, constant-memory model},
    doi={10.4108/airo.3591}
}
    
Hamid Hoorfar1,*, Alireza Bagheri2
  • 1: University of Maryland, Baltimore
  • 2: Amirkabir University of Technology
*Contact email: hhoorfar@som.umaryland.edu

Abstract

Robot perception relies heavily on camera-based visual input for navigating and interacting with the environment. As robots become integral to a growing range of applications, the need to compute their visibility regions efficiently in complex environments has grown accordingly. The key challenge addressed in this paper is to devise a solution that not only accurately computes the visibility region V of a robot operating in a polygonal environment but also optimizes memory utilization to ensure real-time performance and scalability. The main objective of this research is to propose an algorithm that achieves optimal time complexity and significantly reduces the memory required for visibility-region computation. By focusing on sub-linear memory utilization, we aim to enhance the robot's ability to perceive its surroundings effectively and efficiently. Previous approaches have provided solutions for visibility-region computation in non-spiral environments, but most were not tailored to memory limitations. In contrast, the proposed algorithm achieves optimal O(n) time complexity while reducing memory usage to O(c/log n) variables, where c < n is the number of critical corners in the environment. Leveraging the constant-memory model and a memory-constrained algorithm, we strike a balance between computational efficiency and memory usage. The algorithm's performance is rigorously evaluated through extensive simulations and practical experiments. The results demonstrate its linear-time complexity and a substantial reduction in memory usage without compromising the accuracy of the visibility-region computation. By handling memory constraints efficiently, the robot gains a cost-effective and reliable perception mechanism, making it well suited for a wide range of real-world applications. The constant-memory model and memory-constrained algorithm presented in this paper offer a significant advancement in robot perception capabilities. By optimizing visibility-region computation in polygonal environments, our approach contributes to the efficient operation of robots, enhancing their performance and applicability in complex real-world scenarios. The results of this research hold promising potential for future developments in robotics, computer vision, and related fields.
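To make the notion of a visibility region concrete, the sketch below computes the region visible from a point inside a simple polygon by naive angular ray casting. This is a hypothetical illustration, not the paper's algorithm: it runs in O(n²) time with O(n) memory, whereas the paper's contribution is precisely an optimal O(n)-time method with sub-linear memory. All function names are assumptions introduced here.

```python
import math

def ray_hit(p, d, a, b):
    """Return t > 0 where the ray p + t*d crosses segment ab, or None."""
    ex, ey = b[0] - a[0], b[1] - a[1]
    denom = d[0] * ey - d[1] * ex           # cross(d, b - a)
    if abs(denom) < 1e-12:                   # ray parallel to the edge
        return None
    ax, ay = a[0] - p[0], a[1] - p[1]
    t = (ax * ey - ay * ex) / denom          # distance along the ray
    u = (ax * d[1] - ay * d[0]) / denom      # position along the edge
    if t > 1e-9 and -1e-9 <= u <= 1 + 1e-9:
        return t
    return None

def visibility_region(p, poly):
    """Naive O(n^2) visibility region of point p inside simple polygon poly,
    returned as boundary points ordered by angle around p."""
    n = len(poly)
    edges = [(poly[i], poly[(i + 1) % n]) for i in range(n)]
    hits = []
    for v in poly:
        base = math.atan2(v[1] - p[1], v[0] - p[0])
        # Three rays per vertex: one straight at it, plus tiny angular
        # offsets so the region wraps past reflex corners onto the
        # edges behind them.
        for ang in (base - 1e-4, base, base + 1e-4):
            d = (math.cos(ang), math.sin(ang))
            ts = [t for e in edges for t in [ray_hit(p, d, *e)]
                  if t is not None]
            if ts:
                tmin = min(ts)  # nearest boundary point along this ray
                hits.append((ang, (p[0] + tmin * d[0], p[1] + tmin * d[1])))
    hits.sort(key=lambda h: h[0])
    return [pt for _, pt in hits]
```

In a convex polygon every boundary point is visible, so the visibility region from any interior point is the polygon itself; the interesting cases, and the ones the paper's memory-constrained algorithm targets, arise at reflex ("critical") corners that occlude parts of the boundary.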