Science and Technologies for Smart Cities. 6th EAI International Conference, SmartCity360°, Virtual Event, December 2-4, 2020, Proceedings

Research Article

Scalable Approximate Computing Techniques for Latency and Bandwidth Constrained IoT Edge

  • @INPROCEEDINGS{10.1007/978-3-030-76063-2_20,
        author={Anjus George and Arun Ravindran},
        title={Scalable Approximate Computing Techniques for Latency and Bandwidth Constrained IoT Edge},
        proceedings={Science and Technologies for Smart Cities. 6th EAI International Conference, SmartCity360°, Virtual Event, December 2-4, 2020, Proceedings},
        proceedings_a={SMARTCITY},
        year={2021},
        month={5},
        keywords={Edge computing, IoT, Approximate computing, Machine learning, Machine vision},
        doi={10.1007/978-3-030-76063-2_20}
    }
    
Anjus George1,*, Arun Ravindran1
  • 1: University of North Carolina at Charlotte, Charlotte
*Contact email: ageorg28@uncc.edu

Abstract

Machine vision applications at the IoT Edge face bandwidth and latency constraints due to the large size of video data. In this paper we propose approximate computing, which trades off inference accuracy against video frame size, as a potential solution. We present a number of low-compute-overhead video frame modifications that reduce the video frame size while achieving acceptable levels of inference accuracy. We present a heuristic-based design space pruning and a Categorical Boosting (CatBoost) based machine learning model as two approaches to achieve scalable performance in determining the appropriate video frame modifications that satisfy design constraints. Experimental results for an object detection application on the Microsoft COCO 2017 data set indicate that the proposed methods reduce the video frame size by up to 71.3% while achieving an inference accuracy of 80.9% of that of the unmodified video frames. The machine learning model has a higher training cost but a lower inference time, and is more scalable and flexible than the heuristic design space pruning algorithm.
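The abstract does not enumerate the specific frame modifications studied, but a minimal, hypothetical sketch of one modification of this general kind (spatial subsampling, with illustrative names not taken from the paper) shows how frame size can be traded against the information available for inference:

```python
# Hypothetical sketch of a low-overhead frame modification: spatial
# subsampling. The function and frame dimensions are illustrative
# assumptions, not the paper's actual implementation.

def subsample(frame, factor):
    """Keep every `factor`-th pixel in each dimension of a 2-D frame."""
    return [row[::factor] for row in frame[::factor]]

# A synthetic 640x480 grayscale frame (constant pixel value 128).
frame = [[128] * 640 for _ in range(480)]

small = subsample(frame, 2)
orig_px = len(frame) * len(frame[0])
small_px = len(small) * len(small[0])
print(f"size reduction: {1 - small_px / orig_px:.1%}")  # prints "size reduction: 75.0%"
```

Sending the smaller frame cuts transmission bandwidth and latency at the edge; the open question the paper addresses is selecting modifications (and their parameters) whose accuracy loss stays within an acceptable bound.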

Keywords
Edge computing, IoT, Approximate computing, Machine learning, Machine vision
Published
2021-05-22
Appears in
SpringerLink
http://dx.doi.org/10.1007/978-3-030-76063-2_20
Copyright © 2020–2025 ICST