The 8th EAI International Conference on Mobile Computing, Applications and Services

Research Article

DXTK: Enabling Resource-efficient Deep Learning on Mobile and Embedded Devices with the DeepX Toolkit

@INPROCEEDINGS{10.4108/eai.30-11-2016.2267463,
    author={Nicholas Lane and Sourav Bhattacharya and Akhil Mathur and Claudio Forlivesi and Fahim Kawsar},
    title={DXTK: Enabling Resource-efficient Deep Learning on Mobile and Embedded Devices with the DeepX Toolkit},
    proceedings={The 8th EAI International Conference on Mobile Computing, Applications and Services},
    publisher={ACM},
    proceedings_a={MOBICASE},
    year={2016},
    month={12},
    keywords={wearables; mobile sensing; deep learning; toolkit},
    doi={10.4108/eai.30-11-2016.2267463}
}
    
Nicholas Lane¹˒*, Sourav Bhattacharya², Akhil Mathur², Claudio Forlivesi², Fahim Kawsar²
  • 1: Nokia Bell Labs and University College London
  • 2: Nokia Bell Labs
*Contact email: Niclane@acm.org

Abstract

Deep learning is having a transformative effect on how sensor data are processed and interpreted. As a result, it is becoming increasingly feasible to build sensor-based computational models that are much more robust to real-world noise and complexity than previously possible. It is paramount that these innovations reach mobile and embedded devices that often rely on understanding and reacting to sensor data. However, deep models conventionally demand a level of system resources (e.g., memory and computation) that makes them problematic to run directly on constrained devices.

In this work, we present the DeepX toolkit (DXTK), an open-source collection of software components that simplifies the execution of deep models on embedded and mobile platforms. DXTK contains a number of pre-trained low-resource deep models that users can quickly adopt and integrate for their particular application needs. It also offers a range of runtime options for executing deep models on devices ranging from Android platforms to Linux-based embedded platforms. At the heart of DXTK, however, is a series of optimization techniques (namely weight/sparse factorization, convolution separation, precision scaling, and parameter cleaning). These optimizers provide complementary methods for shaping the system resource needs of a model and are compatible with a wide variety of deep neural network forms. We hope that DXTK accelerates the study of resource-constrained deep learning in the community.
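To give a flavor of one of the techniques named above: precision scaling trades numerical precision for memory by storing weights in a narrower format. DXTK's own implementation is not reproduced here; the following is a minimal, generic sketch of symmetric 8-bit weight quantization (the function names `quantize_int8` and `dequantize` are illustrative, not part of DXTK):

```python
def quantize_int8(weights):
    """Symmetric linear quantization: map floats to integers in [-127, 127].

    Stored as one int8 per weight plus a single float scale factor,
    this cuts weight storage roughly 4x relative to float32.
    """
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized representation."""
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.05, 0.31, -0.64]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Rounding error is bounded by half a quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
assert max_err <= 0.5 * scale + 1e-9
```

In practice the accuracy impact of this kind of scaling depends on the model, which is one reason a toolkit would expose it as one of several selectable optimizers rather than applying it unconditionally.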