amsys 17(14): e5

Research Article

Research on Cache Placement in ICN

@ARTICLE{10.4108/eai.28-8-2017.153308,
    author={Yu Zhang and Yangyang Li and Ruide Li and Wenjing Sun},
    title={Research on Cache Placement in ICN},
    journal={EAI Endorsed Transactions on Ambient Systems},
    volume={4},
    number={14},
    publisher={EAI},
    journal_a={AMSYS},
    year={2017},
    month={8},
    keywords={ICN, Steiner tree, link cost, cache placement, group multicast},
    doi={10.4108/eai.28-8-2017.153308}
}
Yu Zhang1, Yangyang Li2,*, Ruide Li2, Wenjing Sun2
  • 1: Beijing Institute of Technology, No.5 Zhongguancun South Street, Haidian District, Beijing; Science and Technology on Information Transmission and Dissemination in Communication Networks Laboratory, the 54th Research Institute of China Electronics Technology Group Corporation
  • 2: Beijing Institute of Technology, No.5 Zhongguancun South Street, Haidian District, Beijing
*Contact email: 743897251@qq.com

Abstract

Ubiquitous in-network caching is one of the key features of Information-Centric Networking (ICN); together with its receiver-driven content retrieval paradigm, it gives ICN better support for content distribution, multicast, mobility, and so on. The cache placement strategy is crucial for improving the utilization of cache space and reducing the occupation of link bandwidth. Most of the literature on caching policies considers the overall cost and bandwidth but ignores the limits on node cache capacity. This paper proposes the G-FMPH algorithm, which takes into account constraints on both the link bandwidth and the cache capacity of nodes. Our algorithm aims at minimizing the overall cost of subsequent content caching. The simulation results show that the proposed algorithm performs better.
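The abstract describes G-FMPH only at a high level: a group-multicast Steiner-tree construction over link costs, extended with bandwidth and cache-capacity constraints. The paper's own pseudocode is not reproduced here, so the following is only a rough sketch of the underlying Minimum Path Heuristic (MPH) for Steiner trees, the family of heuristics FMPH belongs to; all function names, the toy topology, and the omission of the capacity checks are assumptions for illustration, not the authors' method.

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances and predecessors from src.
    graph: adjacency dict {u: {v: link_cost}}."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def mph_steiner(graph, source, terminals):
    """Minimum Path Heuristic: grow a tree from the source by
    repeatedly attaching the cheapest remaining terminal via its
    shortest path to any node already in the tree."""
    tree_nodes = {source}
    tree_edges = set()
    remaining = set(terminals) - tree_nodes
    while remaining:
        best = None  # (cost, terminal, predecessor map)
        for t in tree_nodes:
            dist, prev = dijkstra(graph, t)
            for r in remaining:
                if r in dist and (best is None or dist[r] < best[0]):
                    best = (dist[r], r, prev)
        _, r, prev = best
        node = r  # splice the chosen shortest path into the tree
        while node not in tree_nodes:
            tree_edges.add((prev[node], node))
            tree_nodes.add(node)
            node = prev[node]
        remaining -= tree_nodes
    return tree_nodes, tree_edges

# Toy topology: nodes are routers, weights are link costs.
graph = {
    "a": {"b": 1, "c": 4},
    "b": {"a": 1, "c": 2, "d": 5},
    "c": {"a": 4, "b": 2, "d": 1},
    "d": {"b": 5, "c": 1},
}
nodes, edges = mph_steiner(graph, "a", {"b", "d"})
total_cost = sum(graph[u][v] for u, v in edges)
```

A capacity-aware variant in the spirit of G-FMPH would additionally reject candidate cache nodes whose remaining cache space or incident link bandwidth is exhausted before splicing a path into the tree.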