Proceedings of the 2015 Workshop on ns-3

Research Article

Pushing the envelope in distributed ns-3 simulations: one billion nodes

  • @INPROCEEDINGS{10.1145/2756509.2756525,
        author={Sergei Nikolaev and Eddy Banks and Peter D. Barnes, Jr. and David R. Jefferson and Steven Smith},
        title={Pushing the envelope in distributed ns-3 simulations: one billion nodes},
        booktitle={Proceedings of the 2015 Workshop on ns-3 (WNS3)},
        publisher={ACM},
        year={2015},
        keywords={High-performance computing, MPI, Network simulation, ns-3, Parallel architecture, Performance},
        doi={10.1145/2756509.2756525}
    }
    
Sergei Nikolaev¹, Eddy Banks¹, Peter D. Barnes, Jr.¹, David R. Jefferson¹, Steven Smith¹
  • ¹ Lawrence Livermore National Laboratory, Livermore, CA

Abstract

In this paper, we describe the results of simulating very large (up to 10⁹ nodes), planetary-scale networks using the ns-3 simulator. The modeled networks consist of a small-world core graph of network routers and an equal number of leaf nodes (one leaf node per router). Each bidirectional link in the simulation carries on-off traffic. Using LLNL's high-performance computing (HPC) clusters, we conducted strong- and weak-scaling studies and investigated on-node scalability for MPI tasks. Scaling relations for both runtime and memory are derived. In addition, we examine the packet transmission rate in the simulation and its scalability. The performance of the default ns-3 parallel scheduler is compared to that of the custom-designed null-message scheduler.
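
For readers unfamiliar with ns-3's MPI support, the sketch below illustrates the mechanics the abstract refers to: binding the simulator implementation to either the default conservative scheduler (ns3::DistributedSimulatorImpl) or the null-message scheduler (ns3::NullMessageSimulatorImpl), pinning nodes to MPI ranks, and driving a link with on-off traffic. It is a minimal two-router example with placeholder link and traffic parameters, not the paper's billion-node small-world configuration.

    // Minimal sketch, not the paper's setup: a two-router, two-rank scenario
    // showing scheduler selection and rank-aware application installation.
    #include "ns3/core-module.h"
    #include "ns3/network-module.h"
    #include "ns3/internet-module.h"
    #include "ns3/point-to-point-module.h"
    #include "ns3/applications-module.h"
    #include "ns3/mpi-interface.h"

    using namespace ns3;

    int main (int argc, char *argv[])
    {
      bool nullmsg = true;  // true: null-message scheduler; false: default

      // The simulator implementation must be bound before enabling MPI.
      if (nullmsg)
        {
          GlobalValue::Bind ("SimulatorImplementationType",
                             StringValue ("ns3::NullMessageSimulatorImpl"));
        }
      else
        {
          GlobalValue::Bind ("SimulatorImplementationType",
                             StringValue ("ns3::DistributedSimulatorImpl"));
        }
      MpiInterface::Enable (&argc, &argv);
      uint32_t rank = MpiInterface::GetSystemId ();
      uint32_t size = MpiInterface::GetSize ();
      uint32_t remoteRank = (size > 1) ? 1 : 0;

      // Every rank builds the full topology; each node is pinned to a rank.
      // A link whose endpoints live on different ranks becomes a
      // PointToPointRemoteChannel automatically.
      Ptr<Node> r0 = CreateObject<Node> (0);
      Ptr<Node> r1 = CreateObject<Node> (remoteRank);

      PointToPointHelper p2p;
      p2p.SetDeviceAttribute ("DataRate", StringValue ("1Gbps"));
      p2p.SetChannelAttribute ("Delay", StringValue ("2ms")); // also the lookahead
      NetDeviceContainer devs = p2p.Install (r0, r1);

      InternetStackHelper internet;
      internet.Install (NodeContainer (r0, r1));
      Ipv4AddressHelper ipv4;
      ipv4.SetBase ("10.0.0.0", "255.255.255.0");
      Ipv4InterfaceContainer ifs = ipv4.Assign (devs);

      // Applications are installed only on the rank that owns the node.
      if (rank == 0)
        {
          OnOffHelper onoff ("ns3::UdpSocketFactory",
                             InetSocketAddress (ifs.GetAddress (1), 9));
          ApplicationContainer src = onoff.Install (r0);
          src.Start (Seconds (1.0));
          src.Stop (Seconds (10.0));
        }
      if (rank == remoteRank)
        {
          PacketSinkHelper sink ("ns3::UdpSocketFactory",
                                 InetSocketAddress (Ipv4Address::GetAny (), 9));
          ApplicationContainer dst = sink.Install (r1);
          dst.Start (Seconds (0.0));
          dst.Stop (Seconds (10.0));
        }

      Simulator::Stop (Seconds (10.0));
      Simulator::Run ();
      Simulator::Destroy ();
      MpiInterface::Disable ();
      return 0;
    }

The program is launched under MPI (e.g., mpirun -np 2); the point-to-point link delay doubles as the lookahead available to the conservative schedulers.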