Sixth International Conference on Simulation Tools and Techniques

Research Article

Evaluating Simulation Software Components with Player Rating Systems

  • @INPROCEEDINGS{10.4108/icst.simutools.2013.251723,
        author={Jonathan Wienss and Michael Stein and Roland Ewald},
        title={Evaluating Simulation Software Components with Player Rating Systems},
        proceedings={Sixth International Conference on Simulation Tools and Techniques},
        publisher={ICST},
        proceedings_a={SIMUTOOLS},
        year={2013},
        month={7},
        keywords={software components, adaptive software, performance analysis, bayesian learning, online learning},
        doi={10.4108/icst.simutools.2013.251723}
    }
    
Jonathan Wienss¹, Michael Stein¹, Roland Ewald¹,*
  • ¹: University of Rostock
*Contact email: roland.ewald@uni-rostock.de

Abstract

In component-based simulation systems, simulation runs are usually executed by combinations of distinct components, each solving a particular sub-task. If multiple components are available for a given sub-task (e.g., different event queue implementations), a simulation system may rely on an automatic selection mechanism, on a user decision, or, if neither is available, on a predefined default component. However, deciding upon a default component for each kind of sub-task is difficult: such a component should work well across various application domains and various combinations with other components. Furthermore, the performance of individual components cannot be evaluated easily, since performance is typically measured for component combinations as a whole (e.g., the execution time of a simulation run). Finally, the selection of default components should be dynamic, as new and potentially superior components may be deployed to the system over time. We illustrate how player rating systems for team-based games can solve the above problems and evaluate our approach with an implementation of the TrueSkill™ rating system [14], applied in the context of the open-source modeling and simulation framework JAMES II. We also show how such systems can be used to steer performance analysis experiments for component ranking.
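The core idea can be sketched in code: treat each software component as a "player" with a Gaussian skill estimate, treat a component combination as a "team", and update all participating components' ratings after each observed simulation run. The sketch below is a simplified two-team TrueSkill-style update (win/loss only, no draws, no factor-graph message passing) and is illustrative only; the class and function names are our own, not from the paper's JAMES II implementation.

```python
import math

# Conventional TrueSkill prior parameters (assumptions for this sketch).
MU0 = 25.0                # prior mean skill
SIGMA0 = MU0 / 3.0        # prior skill uncertainty
BETA = SIGMA0 / 2.0       # per-player performance variability


def _pdf(x):
    """Standard normal probability density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)


def _cdf(x):
    """Standard normal cumulative distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


class Component:
    """A rated software component; skill is modeled as N(mu, sigma^2)."""

    def __init__(self, name):
        self.name = name
        self.mu = MU0
        self.sigma = SIGMA0


def rate_match(winners, losers):
    """Update ratings after the `winners` combination outperformed the
    `losers` combination (e.g., its simulation run finished faster).

    Simplified TrueSkill team update without draws: team skill is the
    sum of member skills; each member's rating moves in proportion to
    its own uncertainty."""
    players = winners + losers
    c2 = sum(p.sigma ** 2 for p in players) + len(players) * BETA ** 2
    c = math.sqrt(c2)
    t = (sum(p.mu for p in winners) - sum(p.mu for p in losers)) / c
    v = _pdf(t) / _cdf(t)   # mean-shift factor (truncated Gaussian)
    w = v * (v + t)         # variance-shrink factor, 0 < w < 1
    for p in winners:
        p.mu += (p.sigma ** 2 / c) * v
        p.sigma *= math.sqrt(max(1e-9, 1.0 - (p.sigma ** 2 / c2) * w))
    for p in losers:
        p.mu -= (p.sigma ** 2 / c) * v
        p.sigma *= math.sqrt(max(1e-9, 1.0 - (p.sigma ** 2 / c2) * w))
```

For example, if a run using (CalendarQueue, MersenneTwister) beats a run using (HeapQueue, LCG), calling `rate_match` with those two pairs raises the winners' means, lowers the losers', and shrinks every participant's uncertainty, so individual component rankings emerge even though only whole-combination performance is ever observed.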