1st International ICST Conference on Performance Evaluation Methodologies and Tools

Research Article

Automated benchmarking and analysis tool

  • @INPROCEEDINGS{10.1145/1190095.1190101,
        author={Tomas Kalibera and Jakub Lehotsky and David Majda and Branislav Repcek and Michal Tomcanyi and Antonin Tomecek and Petr Tuma and Jaroslav Urban},
        title={Automated benchmarking and analysis tool},
        proceedings={1st International ICST Conference on Performance Evaluation Methodologies and Tools},
        publisher={ACM},
        proceedings_a={VALUETOOLS},
        year={2006},
        month={10},
        keywords={Automated benchmarking, Regression benchmarking},
        doi={10.1145/1190095.1190101}
    }
    
Tomas Kalibera1,*, Jakub Lehotsky1, David Majda1, Branislav Repcek1, Michal Tomcanyi1, Antonin Tomecek1, Petr Tuma1, Jaroslav Urban1
  • 1: Distributed Systems Research Group, Department of Software Engineering, Faculty of Mathematics and Physics, Charles University, Malostranske nam. 25, 118 00 Prague, Czech Republic. phone +420-221914232, fax +420-221914323
*Contact email: been@nenya.ms.mff.cuni.cz

Abstract

Benchmarking is an important performance evaluation technique that provides performance data representative of real systems. Such data can be used to verify the results of performance modeling and simulation, or to detect performance changes. Automated benchmarking is an increasingly popular approach to tracking performance changes during software development, giving developers timely feedback on their work. In contrast with the advances in modeling and simulation tools, tools for automated benchmarking are usually implemented ad hoc for each project, wasting resources and limiting functionality. We present the result of project BEEN, a generic tool for automated benchmarking in a heterogeneous distributed environment. BEEN automates all steps of a benchmark experiment, from software building and deployment through measurement and load monitoring to the evaluation of results. Notable features include the separation of measurement from evaluation and the ability to adaptively scale the benchmark experiment based on the evaluation. BEEN has been designed to facilitate the automated detection of performance changes during software development (regression benchmarking).
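
The abstract does not spell out how regression benchmarking decides that performance has changed, so the sketch below is only one plausible illustration, not BEEN's actual algorithm. It compares response-time samples from two software versions using Welch's t-test, a common choice for samples with possibly unequal variances; the function names, the sample data, and the fixed large-sample critical value are all assumptions made for this example.

```python
import math
import statistics

def welch_t(old, new):
    """Welch's t statistic for two independent samples."""
    m_old, m_new = statistics.fmean(old), statistics.fmean(new)
    v_old, v_new = statistics.variance(old), statistics.variance(new)
    se = math.sqrt(v_old / len(old) + v_new / len(new))
    return (m_new - m_old) / se

def regression_detected(old, new, critical=1.96):
    """Flag a regression when the new version's response times are
    significantly higher (one-sided test, large-sample approximation;
    the critical value is an illustrative assumption)."""
    return welch_t(old, new) > critical

# Hypothetical response times (ms) measured for two software versions.
old_times = [10.1, 10.3, 9.9, 10.2, 10.0, 10.4, 10.1, 10.2]
new_times = [10.9, 11.1, 10.8, 11.0, 11.2, 10.7, 11.0, 10.9]
print(regression_detected(old_times, new_times))  # True: new build is slower
```

In a regression benchmarking setup of the kind the paper describes, such a check would run automatically after each build, comparing the new measurements against a baseline from the previous version and alerting developers when a significant slowdown appears.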