Research Article
Memory efficient analysis for a class of large structured Markov chains: work in progress
@INPROCEEDINGS{10.4108/ICST.VALUETOOLS2009.7424,
  author={Paolo Ballarini and Andr\'{a}s Horv\'{a}th},
  title={Memory efficient analysis for a class of large structured Markov chains: work in progress},
  proceedings={4th International ICST Workshop on Tools for Solving Structured Markov Chains},
  publisher={ACM},
  proceedings_a={SMCTOOLS},
  year={2010},
  month={5},
  keywords={},
  doi={10.4108/ICST.VALUETOOLS2009.7424}
}
- Paolo Ballarini
- András Horváth
Year: 2010
Memory efficient analysis for a class of large structured Markov chains: work in progress
SMCTOOLS
ICST
DOI: 10.4108/ICST.VALUETOOLS2009.7424
Abstract
We consider a class of Markov chains modelling concurrent activities with phase-type (PH) distributed durations. Specifically, we focus on systems whose dynamics are characterised by the parallel execution of a number of active jobs, and in which the termination of an active job either causes the remaining active jobs to be restarted from scratch (preemptive restart policy) or deactivates them, and can lead to the activation of new jobs. In such a framework, the state space can be partitioned into disjoint sets called macrostates. Each macrostate represents the parallel execution of the currently active jobs, and a transition to another macrostate corresponds to the termination of one of the active jobs. Models of this kind are subject to the phenomenon of state-space explosion, to an extent proportional both to the level of concurrency (i.e., the number of active jobs in a macrostate) and to the dimension of the phase-type timing (i.e., the number of phases used to model the jobs' execution). Although techniques exist that handle the infinitesimal generator matrix of such models in a memory-efficient manner (e.g., in terms of Kronecker expressions), classical Markovian analysis remains hindered by the memory required to store the vector of steady-state or transient probabilities.
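The size blow-up described above can be illustrated with a small sketch (the matrices and phase counts here are invented for illustration, not taken from the paper): when two PH-distributed jobs run in parallel, the subgenerator of the macrostate is the Kronecker sum of the per-job subgenerators, so the factors stay small while the expanded state space grows multiplicatively with each additional job.

```python
import numpy as np

# Illustrative PH subgenerators for two jobs running in parallel:
# job 1 has 2 phases, job 2 has 3 phases (values chosen arbitrarily).
T1 = np.array([[-3.0,  2.0],
               [ 0.0, -4.0]])
T2 = np.array([[-1.0,  1.0,  0.0],
               [ 0.0, -2.0,  1.5],
               [ 0.0,  0.0, -5.0]])

# While both jobs are active, the macrostate evolves according to the
# Kronecker sum  Q = T1 (+) T2 = T1 ⊗ I + I ⊗ T2: the stored factors
# remain 2x2 and 3x3, but the expanded macrostate has 2 * 3 = 6 phases.
I1 = np.eye(T1.shape[0])
I2 = np.eye(T2.shape[0])
Q = np.kron(T1, I2) + np.kron(I1, T2)

print(Q.shape)  # (6, 6): the state count is the product of the phase counts
```

With k concurrent jobs of m phases each, the macrostate has m^k phases, which is exactly why storing a probability vector over the expanded space becomes the bottleneck even when the generator itself is kept in Kronecker form.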
In this paper we show that a Markov chain satisfying the above assumptions can be analysed through calculations performed on the (small) matrices describing the durations of the jobs, without explicit storage of the (large) vectors of transient or steady-state probabilities.