Interpolating data in transition probability matrix of Markov chain to improvise average length of stay

Bibliographic Details
Main Authors: Abdul Rahim, Rahela, Jamaluddin, Fadhilah
Format: Article
Language:English
Published: 2017
Subjects:
Online Access:http://repo.uum.edu.my/25906/1/IJBR%208%203%202017%20263%20270.pdf
http://repo.uum.edu.my/25906/
http://www.bipublication.com/ijabr2017sp3.html
Description
Summary:Data interpolation is proposed for estimating the transition probability matrix (TPM) of a Markov chain model. We show that the interpolated estimator is unbiased. To demonstrate its applicability, a model of manpower recruitment policy is developed and analyzed in an Excel spreadsheet. Based on the model, a new estimate of the state transition matrix for each manpower category, driven by the interpolation technique, is devised. The revised Markov chain transition matrix, obtained by embedding the interpolation, can be used as an equation solver to calculate the mean time estimate for each manpower category. The model results were then compared with the classical Markov chain for both the old and new policies by means of mean time estimation. Two scenarios were considered in the study: scenario 1 was based on the historical data pattern over five years, and scenario 2 was based on the new policy. The results showed the possible average length of stay by position and the probability of loss under both scenarios. The proposed data-interpolation-based TPM approach offers a new way of projecting recruitment under policy changes, and the results indicate better estimation of the average length of stay for each category compared with the traditional Markov chain approach.
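The two steps described in the abstract can be sketched as follows. This is a minimal illustration with entirely hypothetical categories and probabilities (not the paper's data): first, transition probabilities observed in two years are linearly interpolated to estimate an intermediate-year TPM; second, the fundamental matrix of the resulting absorbing Markov chain gives the expected length of stay in each manpower category.

```python
import numpy as np

# Hypothetical transitions among three transient manpower categories
# (e.g. junior, senior, manager). The leftover probability mass in each
# row is the chance of leaving the system (absorption, i.e. "loss").
Q_2012 = np.array([
    [0.60, 0.25, 0.00],
    [0.00, 0.70, 0.15],
    [0.00, 0.00, 0.80],
])
Q_2014 = np.array([
    [0.50, 0.30, 0.00],
    [0.00, 0.65, 0.20],
    [0.00, 0.00, 0.75],
])

# Step 1: linear interpolation for the unobserved year 2013
# (midpoint between the two observed matrices, weight 0.5).
Q_2013 = 0.5 * Q_2012 + 0.5 * Q_2014

# Step 2: fundamental matrix N = (I - Q)^(-1); entry N[i, j] is the
# expected number of periods spent in category j when starting in i.
N = np.linalg.inv(np.eye(3) - Q_2013)

# Mean length of stay from each starting category = row sums of N.
length_of_stay = N.sum(axis=1)
print(length_of_stay)
```

Because every transient category eventually leads to absorption, the row sums of the fundamental matrix are finite and directly give the mean sojourn time per starting category, which is the "equation solver" role the abstract assigns to the revised transition matrix.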