Persistent Link:
http://hdl.handle.net/10150/290583
Title:
Construction and solution of Markov reward models
Author:
Qureshi, Muhammad Akber, 1964-
Issue Date:
1996
Publisher:
The University of Arizona.
Rights:
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract:
Stochastic Petri nets (SPNs) and their extensions are a popular method for evaluating a wide variety of systems. In most cases, the measures of interest concerning a system's characteristics can be defined at the net level by means of reward variables. Depending on the measures, these net-level reward models are solved either by first generating a state-level reward model or by directly generating paths from the net-level description. In this thesis, we propose algorithms for generating state-level reward models as well as for obtaining solutions directly from net-level reward models when those models are specified as stochastic activity networks (SANs) with a "step-based reward structure." Moreover, we propose algorithms for computing the expected value and the probability distribution function of a reward variable at specified time instants, and for computing the probability distribution function of the reward accumulated during a finite interval. The interval may correspond to the mission period of a mission-critical system, the time between scheduled maintenance actions, or a warranty period, while the time instants may be critical instants during these intervals. The proposed algorithms avoid the construction of state-level representations and the memory-growth problems experienced when previous approaches are applied to large models. Furthermore, we study the effect of workload on the availability and response time of voting algorithms. Voting algorithms are a popular way to provide data consistency in replicated data systems. Many models have been developed to study the degree to which replication increases the availability of data, and some have been developed to study the cost incurred in maintaining consistency. However, little work has been done to evaluate the time it takes to serve a request while accounting for server and network failures, or to determine the effect of workload on these measures. In this thesis, we use SANs to study the effect of workload on the availability and mean response time of two variants of a replicated file system that maintain data consistency, one using a static voting algorithm and the other using a dynamic voting algorithm.
Type:
text; Dissertation-Reproduction (electronic)
Keywords:
Engineering, Electronics and Electrical.; Engineering, Industrial.; Operations Research.
Degree Name:
Ph.D.
Degree Level:
doctoral
Degree Program:
Graduate College; Electrical and Computer Engineering
Degree Grantor:
University of Arizona
Advisor:
Sanders, William H.

ProQuest Identifier:
9706170
Bib Record Identifier:
.b34290588