Reward model solution methods with impulse and rate rewards: An algorithm and numerical results

Persistent Link:
http://hdl.handle.net/10150/278184
Title:
Reward model solution methods with impulse and rate rewards: An algorithm and numerical results
Author:
Qureshi, Muhammad Akber, 1964-
Issue Date:
1992
Publisher:
The University of Arizona.
Rights:
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract:
Reward models have become an important method for specifying performability models for many types of systems. Many methods have been proposed for solving these reward models, but none has proven applicable across all system classes and sizes. Furthermore, reward models have usually been specified at the state level, which can be extremely cumbersome for realistic models. We describe a method for specifying reward models as stochastic activity networks (SANs) with impulse and rate rewards, and a method for solving these models via randomization. The method extends one proposed by de Souza e Silva and Gail, in that impulse and rate rewards are specified at the SAN level and solved in a single model. Furthermore, we propose a novel method of discarding low-probability trajectories, together with algorithms to compute bounds on the error thus introduced. The methodology is presented, together with results on the time and space efficiency of a particular implementation.
Type:
text; Thesis-Reproduction (electronic)
Keywords:
Engineering, Electronics and Electrical; Computer Science
Degree Name:
M.S.
Degree Level:
masters
Degree Program:
Graduate College
Degree Grantor:
University of Arizona
Advisor:
Sanders, William H.
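The randomization (uniformization) technique the abstract refers to can be illustrated on a plain continuous-time Markov chain. The sketch below is a generic textbook version, not the thesis's SAN-level algorithm with impulse and rate rewards; the function and variable names (`uniformize`, `Q`, `pi0`) are hypothetical. Randomization converts the CTMC with generator Q into a discrete-time chain P = I + Q/Λ subordinated to a Poisson process of rate Λ, and truncates the Poisson sum once the accumulated probability mass exceeds 1 - ε, which bounds the truncation error — the same idea behind the thesis's bounded discarding of low-probability trajectories.

```python
import math

def uniformize(Q, pi0, t, eps=1e-10):
    """Transient state probabilities of a CTMC at time t via randomization.

    Q    -- generator matrix (list of lists); Q[i][i] = -sum of off-diagonal row i
    pi0  -- initial probability distribution
    eps  -- bound on the truncation error of the Poisson sum
    """
    n = len(Q)
    lam = max(-Q[i][i] for i in range(n)) or 1.0   # uniformization rate Lambda
    # Subordinated DTMC: P = I + Q / Lambda (a proper stochastic matrix)
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    v = list(pi0)                  # pi0 * P^k, updated each iteration
    pois = math.exp(-lam * t)      # Poisson(k; Lambda*t) weight, starting at k = 0
    acc = [pois * x for x in v]    # running weighted sum over k
    mass, k = pois, 0
    while mass < 1.0 - eps:        # stop when remaining Poisson mass < eps
        k += 1
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
        pois *= lam * t / k
        mass += pois
        acc = [a + pois * x for a, x in zip(acc, v)]
    return acc

# Illustrative two-state availability model: failure rate 0.5, repair rate 1.0,
# starting in the "up" state. Analytically, pi_up(t) = 2/3 + (1/3) e^{-1.5 t}.
Q = [[-0.5, 0.5], [1.0, -1.0]]
pi_t = uniformize(Q, [1.0, 0.0], t=2.0)
```

Because the discarded Poisson tail has mass below `eps`, the result is accurate to that tolerance, which is the appeal of randomization over, say, ODE integration with harder-to-characterize error.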

Full metadata record

DC Field | Value | Language
dc.language.iso | en_US | en_US
dc.title | Reward model solution methods with impulse and rate rewards: An algorithm and numerical results | en_US
dc.creator | Qureshi, Muhammad Akber, 1964- | en_US
dc.contributor.author | Qureshi, Muhammad Akber, 1964- | en_US
dc.date.issued | 1992 | en_US
dc.publisher | The University of Arizona. | en_US
dc.rights | Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author. | en_US
dc.description.abstract | Reward models have become an important method for specifying performability models for many types of systems. Many methods have been proposed for solving these reward models, but no method has proven itself to be applicable over all system classes and sizes. Furthermore, specification of reward models has usually been done at the state level, which can be extremely cumbersome for realistic models. We describe a method to specify reward models as stochastic activity networks (SANs) with impulse and rate rewards, and a method by which to solve these models via randomization. The method is an extension of one proposed by de Souza e Silva and Gail in which impulse and rate rewards are specified at the SAN level, and solved in a single model. Furthermore, a novel method of discarding trajectories of low probabilities with algorithms to compute bounds on the injected error is proposed. The methodology is presented, together with the results on the time and space efficiency of a particular implementation. | en_US
dc.type | text | en_US
dc.type | Thesis-Reproduction (electronic) | en_US
dc.subject | Engineering, Electronics and Electrical. | en_US
dc.subject | Computer Science. | en_US
thesis.degree.name | M.S. | en_US
thesis.degree.level | masters | en_US
thesis.degree.discipline | Graduate College | en_US
thesis.degree.grantor | University of Arizona | en_US
dc.contributor.advisor | Sanders, William H. | en_US
dc.identifier.proquest | 1349471 | en_US
dc.identifier.bibrecord | .b27698841 | en_US
All Items in UA Campus Repository are protected by copyright, with all rights reserved, unless otherwise indicated.