DECISION MAKING UNDER UNCERTAINTY IN DYNAMIC MULTI-STAGE ATTACKER-DEFENDER GAMES

Persistent Link:
http://hdl.handle.net/10150/204331
Title:
DECISION MAKING UNDER UNCERTAINTY IN DYNAMIC MULTI-STAGE ATTACKER-DEFENDER GAMES
Author:
Luo, Yi
Issue Date:
2011
Publisher:
The University of Arizona.
Rights:
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Embargo:
Release after 4/22/2013
Abstract:
This dissertation presents efficient, on-line, convergent methods for finding defense strategies against attacks in dynamic multi-stage attacker-defender games with adaptive learning. The effort culminated in four papers, submitted to high-quality journals and a book, which have been partially published. The first paper presents a novel fictitious play approach to describe the interactions between the attackers and the network administrator over the course of a dynamic game. A multi-objective optimization methodology is used to predict the attacker's best actions at each decision node. The administrator keeps track of the attacker's actions, updates his knowledge of the attacker's behavior and objectives after each detected attack, and uses this information to refine the prediction of the attacker's future actions and to find his own best-response strategies. The second paper proposes a Dynamic game tree based Fictitious Play (DFP) approach to describe the players' repeated interactive decision processes. Each player considers all possible future interactions, together with their uncertainties, based on what has been learned about the opponent's decision process (including risk attitude and objectives). Instead of searching the entire game tree, appropriate future time horizons are dynamically selected for both players. The administrator keeps track of the opponent's actions, predicts the probabilities of possible future attacks, and then chooses his best moves. The third paper introduces an optimization model that maximizes the deterministic equivalent of the random payoff function of a computer network administrator defending the system against random attacks. By introducing new variables, the objective function is transformed into a concave function, and a special optimization algorithm is developed that requires only the computation of the unique solution of a single-variable monotonic equation. The fourth paper, an invited book chapter, proposes a discrete-time stochastic control model to capture the process of finding the defender's best current move. The defender's payoffs at each stage of the game depend on the attacker's and the defender's cumulative efforts and, because of their uncertainty, are treated as random variables. Their certainty equivalents can be approximated from their first and second moments, and these approximations are chosen as the cost functions of the dynamic system. An on-line, convergent, Scenarios-based Proactive Defense (SPD) algorithm, based on Differential Dynamic Programming (DDP), is developed to solve the associated optimal control problem.
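The moment-based certainty equivalents mentioned for the fourth paper can be illustrated with the standard Arrow-Pratt mean-variance approximation; the formula below is a generic sketch of this idea and not necessarily the exact expression used in the dissertation. For a concave utility function $u$ and a random payoff $X$ with mean $\mu = E[X]$ and variance $\sigma^2 = \mathrm{Var}(X)$, the certainty equivalent $\mathrm{CE}(X)$, defined by $u(\mathrm{CE}(X)) = E[u(X)]$, satisfies

\[
\mathrm{CE}(X) \;\approx\; \mu \;-\; \tfrac{1}{2}\, A(\mu)\, \sigma^2,
\qquad
A(\mu) = -\frac{u''(\mu)}{u'(\mu)},
\]

so it depends only on the first and second moments of $X$, with the Arrow-Pratt coefficient $A(\mu)$ capturing the decision maker's risk attitude.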
Type:
text; Electronic Dissertation
Keywords:
Decision making under uncertainty; Dynamic programming; Forecasting; Game theory; Intrusion defense system; Optimal control
Degree Name:
Ph.D.
Degree Level:
doctoral
Degree Program:
Graduate College; Systems & Industrial Engineering
Degree Grantor:
University of Arizona
Advisor:
Szidarovszky, Ferenc

Full metadata record

DC Field | Value | Language
dc.language.iso | en | en_US
dc.title | DECISION MAKING UNDER UNCERTAINTY IN DYNAMIC MULTI-STAGE ATTACKER-DEFENDER GAMES | en_US
dc.creator | Luo, Yi | en_US
dc.contributor.author | Luo, Yi | en_US
dc.date.issued | 2011 | -
dc.publisher | The University of Arizona. | en_US
dc.rights | Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author. | en_US
dc.description.release | Embargo: Release after 4/22/2013 | en_US
dc.type | text | en_US
dc.type | Electronic Dissertation | en_US
dc.subject | Decision making under uncertainty | en_US
dc.subject | Dynamic programming | en_US
dc.subject | Forecasting | en_US
dc.subject | Game theory | en_US
dc.subject | Intrusion defense system | en_US
dc.subject | Optimal control | en_US
thesis.degree.name | Ph.D. | en_US
thesis.degree.level | doctoral | en_US
thesis.degree.discipline | Graduate College | en_US
thesis.degree.discipline | Systems & Industrial Engineering | en_US
thesis.degree.grantor | University of Arizona | en_US
dc.contributor.advisor | Szidarovszky, Ferenc | en_US
dc.contributor.committeemember | Hariri, Salim | en_US
dc.contributor.committeemember | Lin, Wei Hua | en_US
dc.contributor.committeemember | Bayraksan, Güzin | en_US
dc.identifier.proquest | 11644 | -
dc.identifier.oclc | 752261491 | -