Equilibrium characterization for a class of dynamical neural networks with applications to learning and synthesis.

Persistent Link:
http://hdl.handle.net/10150/185365
Title:
Equilibrium characterization for a class of dynamical neural networks with applications to learning and synthesis.
Author:
Sudharsanan, Subramania Iyer.
Issue Date:
1991
Publisher:
The University of Arizona.
Rights:
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract:
Interest in applying neural networks to information processing problems has grown considerably in recent years. The computational capabilities of neural networks stem from a massively parallel, dense interconnection of simple nonlinear elements. In this dissertation, a widely studied class of dynamical neural networks is investigated for its general computational capabilities. This is achieved by considering the design of the network in several application scenarios, viz. quadratic minimization, associative memory, and nonlinear input-output mapping. The design for each application is facilitated by a qualitative analysis of the equilibrium points of a network whose elements are appropriately tailored to the specific application. Two design methodologies, learning and synthesis, are addressed. The equilibrium characterization studies yield specific results regarding the equilibrium points: the degree of exponential stability, estimates of regions of attraction, and conditions for confining the equilibria to certain regions of the state-space. The synthesis procedure developed from these results for employing the network in quadratic minimization guarantees a unique equilibrium point. It is shown that the speed of computation can be increased by adjusting certain network parameters and is independent of the problem size. The associative memory network is synthesized by tailoring the neuronal activation functions to satisfy certain stability requirements and by using an interconnection structure that is not necessarily symmetric. Drawing on insights from the equilibrium characterization, a simple and efficient learning rule for the interconnection structure is also devised.
Convergence properties of the learning rule are established, and guidelines are provided for selecting the initial values and the adaptation step-size parameters. The learning rule is extended to a novel three-layer neural network architecture that functions as a nonlinear input-output mapper. The feasibility of the developed learning rules and synthesis procedures is demonstrated through a number of applications, viz. parameter estimation and state estimation in linear systems, design of a class of pattern recognition filters, storage of specific pattern vectors, and nonlinear system identification.
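The dissertation itself is not reproduced in this record. As a hedged illustration of the kind of dynamical system the abstract describes for quadratic minimization, the sketch below Euler-integrates a gradient flow whose unique equilibrium is the minimizer of a quadratic form; the function name `quadratic_min_flow`, the gain parameter `mu`, and all numerical values are illustrative assumptions, not the author's actual network formulation.

```python
import numpy as np

# Hedged illustration only: a gradient-flow analogue of the quadratic-minimizing
# dynamical network described in the abstract, not the dissertation's formulation.
# For symmetric positive-definite Q, the flow x' = -mu * (Q x + b) has the unique
# equilibrium x* = -Q^{-1} b, and the gain mu scales the convergence rate
# independently of the problem dimension.

def quadratic_min_flow(Q, b, mu=5.0, dt=0.01, steps=2000):
    """Euler-integrate the flow from the origin and return the final state."""
    x = np.zeros(len(b))
    for _ in range(steps):
        x -= dt * mu * (Q @ x + b)  # descend the gradient of 0.5 x'Qx + b'x
    return x

Q = np.array([[3.0, 1.0], [1.0, 2.0]])  # positive definite (illustrative)
b = np.array([1.0, -1.0])
x = quadratic_min_flow(Q, b)            # converges to np.linalg.solve(Q, -b)
```

Increasing `mu` (within the step-size stability limit `mu * dt * lambda_max < 2`) speeds convergence without changing the equilibrium, loosely mirroring the abstract's claim that computation speed is tunable via network parameters.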
Type:
text; Dissertation-Reproduction (electronic)
Keywords:
Dissertations, Academic; Artificial intelligence; Computer science.
Degree Name:
Ph.D.
Degree Level:
doctoral
Degree Program:
Electrical and Computer Engineering; Graduate College
Degree Grantor:
University of Arizona
Advisor:
Sundareshan, Malur K.

Committee Members:
Cellier, Francois E.; Schowengerdt, Robert A.
ProQuest Identifier:
9121552
OCLC Identifier:
709596452
All Items in UA Campus Repository are protected by copyright, with all rights reserved, unless otherwise indicated.