
Optimization of Stochastic Discrete Systems and Control on Complex Networks
(English)
Computational Networks
Dmitrii Lozovanu & Stefan Pickl

Print on Demand - this item is printed for you!

86,45 €

incl. VAT · free shipping
This product is printed on demand; delivery time approx. 14 working days.


Product Description


Systematizes the most important existing methods of stochastic dynamic optimization

Describes new algorithms for solving different classes of stochastic dynamic programming problems

Presents methods to solve practical decision problems from diverse areas such as ecology, economics, engineering and communication systems

Includes supplementary material: sn.pub/extras





This book presents the latest findings on stochastic dynamic programming models and on solving optimal control problems in networks. It includes the authors' new findings on determining the optimal solution of discrete optimal control problems in networks and on solving game variants of Markov decision problems in the context of computational networks. First, the book studies the finite state space of Markov processes and reviews the existing methods and algorithms for determining the main characteristics in Markov chains, before proposing new approaches based on dynamic programming and combinatorial methods. Chapter two is dedicated to infinite horizon stochastic discrete optimal control models and Markov decision problems with average and expected total discounted optimization criteria, while Chapter three develops a special game-theoretical approach to Markov decision processes and stochastic discrete optimal control problems. In closing, the book's final chapter is devoted to finite horizon stochastic control problems and Markov decision processes. The algorithms developed represent a valuable contribution to the important field of computational network theory.
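
To make the expected total discounted criterion mentioned above concrete, the following minimal sketch runs value iteration for a finite-state Markov decision problem. It is purely illustrative and not taken from the book; the two-state, two-action model, the rewards and the discount factor are invented for this example.

import numpy as np

# Hypothetical two-state, two-action model: P[a][s][s'] is the transition
# probability from state s to s' under action a, R[a][s] the one-step reward.
P = np.array([[[0.9, 0.1], [0.4, 0.6]],   # action 0
              [[0.2, 0.8], [0.7, 0.3]]])  # action 1
R = np.array([[1.0, 0.0],                 # action 0
              [2.0, 0.5]])                # action 1
gamma = 0.9                               # discount factor

V = np.zeros(2)                           # current value-function estimate
for _ in range(1000):
    # Bellman optimality update: V(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
    Q = R + gamma * P @ V                 # Q[a][s]
    V_new = Q.max(axis=0)
    if np.abs(V_new - V).max() < 1e-10:   # stop once the update has converged
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)                 # greedy (stationary optimal) policy
print("optimal values:", V, "optimal actions per state:", policy)

Policy iteration or a linear-programming formulation would work equally well for such a small model; the algorithms treated in the book address far more general classes of problems, including game variants on networks.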


"This book contributes to the systematization of the most relevant existing methods for these problems by introducing new algorithms for solving different classes of stochastic dynamic programming problems. ... The mathematical and computational level of the book will enable students and practitioners to deepen their understanding of the topic. Numerous examples are included to illustrate the proposed algorithms and methods.” (Rosario Romera, Mathematical Reviews, July, 2015)




About the Author

Prof. Dr. Stefan Pickl is Professor of Operations Research at the Universität der Bundeswehr in Munich. He studied mathematics, electrical engineering, and philosophy at TU Darmstadt and EPFL Lausanne from 1987 to 1993 (Dipl.-Ing. 1993) and earned his doctorate in 1998 with an award. He was an assistant professor at the University of Cologne (Dr. habil. 2005; venia legendi in Mathematics) and has been a visiting professor at the University of New Mexico (USA), the University of Graz (Austria), and the University of California, Berkeley, as well as a visiting scientist at Sandia National Laboratories, Los Alamos National Laboratory, the Santa Fe Institute for Complex Systems, and MIT. He is associated with the Centre for the Advanced Study of Algorithms (CASA, USA) and the Center for Network Innovation and Experimentation (CENETIX, USA), serves as vice-chair of the EURO working group "Experimental OR", and is involved in a program for highly gifted pupils as well as the research programs "Intelligent Networks and Security Structures" (INESS) and "Critical Infrastructures and System Analyses" (CRISYS). He received international best paper awards in 2003, 2005, and 2007 and founded COMTESSA (Competence Center for Operations Research, Strategic Planning Management, Safety & Security ALLIANCE).

Prof. Dr. Dmitrii Lozovanu received his PhD in mathematics in 1980 from the Institute of Cybernetics of the Academy of Sciences of Ukraine in Kiev. After defending his habilitation thesis in 1991, he became a professor of Computer Science. He is the head of the Department of Applied Mathematics at the Faculty of Mathematics and Computer Science of Moldova State University, Chisinau. His research interests are in discrete optimization, game theory, optimal control, and stochastic decision processes.


Table of Contents



Discrete stochastic processes, numerical methods for Markov chains and polynomial time algorithms.
Stochastic optimal control problems and Markov decision processes with infinite time horizon.
A game-theoretical approach to Markov decision processes, stochastic positional games and multicriteria control models.
Dynamic programming algorithms for finite horizon control problems and Markov decision processes.
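
The final chapter listed above concerns finite horizon control problems. As a purely illustrative sketch, not taken from the book, backward-induction dynamic programming for a small, invented finite-horizon Markov decision problem can be written as follows:

import numpy as np

# Hypothetical two-state, two-action model (same conventions as in the earlier sketch):
# P[a][s][s'] transition probabilities, R[a][s] one-step rewards.
P = np.array([[[0.9, 0.1], [0.4, 0.6]],
              [[0.2, 0.8], [0.7, 0.3]]])
R = np.array([[1.0, 0.0],
              [2.0, 0.5]])
T = 5                                     # horizon length (invented)

V = np.zeros(2)                           # terminal values V_T(s) = 0
policy = np.zeros((T, 2), dtype=int)      # time-dependent decisions policy[t][s]
for t in reversed(range(T)):
    Q = R + P @ V                         # Q_t[a][s] = R(s,a) + E[V_{t+1}(s')]
    policy[t] = Q.argmax(axis=0)          # best action at stage t in each state
    V = Q.max(axis=0)                     # V_t(s)

print("expected total reward from each start state:", V)
print("optimal decisions per stage:", policy)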




