Stochastic Controls (随机控制)
About the Book
Random control, also known as trial-and-error control, is the most primitive mode of control and the foundation of all other modes of control. It is built entirely on chance and embodies the "try and see" idea in control activities. Its successes are often accompanied by failures; this mode of control carries considerable risk and is generally unsuitable for activities of major consequence. Stochastic Controls (《随机控制》), by 雍炯敏 (Jiongmin Yong), is an English-language textbook on stochastic control.
Table of Contents
Preface
Notation
Assumption Index
Problem Index
Chapter 1. Basic Stochastic Calculus
1. Probability
1.1. Probability spaces
1.2. Random variables
1.3. Conditional expectation
1.4. Convergence of probabilities
2. Stochastic Processes
2.1. General considerations
2.2. Brownian motions
3. Stopping Times
4. Martingales
5. Itô's Integral
5.1. Nondifferentiability of Brownian motion
5.2. Definition of Itô's integral and basic properties
5.3. Itô's formula
5.4. Martingale representation theorems
6. Stochastic Differential Equations
6.1. Strong solutions
6.2. Weak solutions
6.3. Linear SDEs
6.4. Other types of SDEs
Chapter 2. Stochastic Optimal Control Problems
1. Introduction
2. Deterministic Cases Revisited
3. Examples of Stochastic Control Problems
3.1. Production planning
3.2. Investment vs. consumption
3.3. Reinsurance and dividend management
3.4. Technology diffusion
3.5. Queueing systems in heavy traffic
4. Formulations of Stochastic Optimal Control Problems
4.1. Strong formulation
4.2. Weak formulation
5. Existence of Optimal Controls
5.1. A deterministic result
5.2. Existence under strong formulation
5.3. Existence under weak formulation
6. Reachable Sets of Stochastic Control Systems
6.1. Nonconvexity of the reachable sets
6.2. Noncloseness of the reachable sets
7. Other Stochastic Control Models
7.1. Random duration
7.2. Optimal stopping
7.3. Singular and impulse controls
7.4. Risk-sensitive controls
7.5. Ergodic controls
7.6. Partially observable systems
8. Historical Remarks
Chapter 3. Maximum Principle and Stochastic Hamiltonian Systems
1. Introduction
2. The Deterministic Case Revisited
3. Statement of the Stochastic Maximum Principle
3.1. Adjoint equations
3.2. The maximum principle and stochastic Hamiltonian systems
3.3. A worked-out example
4. A Proof of the Maximum Principle
4.1. A moment estimate
4.2. Taylor expansions
4.3. Duality analysis and completion of the proof
5. Sufficient Conditions of Optimality
6. Problems with State Constraints
6.1. Formulation of the problem and the maximum principle
6.2. Some preliminary lemmas
6.3. A proof of Theorem 6.1
7. Historical Remarks
Chapter 4. Dynamic Programming and HJB Equations
1. Introduction
2. The Deterministic Case Revisited
3. The Stochastic Principle of Optimality and the HJB Equation
3.1. A stochastic framework for dynamic programming
3.2. Principle of optimality
3.3. The HJB equation
4. Other Properties of the Value Function
4.1. Continuous dependence on parameters
4.2. Semiconcavity
5. Viscosity Solutions
5.1. Definitions
5.2. Some properties
6. Uniqueness of Viscosity Solutions
6.1. A uniqueness theorem
6.2. Proofs of Lemmas 6.6 and 6.7
7. Historical Remarks
Chapter 5. The Relationship Between the Maximum Principle and Dynamic Programming
1. Introduction
2. Classical Hamilton-Jacobi Theory
3. Relationship for Deterministic Systems
3.1. Adjoint variable and value function: Smooth case
3.2. Economic interpretation
3.3. Methods of characteristics and the Feynman-Kac formula
3.4. Adjoint variable and value function: Nonsmooth case
3.5. Verification theorems
4. Relationship for Stochastic Systems
4.1. Smooth case
4.2. Nonsmooth case: Differentials in the spatial variable
4.3. Nonsmooth case: Differentials in the time variable
5. Stochastic Verification Theorems
5.1. Smooth case
5.2. Nonsmooth case
6. Optimal Feedback Controls
7. Historical Remarks
Chapter 6. Linear Quadratic Optimal Control Problems
1. Introduction
2. The Deterministic LQ Problems Revisited
2.1. Formulation
2.2. A minimization problem of a quadratic functional
2.3. A linear Hamiltonian system
2.4. The Riccati equation and feedback optimal control
3. Formulation of Stochastic LQ Problems
3.1. Statement of the problems
3.2. Examples
4. Finiteness and Solvability
5. A Necessary Condition and a Hamiltonian System
6. Stochastic Riccati Equations
7. Global Solvability of Stochastic Riccati Equations
7.1. Existence: The standard case
7.2. Existence: The case C = 0, S = 0, and Q, G ≥ 0
7.3. Existence: The one-dimensional case
8. A Mean-Variance Portfolio Selection Problem
9. Historical Remarks
Chapter 7. Backward Stochastic Differential Equations
1. Introduction
2. Linear Backward Stochastic Differential Equations
3. Nonlinear Backward Stochastic Differential Equations
3.1. BSDEs in finite deterministic durations: Method of contraction mapping
3.2. BSDEs in random durations: Method of continuation
4. Feynman-Kac-Type Formulae
4.1. Representation via SDEs
4.2. Representation via BSDEs
5. Forward-Backward Stochastic Differential Equations
5.1. General formulation and nonsolvability
5.2. The four-step scheme, a heuristic derivation
5.3. Several solvable classes of FBSDEs
6. Option Pricing Problems
6.1. European call options and the Black-Scholes formula
6.2. Other options
7. Historical Remarks
References
Index