In this dissertation, we study three topics under a common theme: nonlinear expectations related to zero-sum stochastic differential games. To develop this nonlinear expectation, we first study the stochastic game problem in which both players use feedback controls. This is in contrast with the standard literature, where the formulation of strategies versus controls is typically used; such an approach has the drawback of creating an asymmetry between the two players. Using feedback controls for both players, we prove the existence of the game value while preserving the symmetry of the game. Moreover, we allow for a non-Markovian structure and characterize the value process as the unique viscosity solution of the path-dependent Bellman-Isaacs equation.

By the dynamic programming principle, the game value process can be viewed as a filtration-consistent nonlinear expectation. Moreover, this nonlinear expectation is dominated by the G-expectation that arises naturally from the game setting. It follows that the game value process is a G-submartingale. It is natural to conjecture that a G-submartingale is a semimartingale under each probability measure P in the family that composes the G-expectation. Therefore, as our second topic, we study norm estimates for semimartingales. We introduce two new types of norms: the first characterizes square-integrable semimartingales; the second characterizes the absolute continuity of the finite-variation part with respect to the Lebesgue measure. As an application of the first norm, we obtain the Doob-Meyer decomposition for G-submartingales.

Finally, we study the well-posedness of doubly reflected backward stochastic differential equations (DRBSDEs) and establish a priori estimates for DRBSDEs without imposing Mokobodzki's condition.
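To fix notation for the second topic, the following is the standard representation of the G-expectation and the G-submartingale property from Peng's theory; it is a sketch of the usual setup and may differ in detail from the exact construction used in the dissertation:

```latex
% Standard representation of the G-expectation as an upper expectation
% over a family \mathcal{P} of (possibly mutually singular) probability
% measures; \mathcal{P} is the family referred to in the text as
% "composing" the G-expectation:
\mathbb{E}^G[\xi] \;=\; \sup_{P \in \mathcal{P}} E^P[\xi].

% A process (Y_t) is a G-submartingale if, for all s \le t,
\mathbb{E}^G\!\left[\, Y_t \,\middle|\, \mathcal{F}_s \right] \;\ge\; Y_s
\quad \text{quasi-surely},

% and the conjecture discussed above asserts that such a Y is a
% semimartingale under each P \in \mathcal{P}.
```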
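For reference, the textbook formulation of a doubly reflected BSDE (not necessarily the exact generality studied in the third topic) seeks a quadruple $(Y, Z, K^+, K^-)$ satisfying:

```latex
% Standard DRBSDE with lower obstacle L and upper obstacle U:
Y_t \;=\; \xi + \int_t^T f(s, Y_s, Z_s)\,ds
        + \left(K_T^+ - K_t^+\right) - \left(K_T^- - K_t^-\right)
        - \int_t^T Z_s \, dB_s,
\qquad L_t \le Y_t \le U_t,

% where K^+, K^- are nondecreasing processes subject to the Skorokhod
% minimality conditions, which force K^+ (resp. K^-) to act only when
% Y touches the lower (resp. upper) obstacle:
\int_0^T \left(Y_{t} - L_{t}\right) dK_t^+ \;=\; 0,
\qquad
\int_0^T \left(U_{t} - Y_{t}\right) dK_t^- \;=\; 0.
```

In the classical well-posedness theory, Mokobodzki's condition requires the existence of a difference of nonnegative supermartingales lying between the obstacles $L$ and $U$; the a priori estimates mentioned above dispense with this assumption.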