Channel: Active questions tagged real-analysis - Mathematics Stack Exchange

Intuition behind exponential convergence (e-convergence)


I'm studying a concept called e-convergence for sequences of probability densities. The definition states:

A sequence $(g_n)_{n \in \mathbb{N}}$ in $M_{\mu}$ is e-convergent to $g$ if:

  1. $(g_n)_{n \in \mathbb{N}}$ tends to $g$ in $\mu$-probability as $n \to \infty$, and
  2. The sequences $(\frac{g_n}{g})_{n \in \mathbb{N}}$ and $(\frac{g}{g_n})_{n \in \mathbb{N}}$ are eventually bounded in each $L^p(g)$ for $p > 1$.

Formally, for all $p > 1$:

$\limsup_{n \to \infty} E_g\left[\left(\frac{g_n}{g}\right)^p\right] < +\infty \quad \text{and} \quad \limsup_{n \to \infty} E_g\left[\left(\frac{g}{g_n}\right)^p\right] < +\infty$
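
To make condition 2 concrete for myself, I tried a toy example of my own (not from the text I'm reading): on $[0,1]$ with Lebesgue measure, take $g \equiv 1$ and $g_n(x) = (1 + 1/n)\,x^{1/n}$, which tends to $g$ in probability. The $L^p(g)$ moments of both ratios have closed forms, and this little script checks they stay bounded:

```python
import math

# Toy check of the e-convergence moment conditions (my own example):
# on [0, 1] with Lebesgue measure, g(x) = 1 and
# g_n(x) = (1 + 1/n) * x**(1/n), a probability density tending to g
# in probability as n -> infinity.
#
# Closed forms for the moments of the two ratios under g:
#   E_g[(g_n/g)^p] = (1 + 1/n)^p  / (1 + p/n)
#   E_g[(g/g_n)^p] = (1 + 1/n)^-p / (1 - p/n)   (finite once n > p)

def moment_gn_over_g(n: int, p: float) -> float:
    """E_g[(g_n/g)^p] = integral_0^1 ((1+1/n) x^(1/n))^p dx."""
    return (1 + 1 / n) ** p / (1 + p / n)

def moment_g_over_gn(n: int, p: float) -> float:
    """E_g[(g/g_n)^p] = integral_0^1 ((1+1/n) x^(1/n))^-p dx, needs n > p."""
    assert n > p, "moment is infinite for n <= p"
    return (1 + 1 / n) ** (-p) / (1 - p / n)

for p in (2.0, 5.0):
    for n in (10, 100, 1000):
        a = moment_gn_over_g(n, p)
        b = moment_g_over_gn(n, p)
        print(f"p={p}, n={n}: E[(g_n/g)^p]={a:.5f}, E[(g/g_n)^p]={b:.5f}")
```

For each fixed $p$ both moment sequences are eventually finite and tend to $1$, so this family seems to satisfy the definition. What I can't produce is a natural example where convergence in probability holds but these moments blow up, which is what my questions below are about.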

I understand that the first condition ensures convergence in probability, but I'm struggling to grasp the intuition behind the ratio conditions. Specifically:

  1. Why are we considering both $\frac{g_n}{g}$ and $\frac{g}{g_n}$? Wouldn't one ratio be sufficient?
  2. What's the significance of these ratios being bounded in $L^p(g)$ for all $p > 1$? How does this differ from just considering $p = 1$ or $p = 2$?
  3. How do these conditions relate to the concept of uniform integrability?
  4. Can you provide an example of a sequence that converges in probability but fails to be e-convergent due to these ratio conditions?

I'm looking for both mathematical rigor and intuitive explanations to help me understand the motivation behind this definition.

