I'm studying a concept called e-convergence for sequences of probability densities. The definition states:
A sequence $(g_n)_{n \in \mathbb{N}}$ in $M_{\mu}$ is e-convergent to $g$ if:
- $(g_n)_{n \in \mathbb{N}}$ tends to $g$ in $\mu$-probability as $n \to \infty$, and
- The sequences $(\frac{g_n}{g})_{n \in \mathbb{N}}$ and $(\frac{g}{g_n})_{n \in \mathbb{N}}$ are eventually bounded in each $L^p(g)$ for $p > 1$.
Formally, for all $p > 1$:
$\limsup_{n \to \infty} E_g\!\left[\left(\frac{g_n}{g}\right)^{p}\right] < +\infty$ and $\limsup_{n \to \infty} E_g\!\left[\left(\frac{g}{g_n}\right)^{p}\right] < +\infty$
(I write $\limsup$ rather than $\lim$, since the limits need not exist; "eventually bounded" should mean the $p$-th moments stay finite and bounded for all large $n$.)
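To make the definition concrete for myself, I wrote a quick Monte Carlo sketch for a toy example of my own (not from any text): $g = N(0,1)$ and $g_n = N(\mu_n, 1)$ with $\mu_n = 1/n$. Here $\frac{g_n}{g}(x) = \exp(\mu_n x - \mu_n^2/2)$, and a Gaussian moment computation gives $E_g[(g_n/g)^p] = e^{p(p-1)\mu_n^2/2}$ and $E_g[(g/g_n)^p] = e^{p(p+1)\mu_n^2/2}$, both finite for every $p$ and tending to $1$, so this sequence should be e-convergent:

```python
import math
import random

def moments(mu, p, samples=200_000, seed=0):
    """Monte Carlo estimate of E_g[(g_n/g)^p] and E_g[(g/g_n)^p]
    for g = N(0,1), g_n = N(mu,1), where g_n/g = exp(mu*x - mu^2/2)."""
    rng = random.Random(seed)
    up = down = 0.0
    for _ in range(samples):
        x = rng.gauss(0.0, 1.0)
        r = math.exp(mu * x - mu * mu / 2.0)  # likelihood ratio g_n(x)/g(x)
        up += r ** p
        down += r ** (-p)
    return up / samples, down / samples

# Compare estimates with the closed forms exp(p(p-1)mu^2/2), exp(p(p+1)mu^2/2).
# (Monte Carlo gets noisy when mu is large, so I keep mu = 1/n small-ish.)
p = 3
for n in (2, 10, 100):
    mu = 1.0 / n
    est_up, est_down = moments(mu, p)
    exact_up = math.exp(p * (p - 1) * mu * mu / 2)
    exact_down = math.exp(p * (p + 1) * mu * mu / 2)
    print(n, round(est_up, 3), round(exact_up, 3),
          round(est_down, 3), round(exact_down, 3))
```

Both moment sequences settle toward $1$, which at least matches my reading of the eventual-boundedness condition.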
I understand that the first condition ensures convergence in probability, but I'm struggling to grasp the intuition behind the ratio conditions. Specifically:
- Why are we considering both $\frac{g_n}{g}$ and $\frac{g}{g_n}$? Wouldn't one ratio be sufficient?
- What's the significance of these ratios being bounded in $L^p(g)$ for all $p > 1$? How does this differ from just considering $p = 1$ or $p = 2$?
- How do these conditions relate to the concept of uniform integrability?
- Can you provide an example of a sequence that converges in probability but fails to be e-convergent due to these ratio conditions?
I'm looking for both mathematical rigor and intuitive explanations to help me understand the motivation behind this definition.
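For the last bullet, here is a candidate I constructed myself (so it may be flawed): on $(0,1)$ with Lebesgue $\mu$, take $g = 1$ and $g_n = n$ on $(0, 1/n^2)$, $g_n = n/(n+1)$ on $(1/n^2, 1)$. The total mass is $n \cdot n^{-2} + \frac{n}{n+1}(1 - n^{-2}) = \frac{1}{n} + \frac{n-1}{n} = 1$, and $g_n \to g$ in $\mu$-probability. But $E_g[(g_n/g)^p] = n^{p-2} + \left(\frac{n}{n+1}\right)^p (1 - n^{-2})$, which stays bounded for $p < 2$ yet blows up like $n^{p-2}$ for $p > 2$, so (if I've understood the definition) it fails the "every $p > 1$" requirement:

```python
def moment_up(n, p):
    """E_g[(g_n/g)^p] for the piecewise-constant candidate above:
    g_n = n on (0, 1/n^2) and g_n = n/(n+1) on (1/n^2, 1), with g = 1."""
    spike = n ** p * n ** -2                   # contribution of the tall spike
    flat = (n / (n + 1)) ** p * (1 - n ** -2)  # contribution of the rest
    return spike + flat

# Bounded for p = 1.5, divergent for p = 3:
for p in (1.5, 3.0):
    print(p, [round(moment_up(n, p), 3) for n in (10, 100, 1000)])
```

Note that the reverse ratio $g/g_n$ is at most $\max(1/n, (n+1)/n) \le 2$, so it stays bounded in every $L^p(g)$; the failure here comes entirely from $g_n/g$, which is part of why I'm asking whether both ratios are really needed.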