We use theorems such as the Monotone Convergence, Dominated Convergence, and Bounded Convergence theorems to show that when a sequence of measurable functions $f_n$ on a measure space $X$ converges to some function $f$ almost everywhere, then, under the appropriate additional hypotheses, we can assert the corresponding claim
$$\int_X f_n \,\, d\mu\to \int_X f \,\, d\mu$$
I'm wondering what the fundamental error in the following line of reasoning is: Suppose $f_n \to f$ a.e. in $X$. Then, a.e., $|f_n - f| \to 0$. Then by linearity and the triangle inequality for integrals,
$$\left| \int_X f_n \,\, d\mu - \int_X f \,\, d\mu \right| =\left| \int_X f_n - f\,\, d\mu \right| \leq \int_X |f_n-f| \, \, d\mu \tag{1}$$
And so, since $|f_n-f|$ can be made arbitrarily small almost everywhere, one might think that the final integral vanishes in the limit, and hence that the LHS goes to $0$. What are the various situations in which this line of reasoning breaks down? One that comes to mind is when, for some fixed $x$, the sequence of values $f_n(x)$ grows very quickly toward some ill-defined $f(x)$; this idea is vaguely reminiscent of the Dirac delta function. Could an example be provided in terms of the inequality $(1)$?
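For concreteness, here is a candidate I have been considering (I am not sure it is the standard example, so please correct me if it misses the point): take $X = [0,1]$ with Lebesgue measure $\mu$ and
$$f_n = n\,\chi_{(0,\,1/n)}, \qquad f \equiv 0,$$
so that $f_n \to f$ everywhere on $X$, yet
$$\int_X |f_n - f| \,\, d\mu = \int_0^{1/n} n \,\, d\mu = 1 \quad \text{for every } n.$$
Here the right-hand side of $(1)$ never becomes small, even though $|f_n - f| \to 0$ pointwise. Is this the kind of failure described above, and is there a cleaner way to phrase what goes wrong in terms of $(1)$?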