Channel: Active questions tagged real-analysis - Mathematics Stack Exchange

How does this optimal x change with this parameter?


Consider an optimization problem

$$\max_x V = f(x,a) + g(x,a)$$

where $\frac{\partial^2 f}{\partial x^2},\frac{\partial^2 g}{\partial x^2} < 0$ (both functions strictly concave in $x$).

Let $x^*$ (the optimal $x$) be such that

$$\frac{\partial f(x^*,a)}{\partial x} + \frac{\partial g(x^*,a)}{\partial x} = 0$$

and $\frac{\partial f(x^*,a)}{\partial x} < 0$ and $\frac{\partial g(x^*,a)}{\partial x} > 0$.

Show that if $\frac{\partial}{\partial a} \Big[\frac{f(x^*,a)}{f(x^*,a) + g(x^*,a)}\Big] < 0$,

then

$$\frac{\partial x^*}{\partial a}>0.$$
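To sanity-check the claim numerically, here is a small experiment (a sketch, not part of the question). The functions $f(x,a) = 1-(x-a)^2$ and $g(x,a) = a-(x-1)^2$ are assumed purely for illustration; for $a \in (0,1)$ they satisfy the concavity and sign conditions above, the share condition holds, and $\frac{\partial x^*}{\partial a} > 0$ as claimed.

```python
# Numerical check on one concrete (assumed) instance of the problem:
#   f(x, a) = 1 - (x - a)^2   (concave in x, peak shifts right with a)
#   g(x, a) = a - (x - 1)^2   (concave in x, peaked at x = 1)
# For 0 < a < 1 the optimum satisfies f_x(x*) < 0 and g_x(x*) > 0.

def f(x, a):
    return 1.0 - (x - a) ** 2

def g(x, a):
    return a - (x - 1.0) ** 2

def x_star(a):
    """Solve the FOC f_x + g_x = 0 by bisection on [a, 1]."""
    def foc(x):
        return -2.0 * (x - a) - 2.0 * (x - 1.0)
    lo, hi = a, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if foc(mid) > 0:
            lo = mid          # root lies to the right (foc is decreasing)
        else:
            hi = mid
    return 0.5 * (lo + hi)

a, h = 0.5, 1e-6
xs = x_star(a)

# Hypothesis: d/da [ f / (f + g) ] < 0, holding x fixed at x*(a).
def ratio(aa):
    return f(xs, aa) / (f(xs, aa) + g(xs, aa))

d_ratio = (ratio(a + h) - ratio(a - h)) / (2 * h)

# Conclusion: dx*/da > 0, via central finite difference on x*(a).
d_xstar = (x_star(a + h) - x_star(a - h)) / (2 * h)

print(d_ratio < 0, d_xstar > 0)  # → True True
```

For these choices the FOC is linear, so $x^* = (a+1)/2$ in closed form and the finite differences can be checked by hand ($\partial x^*/\partial a = 1/2$). Of course one instance only illustrates the claim; it does not prove it.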

MY ATTEMPT:

Consider a different problem:

$$ \max_x S = wf(x,a) + (1-w)g(x,a) $$

for some $w \in (0,1)$. The first-order condition for this problem is

$$L = w\frac{\partial f(x^*,a)}{\partial x} + (1-w)\frac{\partial g(x^*,a)}{\partial x} = 0 $$

Then consider

$$\frac{\partial x^*}{\partial w} = -\frac{\frac{\partial L}{\partial w}}{\frac{\partial L}{\partial x}}$$

The denominator is $\frac{\partial L}{\partial x} = w\frac{\partial^2 f}{\partial x^2} + (1-w)\frac{\partial^2 g}{\partial x^2}$, which is negative by concavity. The numerator is $\frac{\partial L}{\partial w} = \frac{\partial f(x^*,a)}{\partial x} - \frac{\partial g(x^*,a)}{\partial x}$, which is negative whenever the $x^*$ of this problem coincides with the $x^*$ of the original problem (where the first partial is negative and the second positive). Hence $\frac{\partial x^*}{\partial w} < 0$. For instance, start from $w = 1/2$, which recovers the original problem up to a scaling of the objective; as $w$ increases, $x^*$ decreases. Since an increase in $a$ decreases the share of $f(x^*,a)$ in the objective, the idea is that this acts like a decrease in $w$, which would raise $x^*$, and the proof would be done. I know this is not rigorous, but do you see what I am going for? Help towards a more rigorous proof would be delightful!
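A more rigorous route (a standard comparative-statics step, assuming $f$ and $g$ are twice continuously differentiable and $x^*(a)$ is differentiable) is to apply the implicit function theorem directly to the original first-order condition $F(x,a) := \frac{\partial f}{\partial x}(x,a) + \frac{\partial g}{\partial x}(x,a) = 0$:

$$\frac{\partial x^*}{\partial a} = -\frac{\partial F/\partial a}{\partial F/\partial x} = -\frac{\frac{\partial^2 f}{\partial x\,\partial a}(x^*,a) + \frac{\partial^2 g}{\partial x\,\partial a}(x^*,a)}{\frac{\partial^2 f}{\partial x^2}(x^*,a) + \frac{\partial^2 g}{\partial x^2}(x^*,a)}.$$

The denominator is negative by concavity, so $\frac{\partial x^*}{\partial a} > 0$ exactly when $\frac{\partial^2 f}{\partial x\,\partial a} + \frac{\partial^2 g}{\partial x\,\partial a} > 0$ at $(x^*,a)$. The remaining (and nontrivial) step would be to relate the sign of these cross-partials to the assumed share condition $\frac{\partial}{\partial a}\big[\frac{f}{f+g}\big] < 0$.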
