Channel: Active questions tagged real-analysis - Mathematics Stack Exchange

Seber, Ex. 1.b.3) - Unconstrained optimization problem


In one of my previous questions, I tried to solve the following exercise:

[image: the exercise statement from Seber]

Now the solution to exercise 3a) mentions that one may alternatively substitute $w_n = 1 - \sum_{i=1}^{n-1} w_i$ and turn the problem into an unconstrained optimization problem (which was also pointed out by a Stack Exchange member in the comments of my previous question). However, now I'm stuck.

So far I have formulated the variance as $var(\overline X) = v(w_1,\dots,w_{n-1}) = \sum_{i=1}^n w_i^2 \sigma_i^2$. Since $\sum_{i=1}^n w_i = 1$, we have $w_n = 1 - \sum_{j=1}^{n-1} w_j$, so I can express $var(\overline X)$ as a function of $w_1,\dots,w_{n-1}$ alone.
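Written out after the substitution, the unconstrained objective is

$$v(w_1,\dots,w_{n-1}) = \sum_{i=1}^{n-1} w_i^2 \sigma_i^2 + \Bigl(1 - \sum_{j=1}^{n-1} w_j\Bigr)^2 \sigma_n^2.$$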

Then I tried to compute the gradient componentwise. Since $\frac{\partial w_n}{\partial w_i} = -1$ for $i = 1,\dots,n-1$, I get $\frac{\partial v}{\partial w_i} = 2w_i \sigma_i^2 + \frac{\partial}{\partial w_i}\Bigl(w_n ^2 \sigma_n ^2\Bigr) = 2w_i \sigma_i^2 + 2\sigma_n^2 w_n \frac{\partial w_n}{\partial w_i} = 2w_i\sigma_i^2 - 2\sigma_n^2 w_n$.
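The partial derivatives above can be sanity-checked numerically; here is a minimal sketch comparing the analytic gradient against central finite differences, with made-up values for the $\sigma_i^2$ and the free weights (nothing below comes from Seber):

```python
import numpy as np

# Arbitrary illustrative values: sigma_i^2 for n = 4, and the n-1 free weights.
sigma2 = np.array([1.0, 2.0, 0.5, 3.0])
w_free = np.array([0.2, 0.3, 0.1])        # w_1, ..., w_{n-1}

def v(w_free):
    """Substituted objective: sum w_i^2 sigma_i^2 with w_n = 1 - sum w_j."""
    w_n = 1.0 - w_free.sum()
    return (w_free**2 * sigma2[:-1]).sum() + w_n**2 * sigma2[-1]

# Analytic gradient: dv/dw_i = 2 w_i sigma_i^2 - 2 sigma_n^2 w_n
w_n = 1.0 - w_free.sum()
grad_analytic = 2 * w_free * sigma2[:-1] - 2 * sigma2[-1] * w_n

# Central finite differences along each coordinate direction
eps = 1e-6
grad_numeric = np.array([
    (v(w_free + eps * e) - v(w_free - eps * e)) / (2 * eps)
    for e in np.eye(len(w_free))
])

print(np.allclose(grad_analytic, grad_numeric, atol=1e-6))  # True
```

Since $v$ is quadratic, the central difference is exact up to rounding, so the two gradients agree to high precision.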

At the minimum, the gradient has to equal the zero vector, so I set each component equal to $0$:

$2w_i\sigma_i^2 - 2\sigma_n^2 w_n = 0 \implies 2w_i\sigma_i^2 = 2\sigma_n^2 w_n \implies w_i\sigma_i^2 = \sigma_n^2 w_n$.

This, however, looks very different from Seber's model solution: [image: Seber's model solution]

So my question is: what am I missing? I have already tried some further simplification/substitution, but it led me nowhere. Please note that I have already solved the problem using Lagrange multipliers; right now I'm just trying to explore the alternative approach.
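For reference, pushing the stationarity condition one step further does lead to the familiar answer (the same inverse-variance weights the Lagrangian approach yields): the condition $w_i\sigma_i^2 = \sigma_n^2 w_n$ gives $w_i = \sigma_n^2 w_n / \sigma_i^2$ for $i = 1,\dots,n-1$, and it holds trivially for $i = n$ as well. Summing over all $i$ and using $\sum_{i=1}^n w_i = 1$ yields

$$\sigma_n^2 w_n = \frac{1}{\sum_{j=1}^n \sigma_j^{-2}}, \qquad w_i = \frac{\sigma_i^{-2}}{\sum_{j=1}^n \sigma_j^{-2}}.$$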

