Minimax Set Example
This question is adapted from a past qualifying exam question. Let $\lambda$ denote Lebesgue measure.
Consider the choice of a set estimator $C_X$ for a parameter $\theta \in \mathbb{R}$ based on one observation $X$ from the $N(\theta, 1)$ distribution, using the loss function $L(\theta, C) = \frac{1}{4}\lambda(C) - \mathbb{1}\{\theta \in C\}$. Show that the set $C_X = [X - c_0, X + c_0]$ is minimax for a suitably chosen constant $c_0$, and find this $c_0$.
Solution
We will show that $C_X$ is minimax by showing that it is an equalizer decision rule and that it is extended Bayes. By Theorem 2.2 of Essentials of Statistical Inference by Young and Smith, these two conditions are sufficient for minimaxity.
Sometimes, minimaxity can be shown directly. More often, invoking this theorem is the right approach to take.
Equalizer
Let $\Phi$ denote the standard normal cdf.
Given any $c_0$, the risk of the decision rule is
$$R(\theta, C_X) = \mathbb{E}_\theta\!\left[\tfrac{1}{4}\lambda(C_X) - \mathbb{1}\{\theta \in C_X\}\right] = \frac{2c_0}{4} - P_\theta\left(|X - \theta| \le c_0\right) = \frac{c_0}{2} - \left(2\Phi(c_0) - 1\right),$$
which is the same for all $\theta$; that is, $C_X$ is an equalizer rule.
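As a quick numerical sanity check (not part of the original solution), here is a short Python sketch that estimates this risk by Monte Carlo at a few values of $\theta$ and compares against the closed form; the value of $c_0$ used here is arbitrary.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
c0 = 0.7  # any fixed half-width works for the equalizer check

def mc_risk(theta, n=500_000):
    """Monte Carlo estimate of E_theta[ lambda(C_X)/4 - 1{theta in C_X} ]."""
    x = rng.normal(theta, 1.0, size=n)
    covered = np.abs(x - theta) <= c0  # theta in [X - c0, X + c0] iff |X - theta| <= c0
    return (2 * c0) / 4 - covered.mean()

exact = c0 / 2 - (2 * norm.cdf(c0) - 1)  # closed-form risk, free of theta
for theta in [-3.0, 0.0, 5.0]:
    print(f"theta={theta:+.1f}  MC risk={mc_risk(theta):+.4f}  exact={exact:+.4f}")
```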
Extended Bayes
Consider a $N(0, \tau^2)$ prior distribution on $\theta$. Then the posterior distribution is
$$\pi(\theta \mid X) \propto \exp\!\left(-\frac{(X - \theta)^2}{2}\right)\exp\!\left(-\frac{\theta^2}{2\tau^2}\right) \propto \exp\!\left(-\frac{(\theta - tX)^2}{2t}\right), \qquad t = \frac{\tau^2}{1 + \tau^2},$$
which is $N(tX, t)$.
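A grid-based check of this conjugate-posterior calculation can be reassuring; the sketch below (with hypothetical values for $\tau^2$ and $X$) normalizes the unnormalized posterior numerically and compares its mean and variance with $tX$ and $t$.

```python
import numpy as np

tau2, x = 4.0, 1.3        # hypothetical prior variance and observation
t = tau2 / (1 + tau2)     # shrinkage factor: the posterior should be N(t*x, t)

theta = np.linspace(-10, 10, 200_001)
step = theta[1] - theta[0]
post = np.exp(-(x - theta) ** 2 / 2 - theta ** 2 / (2 * tau2))  # unnormalized
post /= post.sum() * step                                       # normalize on the grid

mean = (theta * post).sum() * step
var = ((theta - mean) ** 2 * post).sum() * step
print(mean, t * x)   # posterior mean vs t*X
print(var, t)        # posterior variance vs t
```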
To minimize the Bayes risk, it is sufficient to minimize the expected posterior loss $\frac{1}{4}\lambda(C) - P(\theta \in C \mid X)$ for each $X$.
For any fixed $\lambda(C)$, the (normal) posterior measure of $C$ is maximal when $C$ is an interval centered at the posterior mean $tX$, because the posterior density is unimodal and symmetric about $tX$. For such an interval of width $w = \lambda(C)$, the expected posterior loss is
$$\frac{w}{4} - \left(2\Phi\!\left(\frac{w}{2\sqrt{t}}\right) - 1\right).$$
Taking the derivative of this expression with respect to $w$ gives $\frac{1}{4} - \frac{1}{\sqrt{t}}\,\phi\!\left(\frac{w}{2\sqrt{t}}\right)$, so the expression is minimized at $\lambda(C) = 2\sqrt{t}\,\phi^{-1}\!\left(\sqrt{t}/4\right)$, where $\phi^{-1}$ is the inverse of the standard normal pdf restricted to $[0, \infty)$. Plugging in the interval with this width, centered at $tX$, we find the infimum expected posterior loss to be
$$\frac{\sqrt{t}}{2}\,\phi^{-1}\!\left(\frac{\sqrt{t}}{4}\right) - \left(2\Phi\!\left(\phi^{-1}\!\left(\frac{\sqrt{t}}{4}\right)\right) - 1\right).$$
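The closed form for the minimizing width can be checked numerically; the following sketch (with a hypothetical value of $t$) minimizes the expected posterior loss over the width $w$ and compares with $2\sqrt{t}\,\phi^{-1}(\sqrt{t}/4)$, inverting $\phi(z) = e^{-z^2/2}/\sqrt{2\pi}$ on $[0, \infty)$ directly.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

t = 0.8  # hypothetical posterior variance t = tau^2 / (1 + tau^2)

def post_loss(w):
    """Expected posterior loss of an interval of width w centered at t*X."""
    return w / 4 - (2 * norm.cdf(w / (2 * np.sqrt(t))) - 1)

res = minimize_scalar(post_loss, bounds=(0.0, 10.0), method="bounded")

# phi^{-1}(p) on [0, inf): solve exp(-z^2/2)/sqrt(2*pi) = p for z >= 0
def phi_inv(p):
    return np.sqrt(-2 * np.log(p * np.sqrt(2 * np.pi)))

w_star = 2 * np.sqrt(t) * phi_inv(np.sqrt(t) / 4)
print(res.x, w_star)  # numerical and closed-form minimizers should agree
```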
Because this expression is continuous in $t$, and because $t \rightarrow 1$ as $\tau^2 \rightarrow \infty$, it converges (monotonically increasing) to
$$L = \frac{1}{2}\,\phi^{-1}(1/4) - \left(2\Phi\!\left(\phi^{-1}(1/4)\right) - 1\right).$$
In other words, given any $\epsilon > 0$ there exists a $\tau_1$ such that when $\tau > \tau_1$, the infimum expected posterior loss lies in $[L - \epsilon, L]$. Because this quantity does not depend on $X$, its expectation over the marginal distribution of $X$ lies in the same interval. Therefore, for any $\tau > \tau_1$, the infimum Bayes risk is in $[L - \epsilon, L]$.
Now, consider the interval $C_X$ with $c_0 = \phi^{-1}(1/4)$ (approximately $0.97$). Because it is an equalizer rule, its Bayes risk is equal to its constant risk. Plugging this $c_0$ into the constant risk found in the previous part, we find that $C_X$ has Bayes risk exactly $L$.
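For concreteness, a two-line computation (again just a sketch) recovers $c_0 = \phi^{-1}(1/4)$ and the minimax risk $L$ numerically:

```python
import numpy as np
from scipy.stats import norm

# c0 solves phi(c0) = 1/4 on [0, inf), i.e. exp(-c0^2/2)/sqrt(2*pi) = 1/4
c0 = np.sqrt(-2 * np.log(np.sqrt(2 * np.pi) / 4))
L = c0 / 2 - (2 * norm.cdf(c0) - 1)   # the constant (minimax) risk
print(c0, L)                          # c0 ~ 0.9666, L ~ -0.183
```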
If $\tau > \tau_1$, then the infimum Bayes risk is within $\epsilon$ of $L$, which is the Bayes risk of $C_X$. Since $\epsilon > 0$ was arbitrary, $C_X = [X - \phi^{-1}(1/4), X + \phi^{-1}(1/4)]$ is extended Bayes, and thus minimax.