Rejection Sampling
An earlier post discussed inverse transform sampling. Another way to sample from a known pdf is rejection sampling. First, find a density $f$ that you can sample from and that can be scaled to lie above the density of interest $h$. Next, find a value $M$ such that $Mf \geq h$ (i.e. $M f(x) \geq h(x) \; \forall x \in \R$). Preferably, you should find the smallest such $M$, to make the algorithm as fast as possible. Finally, sample from $f$, and keep each sampled value $x_i$ with probability $h(x_i)/(M f(x_i))$.
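To make the recipe concrete, here is a minimal sketch in R; the target density $h$, the proposal density $f$, the proposal sampler $rf$, and the constant $M$ are all assumed to be supplied by the user, and both densities are assumed to be vectorized. The last two lines show a standard usage example, drawing standard normals with a Cauchy proposal, for which the smallest valid $M$ is $\sqrt{2\pi/e} \approx 1.52$.

```r
# Generic rejection sampler: draws n values from the target density h using
# proposals from f, where M is chosen so that M * f(x) >= h(x) everywhere.
rejection_sample <- function(n, h, f, rf, M) {
  out <- numeric(0)
  while (length(out) < n) {
    x <- rf(n)                      # candidate draws from the proposal
    u <- runif(n)                   # uniform variates for the accept/reject step
    keep <- u <= h(x) / (M * f(x))  # accept x with probability h(x) / (M f(x))
    out <- c(out, x[keep])
  }
  out[1:n]
}

# Example: draw standard normals with a Cauchy proposal; here M = sqrt(2 * pi / e).
z <- rejection_sample(1000, dnorm, dcauchy, rcauchy, M = sqrt(2 * pi / exp(1)))
```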
Example Using Exponential Distribution
Suppose, as in the previous post, we wish to sample from the pdf $h$.
For this analysis, let’s assume the parameter $m$ is $.3$. To perform our rejection sampling, let’s use a shifted exponential distribution as our source of random values (credit: Jay Emerson).
Now, we want to find the smallest $M$ such that $M$ times our exponential density is at least $h$ for all $x$. We also have the freedom to adjust the exponential rate parameter if we want. A fundamental result in rejection sampling is that the overall acceptance rate of a procedure is exactly $1/M$. Therefore, let’s try to find the rate parameter $\lambda>0$ for which the smallest $M$ will suffice. This will give us the procedure with the highest acceptance rate (fewest “wasted” trials) of any procedure in this shifted exponential class.
Our shifted exponential density will be of the form $f(x; \lambda) = \lambda e^{-\lambda (x-m)} \I{x \geq m}$. Assuming $x \in [m, 1]$, we need $M$ to satisfy

$$M \geq \frac{h(x)}{f(x; \lambda)} = \frac{h(x)}{\lambda e^{-\lambda (x - m)}}.$$
For a given $\lambda$, we want $M$ to be the smallest value satisfying the above condition, so we will let $M$ be the supremum over $x$ of the expression on the right-hand side, which I will call $g(x, \lambda)$. By differentiating $g$ with respect to $x$, we can see that it is convex on the interval we’re considering; therefore, its maximum must occur at an endpoint. Now, let’s define

$$S(\lambda) = \sup_{x \in [m, 1]} g(x, \lambda).$$
Because the supremum occurs at an endpoint, the only two possibilities for $S(\lambda)$ are $g(m, \lambda)$ and $g(1, \lambda)$. By substituting these two $x$ values and comparing, it can be shown that
By differentiating with respect to $\lambda$, we find that $S$ is minimized at $\lambda = -\tfrac{3}{1-m} \log m$, which is approximately $5.16$ when $m=.3$. This corresponds to an $M$ value of about 1.42 and an acceptance rate of about 70 percent. The acceptance rate as a function of $\lambda$ is plotted below, along with a dotted line showing simulated results.
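Putting these pieces together, here is a hedged sketch of the sampler in R. The density $h$ is not reproduced here; it is assumed to be available as a vectorized function returning 0 outside $[m, 1]$, and $M$ is computed from the endpoint values of $h(x)/f(x;\lambda)$ (using the endpoint argument above) rather than from the closed-form expression for $S(\lambda)$.

```r
# Sketch: rejection sampling from h with a shifted Exp(lambda) proposal on [m, 1].
# h is assumed to be the target density from the earlier post, returning 0 outside [m, 1].
m      <- 0.3
lambda <- -3 / (1 - m) * log(m)         # the optimal rate derived above, about 5.16
f      <- function(x) dexp(x - m, rate = lambda)

# The supremum of h(x) / f(x) occurs at an endpoint, so M is the larger endpoint value
# (about 1.42 at the optimal lambda, per the text).
M <- max(h(m) / f(m), h(1) / f(1))

x     <- m + rexp(10000, rate = lambda) # shifted exponential proposals
u     <- runif(10000)
draws <- x[u <= h(x) / (M * f(x))]      # accepted draws; roughly 70% should survive
```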
It is possible that different rate parameters cause the random number generator to take different amounts of time. However, a quick check suggests this question is not worth pursuing: the simulation below shows that Exp(1) variates are generated at essentially the same rate as Exp(5.16) variates.
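A check along these lines can be as simple as timing a large batch of draws at each rate (timings will of course vary by machine):

```r
# Generation time for exponential variates is essentially the same at both rates.
system.time(rexp(1e7, rate = 1))
system.time(rexp(1e7, rate = 5.16))
```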
Therefore, we can be satisfied that $\lambda=5.16$ is about the best we can do when drawing from $h$ via exponential rejection sampling.
Repeat Using Histograms
Another way to use rejection sampling for a density with bounded support is to draw from a histogram. First, construct a histogram shape that covers the density of interest, then divide each rectangle’s area by the total area to obtain a probability mass function (pmf) over the bins. Next, draw a bin according to that pmf, and uniformly sample a horizontal and vertical position within that bin’s rectangle. If this point is below the density of interest, keep the $x$ value.
We will apply this method to draw from $h$. First, here is a picture of the true density and the histogram-like density if we use ten bins, for example.
We will call the number of bins $r$ (for refinement). It is clear that we can drive the acceptance probability arbitrarily close to 1 by increasing $r$. However, there is a trade-off because the larger $r$ is, the longer it takes to generate each point. We will look for the $r$ that optimizes overall speed. The rsample.h function below samples from $h$ using this histogram method with a given $r$ value.
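The original rsample.h code is not reproduced here; below is a minimal sketch of what such a function might look like. It assumes $h$ is available as a vectorized function returning 0 outside $[m, 1]$, and it uses optimize within each bin to find an (approximate) covering height, which is an implementation choice rather than necessarily the one used in the original post.

```r
# Sketch of a histogram-based rejection sampler for h on [m, 1] with r bins.
# Each bin is capped at (approximately) the maximum of h over that bin, so the
# resulting step function covers h; bins are then selected in proportion to their areas.
rsample.h <- function(n, h, m = 0.3, r = 10) {
  breaks <- seq(m, 1, length.out = r + 1)
  width  <- diff(breaks)
  caps <- sapply(seq_len(r), function(i) {       # approximate covering height per bin
    lo <- breaks[i]; hi <- breaks[i + 1]
    max(h(lo), h(hi), optimize(h, c(lo, hi), maximum = TRUE)$objective)
  })
  pmf <- width * caps / sum(width * caps)        # bin-selection probabilities

  out <- numeric(0)
  while (length(out) < n) {
    bin <- sample.int(r, n, replace = TRUE, prob = pmf)
    x   <- runif(n, breaks[bin], breaks[bin + 1])  # horizontal position within the bin
    y   <- runif(n, 0, caps[bin])                  # vertical position within the bin
    out <- c(out, x[y <= h(x)])                    # keep x if the point falls under h
  }
  out[1:n]
}
```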
Now, we try out a series of $r$ values to determine the time required as a function of $r$.
The algorithm is linear in $r$. The expected time required to reach 1000 accepted draws is approximately the time required to generate 1000 points divided by the acceptance rate. We could find the acceptance rate by conditioning on the bin selected and finding the conditional acceptance rate for each bin. But it is simpler to just use the result invoked earlier: the acceptance rate is $1/M$, where $M$ is the sum of the rectangle areas. In fact, I wrote code for both; the rejectionprob function does it “the hard way,” while acceptanceprob uses the easy method. As a sanity check, you can confirm that the sum of these two functions is 1 for any $r$.
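For the easy method, a sketch of an acceptanceprob-style function (again assuming $h$ is a vectorized density and reusing the same bin caps as the sampler sketch above) could look like the following; the “hard way” rejectionprob computation is omitted here.

```r
# Easy method: h integrates to 1, so M is just the total area of the covering
# rectangles, and the acceptance rate is 1 / M.
acceptanceprob <- function(h, m = 0.3, r = 10) {
  breaks <- seq(m, 1, length.out = r + 1)
  caps <- sapply(seq_len(r), function(i) {
    lo <- breaks[i]; hi <- breaks[i + 1]
    max(h(lo), h(hi), optimize(h, c(lo, hi), maximum = TRUE)$objective)
  })
  1 / sum(diff(breaks) * caps)
}
```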
We want to find the $r$ value that minimizes number-generating time divided by acceptance rate. Because it is a discrete problem, all we need to do is evaluate the possible $r$ values.
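Since the rsample.h sketch above loops until the requested number of draws has been accepted, timing it directly already folds in the acceptance rate, so a simple grid evaluation suffices; the grid of $r$ values below is arbitrary.

```r
# Time 1000 accepted draws for a grid of r values and keep the fastest.
rs     <- c(5, 10, 20, 50, 100, 200)
secs   <- sapply(rs, function(r) system.time(rsample.h(1000, h, r = r))["elapsed"])
r.best <- rs[which.min(secs)]
```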
Comparing Speeds of the Exponential and Histogram Algorithms
Lastly, let’s compare the two rejection sampling methods described above. Using the histogram method, we can make the acceptance probability arbitrarily close to 1 simply by increasing the number of bins. That may make it seem like this method is superior to the exponential one. However, the histogram method requires one extra randomization step, so it could still be worse overall. Let’s see how the best exponential method and the best histogram method match up head-to-head in sampling from $h$.
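A head-to-head timing using the hypothetical sketches from earlier (the generic rejection_sample with the shifted exponential proposal at $\lambda \approx 5.16$, and rsample.h at the r.best value selected above) might look like this:

```r
# Time each method on the same number of accepted draws from h.
system.time(rejection_sample(1e5, h,
                             f  = function(x) dexp(x - m, rate = lambda),
                             rf = function(n) m + rexp(n, rate = lambda),
                             M  = M))
system.time(rsample.h(1e5, h, r = r.best))
```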
The histogram method consistently outperforms the exponential method, at least with the implementation I coded up. And as an added bonus, the histogram method can be packaged as a general-purpose function that doesn’t require any thought to use, especially when the density is monotonic or monomodal. It would take some extra code and extra computation to make the function choose a near-optimal $r$ value, but that could be accomplished as well.
Unfortunately, the histogram method requires the density of interest to have bounded support. In practice, though, if you had a density with unbounded support, you could truncate it far out in the tails without materially changing your results.