Comparing k-Nearest Neighbors and Linear Regression
This is a simple exercise comparing linear regression and k-nearest neighbors (k-NN) as classification methods for identifying handwritten digits. It’s an exercise from Elements of Statistical Learning. The training data and test data are available on the textbook’s website.
The data come from handwritten digits in ZIP codes on pieces of mail. We would like to devise an algorithm that learns to classify handwritten digits with high accuracy.
There are 256 features, corresponding to the pixels of a sixteen-by-sixteen digital scan of the handwritten digit. The features range in value from -1 (white) to 1 (black), with varying shades of gray in between.
The training data set contains 7291 observations, while the test data contains 2007.
The first column of each file gives the true digit, taking values from 0 to 9. For simplicity, we will only look at 2’s and 3’s. As a result, we can code the class with a single dummy variable: 0 for a 2 and 1 for a 3.
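For concreteness, here is a sketch of the filtering and dummy coding in Python/NumPy (the original analysis was done in R). It assumes the files are space-separated with the label in the first column, as on the textbook's website; the path is hypothetical.

```python
import numpy as np

def load_digits(path, digits=(2, 3)):
    """Load zipcode data: first column is the true digit, the remaining
    256 columns are pixel intensities in [-1, 1]. Keep only the requested
    digits and recode them as a 0/1 dummy variable."""
    data = np.loadtxt(path)
    labels = data[:, 0].astype(int)
    mask = np.isin(labels, digits)
    X = data[mask, 1:]
    # dummy coding: 0 for the first digit (2), 1 for the second (3)
    y = (labels[mask] == digits[1]).astype(int)
    return X, y

# hypothetical paths to the downloaded training and test files
# X_train, y_train = load_digits("zip.train")
# X_test, y_test = load_digits("zip.test")
```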
Just for fun, let’s glance at the first twenty-five scanned digits of the training dataset. This can be done with R’s image command, but I used grid graphics to have a little more control.
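If you aren't set up for graphics, a quick way to eyeball a digit is to reshape its 256 pixel values into a 16×16 grid and render it as text. A minimal Python sketch (an assumption; the original post drew the digits in R):

```python
import numpy as np

def ascii_digit(pixels, chars=" .:-=+*#%@"):
    """Render one 256-pixel digit (values in [-1, 1]) as ASCII art,
    mapping -1 (white) to ' ' and 1 (black) to '@'."""
    img = np.asarray(pixels, dtype=float).reshape(16, 16)
    # scale [-1, 1] linearly onto indices into the character ramp
    idx = np.clip(((img + 1) / 2 * (len(chars) - 1)).round().astype(int),
                  0, len(chars) - 1)
    return "\n".join("".join(chars[i] for i in row) for row in idx)
```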
Because the response is a binary dummy variable, we can fit ordinary linear regression to the 0/1 labels and classify an observation as a 3 whenever its fitted value exceeds 0.5 (and as a 2 otherwise).
Another method we can use is k-NN, with various $k$ values.
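A k-NN sketch in Python/NumPy under the same assumptions: Euclidean distance, and a majority vote among the $k$ nearest training digits (equivalently, the average of their 0/1 labels thresholded at 0.5).

```python
import numpy as np

def knn_classifier(X_train, y_train, k):
    """k-nearest-neighbor classifier: Euclidean distance, majority vote
    among the k nearest training points (0/1 labels averaged and
    thresholded at 0.5)."""
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train)

    def predict(X):
        X = np.asarray(X, dtype=float)
        # pairwise squared Euclidean distances, shape (n_new, n_train)
        d2 = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
        nearest = np.argsort(d2, axis=1)[:, :k]
        return (y_train[nearest].mean(axis=1) > 0.5).astype(int)

    return predict
```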
Let’s see which method performed better.
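The comparison metric is the error rate on the test set: the fraction of digits each classifier gets wrong. A one-liner in Python/NumPy (again an assumption; the original used R):

```python
import numpy as np

def error_rate(y_true, y_pred):
    """Fraction of misclassified observations."""
    return float(np.mean(np.asarray(y_true) != np.asarray(y_pred)))
```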
In the plot, the red dotted line shows the error rate of the linear regression classifier, while the blue dashed line gives the k-NN error rates for the different $k$ values.
For this particular data set, k-NN with small $k$ values outperforms linear regression. And among the k-NN procedures, smaller values of $k$ perform better. This reflects the “curse of dimensionality”: with 256 features, the data points are spread so thinly that, as $k$ grows, the additional “nearest neighbors” being averaged over often aren’t actually very near.