Why Ignore Unique Words in Vector Spaces?

Lately I’ve been working on natural language processing, learning as I go. My first project is to discover topics being discussed in a set of documents and group the docs by these topics. So I studied document similarity and discovered a Python library called GenSim which builds on top of numpy and scipy.

The tutorial starts by mapping documents to points in a vector space. Before we map, we do some basic text processing to reduce “noise” – for example, stripping stop words. The tutorial casually mentions removing all words that appear only once in the corpus. It’s not obvious to me why one would do this, and the tutorial has no explanation. I did a bunch of googling and found this is commonly done, but could not find any explanation why. Then I thought about it a little more, and I think I know why.
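To make that concrete, here’s a rough sketch of the preprocessing step in plain Python. The documents and the stop-word list are made up for illustration; the actual tutorial uses its own word lists.

```python
# A rough sketch of the preprocessing: lowercase, strip stop words, then
# drop every word that appears only once in the whole corpus. The documents
# and stop words here are invented for illustration.
from collections import Counter

documents = [
    "The brown monkey can swim",
    "A brown fox can swim",
    "The brown fox is quick",
]
stop_words = {"the", "a", "is", "can"}

# Tokenize and strip stop words.
texts = [[w for w in doc.lower().split() if w not in stop_words]
         for doc in documents]

# Count each word across the whole corpus...
counts = Counter(w for text in texts for w in text)

# ...and remove the words that appear only once.
texts = [[w for w in text if counts[w] > 1] for text in texts]
print(texts)  # "monkey" and "quick" are gone
```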

We map documents into a vector space with one dimension per distinct word in the corpus. A document’s value or position along each dimension (word) is computed. It could be simply the number of times that word appears in that doc – this is the bag-of-words approach. It could be the TF-IDF for that word in that document, which is more complex to compute but could provide better results, depending on what you are doing. However you define it, you end up with a vector for each document.
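Here’s a minimal bag-of-words sketch using numpy to make the mapping concrete. The toy token lists are made up; a TF-IDF version would simply replace the raw counts with weighted values.

```python
# A minimal bag-of-words mapping: one dimension per distinct word, and a
# document's component along each dimension is the count of that word in
# that document. The token lists are made up for illustration.
import numpy as np

texts = [
    ["brown", "fox", "swim"],
    ["brown", "monkey", "swim", "swim"],
    ["brown", "brown", "fox"],
]

vocab = sorted({w for text in texts for w in text})   # one axis per word
word_index = {w: i for i, w in enumerate(vocab)}

def bow_vector(text):
    """Count how many times each vocabulary word appears in one document."""
    v = np.zeros(len(vocab))
    for w in text:
        v[word_index[w]] += 1
    return v

doc_vectors = np.array([bow_vector(t) for t in texts])
print(vocab)        # ['brown', 'fox', 'monkey', 'swim']
print(doc_vectors)  # one row (vector) per document
```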

Once you have this, you can do lots of cool stuff. But it’s this intuitive understanding of what the vector space actually is that makes it clear why we would remove or ignore all words that appear only once in the corpus.

One way to compute document similarity is to measure the angle between the vectors representing the documents. Basically, are they pointing in the same direction? The smaller the angle between them, the more similar they are. This is where cosine similarity comes from. If the angle is 0°, the cosine is 1: they point in exactly the same direction. If the angle is 180°, the cosine is -1: they point in opposite directions. A cosine of 0 (an angle of 90°) means they are orthogonal. The closer the cosine is to 1, the more similar the documents are.
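In code, the cosine is just the dot product divided by the product of the two lengths. A tiny sketch with made-up vectors:

```python
# Cosine similarity: the cosine of the angle between two document vectors.
# The example vectors are made up.
import numpy as np

def cosine_similarity(d1, d2):
    return np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))

a = np.array([2.0, 1.0, 0.0])
b = np.array([4.0, 2.0, 0.0])   # same direction as a
c = np.array([0.0, 0.0, 5.0])   # orthogonal to a

print(cosine_similarity(a, b))  # 1.0 -> as similar as possible
print(cosine_similarity(a, c))  # 0.0 -> orthogonal, nothing in common
```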

Of course, no matter how many dimensions the vector space has, any 2 vectors lie in a common 2-D plane, so the angle between them can be expressed as a single number.

Let’s take a 3-D example: we have 3 words: brown (X axis), swim (Y axis), monkey (Z axis). Across our corpus of many documents, suppose only 1 doc (D1) has the word monkey. The other words each appear in several documents. That means the vectors for every document except D1 lie entirely in the X-Y plane – their Z component is 0. D1 is the only document whose vector sticks out from the X-Y plane.

Now it becomes easy to see why the word monkey does not contribute to similarity. Take any 2 vectors in this example. If both are in the X-Y plane, then it’s obvious that the Z axis has no impact on the angle between them. If only one is in the X-Y plane (call it Dx), the other one (not in the X-Y plane) must be D1. Here, the angle between D1 and Dx is different from the angle between Dx and the projection, or shadow, of D1 onto the X-Y plane. But it doesn’t matter, because the change is the same for every comparison: dropping D1’s Z component leaves its dot product with every Dx unchanged and shrinks D1’s length by a constant factor, so every cosine against D1 is scaled by that same factor. The relative differences between D1 and each other vector in the set are therefore the same whether we use D1 or the projection of D1 onto the X-Y plane. In other words, using cosine similarity they still rank in the same order, nearest to furthest.
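Here’s a quick numeric check of that claim, with made-up counts for D1 and three other documents. Replacing D1 by its projection scales every cosine against D1 by the same constant, so the nearest-to-furthest order doesn’t change.

```python
# Made-up counts for the 3-D (brown, swim, monkey) example. D1 is the only
# document with a nonzero monkey (Z) component; P is its projection onto the
# X-Y plane. Every cosine against D1 shrinks by the same factor |P| / |D1|,
# so the ranking of the other documents is identical either way.
import numpy as np

def cos(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

D1 = np.array([2.0, 1.0, 3.0])
P  = np.array([2.0, 1.0, 0.0])   # projection (shadow) of D1 onto the X-Y plane
others = {
    "D2": np.array([1.0, 2.0, 0.0]),
    "D3": np.array([3.0, 1.0, 0.0]),
    "D4": np.array([1.0, 1.0, 0.0]),
}

for name, d in others.items():
    print(name, round(cos(D1, d), 3), round(cos(P, d), 3))
# D2 0.478 0.8
# D3 0.592 0.99
# D4 0.567 0.949
# Both columns give the same order: D3, then D4, then D2.
```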

Another way to see this is to consider the vector dot product between D1 and any other document vector, say D2. As a reminder, the dot product is the sum of the products of the two vectors’ components in each dimension. Any dimension that has a value of 0 in either vector contributes nothing to the dot product. And of course, every vector except D1 has a 0 for the Z dimension, so the Z component of the dot product will always be 0. The cosine of the angle between any 2 vectors Dj and Dk is their dot product divided by the product of their magnitudes. If we normalize all vectors to unit length, the denominator is always 1 and the cosine is just the dot product.
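A small sketch of that last point, again with made-up vectors: after normalizing to unit length the dot product is the cosine, and the Z term contributes 0 whenever either vector’s Z component is 0.

```python
# After normalizing to unit length, the cosine is just the dot product.
# Made-up vectors: D1 is the only one with a nonzero Z (monkey) component,
# so the Z term of any dot product with another document is always 0.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

D1 = np.array([2.0, 1.0, 3.0])
D2 = np.array([1.0, 2.0, 0.0])

u1, u2 = unit(D1), unit(D2)
print(np.dot(u1, u2))   # the cosine; the denominator is 1 * 1
print(u1[2] * u2[2])    # the Z term: something * 0 = 0
```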

Because of this, any word that appears exactly once in the corpus can be ignored. It has no effect on the similarity of documents. But we can actually make a stronger statement: any word that appears in only a single document can be ignored, no matter how many times it appears in that document. This is a slightly misleading, though not incorrect, part of the tutorial: it removes words that appear only once in the corpus, but it could go further and remove words that appear in only 1 document, even if they occur in that document multiple times.
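In plain Python, the stronger filter is a document-frequency check rather than a raw count check. The token lists below are made up for illustration.

```python
# The stronger filter: drop any word that appears in only one document, no
# matter how many times it appears there. Token lists made up for illustration.
from collections import Counter

texts = [
    ["brown", "swim", "monkey", "monkey", "monkey"],
    ["brown", "swim"],
    ["brown", "brown", "swim"],
]

# Document frequency: in how many documents does each word appear?
doc_freq = Counter(w for text in texts for w in set(text))

# "monkey" occurs three times, but only in one document, so it is dropped.
texts = [[w for w in text if doc_freq[w] > 1] for text in texts]
print(texts)
```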

I haven’t found this explanation written down anywhere, so I can’t confirm it. But I suspect this is why once-occurring words are so often ignored. In fact, people sometimes get better results by also ignoring words that occur in more than 1 document, as long as they occur in only a very small number of docs. The reasoning seems to be that words appearing in a handful of docs out of a corpus of thousands or millions have negligible impact on similarity measures. And each word you ignore reduces the dimensionality of the computations.
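If I’m reading Gensim’s API correctly, this kind of document-frequency filtering is built in: Dictionary.filter_extremes takes a no_below threshold that keeps only tokens appearing in at least that many documents (worth double-checking against the current docs). A sketch under that assumption:

```python
# A sketch using Gensim's Dictionary, assuming filter_extremes with a
# no_below document-frequency threshold (verify against the current docs).
from gensim.corpora import Dictionary

texts = [
    ["brown", "swim", "monkey", "monkey"],
    ["brown", "swim"],
    ["brown", "river", "swim"],
]

dictionary = Dictionary(texts)
# Keep only tokens that appear in at least 2 documents;
# "monkey" and "river" are dropped even though "monkey" occurs twice.
dictionary.filter_extremes(no_below=2, no_above=1.0)
print(dictionary.token2id)
```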