Empty vocabulary; perhaps the documents only contain stop words

In natural language processing (NLP) and text analytics, one of the common errors that researchers and practitioners encounter is "ValueError: empty vocabulary; perhaps the documents only contain stop words".

This error usually occurs when the documents being processed consist only of stop words: common words that carry little meaning on their own and are typically removed during preprocessing.

Causes of Empty Vocabulary

Here are a few common causes that can lead to an empty vocabulary:

  • Insufficient Corpus Size
  • Stop Words Only
  • Inadequate Text Preprocessing

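The error is easy to reproduce with scikit-learn's CountVectorizer. In this minimal sketch, the made-up documents consist entirely of English stop words, so nothing survives stop-word removal:

```python
from sklearn.feature_extraction.text import CountVectorizer

# These documents contain only English stop words, so stop-word
# removal leaves nothing behind and fitting fails.
docs = ["the and of", "a an the"]

vectorizer = CountVectorizer(stop_words="english")
try:
    vectorizer.fit(docs)
except ValueError as e:
    error_message = str(e)

print(error_message)  # empty vocabulary; perhaps the documents only contain stop words
```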
How Does the Error Occur?

import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

# `data` and `cluster` come from the earlier clustering step.
# Group the documents by cluster label so each topic becomes one
# combined document.
docs_df = pd.DataFrame(data, columns=["Doc"])
docs_df['Topic'] = cluster.labels_
docs_df['Doc_ID'] = range(len(docs_df))
docs_per_topic = docs_df.dropna(subset=['Doc']).groupby(['Topic'], as_index=False).agg({'Doc': ' '.join})

def c_tf_idf(documents, m, ngram_range=(1, 1)):
    # Raises the "empty vocabulary" ValueError if every term in
    # `documents` is an English stop word.
    count = CountVectorizer(ngram_range=ngram_range, stop_words="english").fit(documents)
    t = count.transform(documents).toarray()
    w = t.sum(axis=1)                                  # total term count per topic document
    tf = np.divide(t.T, w)                             # term frequency
    sum_t = t.sum(axis=0)                              # occurrences of each term across topics
    idf = np.log(np.divide(m, sum_t)).reshape(-1, 1)   # inverse document frequency
    tf_idf = np.multiply(tf, idf)

    return tf_idf, count


ValueError: empty vocabulary; perhaps the documents only contain stop words

Solutions for ValueError: Empty Vocabulary

To resolve the "ValueError: empty vocabulary; perhaps the documents only contain stop words" error, you can apply the following solutions:

Solution 1: Check the Corpus

Check that your corpus contains a sufficient number of non-empty documents. If it is too small, consider obtaining more text data or combining it with other relevant sources.
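As a minimal sketch (the `docs` list below is hypothetical), you can filter out empty or missing entries before vectorizing:

```python
# `docs` is a hypothetical corpus; two entries are effectively empty.
docs = ["machine learning is fun", "", "   ", "deep learning models"]

# Keep only non-empty string documents before vectorizing.
cleaned = [d for d in docs if isinstance(d, str) and d.strip()]
print(f"{len(cleaned)} of {len(docs)} documents are usable")
```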

Solution 2: Filter Stop Words

Apply stop-word removal methods to eliminate common words from your documents. Many NLP libraries provide pre-defined stop word lists, or you can make a custom list based on your specific needs.
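For example, scikit-learn ships a built-in English stop word list that you can extend with your own terms (the extra words below are hypothetical, domain-specific additions):

```python
from sklearn.feature_extraction.text import CountVectorizer, ENGLISH_STOP_WORDS

# Extend the built-in English list with hypothetical domain terms.
custom_stop_words = list(ENGLISH_STOP_WORDS) + ["doc", "et", "al"]

docs = ["the doc describes a model", "et al propose a model"]
vectorizer = CountVectorizer(stop_words=custom_stop_words)
X = vectorizer.fit_transform(docs)
print(sorted(vectorizer.vocabulary_))  # built-in and custom stop words are gone
```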

Solution 3: Adjust Minimum Document Frequency

The min_df parameter sets the minimum number of documents a word must appear in to be included in the vocabulary.

If this threshold is set too high, every term can be pruned away, leaving the vocabulary empty. Lowering min_df (for example, to 1) keeps rare words and helps avoid the error.
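A quick sketch with made-up documents: keeping min_df low ensures that rare terms still enter the vocabulary, while a threshold higher than any term's document frequency would prune everything and raise a ValueError instead:

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["apple banana", "banana cherry", "cherry date"]

# min_df=1 keeps every term that appears in at least one document.
vectorizer = CountVectorizer(min_df=1)
X = vectorizer.fit_transform(docs)
print(sorted(vectorizer.vocabulary_))  # ['apple', 'banana', 'cherry', 'date']

# With min_df=4, no term appears in enough documents, so fitting
# would raise a ValueError about the pruned (empty) vocabulary.
```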

Solution 4: Increase the Corpus Size

Expand your corpus with additional relevant documents. A larger corpus enriches the vocabulary and increases the chances of retaining meaningful words.

Solution 5: Preprocess Text Data

Make sure that proper text preprocessing steps are applied, such as removing punctuation, numbers, and special characters, and converting text to lowercase. Additionally, consider stemming or lemmatizing the words to reduce the vocabulary size further.
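A minimal preprocessing sketch using only the standard library (the regex below is an assumption; adapt it to your own data and language):

```python
import re

# Lowercase, strip punctuation and digits, then collapse whitespace.
def preprocess(text):
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)  # drop punctuation and numbers
    return re.sub(r"\s+", " ", text).strip()

print(preprocess("Hello, World!! 123 NLP."))  # "hello world nlp"
```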


Why am I encountering the "ValueError: empty vocabulary" error?

This error occurs when the vocabulary, which holds the distinct words found in your text corpus, is empty because every word was filtered out as a stop word.

It can be caused by an insufficient corpus size, documents consisting mainly of stop words, or inadequate text preprocessing.

How can I check if the vocabulary is empty in my code?

You can check the length of the vocabulary with len(vectorizer.vocabulary_). If the length is zero, the vocabulary is empty.
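For example (note that vocabulary_ only exists after a successful fit; if fitting already raised the "empty vocabulary" error, there is no vocabulary to inspect):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["machine learning", "text mining"]
vectorizer = CountVectorizer(stop_words="english")
vectorizer.fit(docs)

# vocabulary_ maps each retained term to its column index.
vocab_size = len(vectorizer.vocabulary_)
print(f"Vocabulary size: {vocab_size}")  # Vocabulary size: 4
```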

Are there any pre-defined stop word lists available?

Yes, many NLP libraries provide pre-defined stop word lists. You can use these lists or create a custom one based on your specific needs.


The "empty vocabulary; perhaps the documents only contain stop words" error is a common challenge when working on NLP tasks.

However, by understanding its causes and applying the solutions described in this article, you can resolve the issue.

Additional Resources

Here are some related topics that can help you understand more about value errors:
