
Build a Model with Word2Vec

Learning Objectives

After completing this unit, you’ll be able to:
  • Explain what a vector gradient is.
  • Compare gradient descent and stochastic gradient descent.

We have the linguistics. We have the math. What’s next on our NLP journey?

Training a Model

Now that you know the objective function, it’s time to train the model. Let’s start by reviewing the objective function.

$$
J(\theta) = -\frac{1}{T}\log L(\theta) = -\frac{1}{T}\sum_{t=1}^{T}\;\sum_{\substack{-m \le j \le m \\ j \ne 0}} \log P(w_{t+j} \mid w_t; \theta)
$$

Remember that theta in the objective function stands for all the vectors in the vocabulary. This includes both the vector for each word when it appears as the center word and the vector for each word when it appears as a context word. For a simple vocabulary, you’d have some center word vectors (like v_aardvark and v_zebra) and some context word vectors (like u_aardvark and u_zebra).

Theta is one long vector made up of every center word vector (v) and every context word vector (u): θ = [v_aardvark, …, v_zebra, u_aardvark, …, u_zebra].

To train the model, we need to calculate the vector gradients for every vector in theta.

What’s a vector gradient? It’s the derivative, or rate of change, of a vector. This rate is what gives the machine a way to map every instance of a word and find the most appropriate vector. Thankfully, Word2vec calculates all the gradients for you!

It’s important to know how vector gradients work, however, as they come into play when we’re training our model.
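To build some intuition, here’s a minimal NumPy sketch (not part of Word2vec itself) of what a gradient means in practice: nudge each component of a vector slightly and measure how much a function of that vector changes. The function f and the vector v are illustrative choices, not anything from the actual model.

```python
import numpy as np

def f(v):
    # A simple scalar function of a vector: f(v) = sum(v**2).
    # Its true gradient is 2 * v.
    return np.sum(v ** 2)

def numerical_gradient(f, v, eps=1e-6):
    # Estimate the gradient of f at v with central differences:
    # nudge each component up and down and see how f responds.
    grad = np.zeros_like(v)
    for i in range(len(v)):
        step = np.zeros_like(v)
        step[i] = eps
        grad[i] = (f(v + step) - f(v - step)) / (2 * eps)
    return grad

v = np.array([1.0, -2.0, 0.5])
print(numerical_gradient(f, v))  # approximately [ 2., -4.,  1.], i.e. 2 * v
```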
Note

If you want to learn more about gradients and derivatives in multivariable calculus, check out the Khan Academy introduction to partial derivatives and the gradient.

Optimize Parameters with Gradient Descent

To optimize your model, you need to minimize the objective function J(θ). In other words, you want your model to have the smallest amount of error possible. One way to minimize J(θ) is an algorithm called gradient descent.

Here’s the basic idea: for the current value of theta (the current word vectors), calculate the gradient of J(θ). Then take a small step in the direction of the negative gradient by changing theta. Repeat these steps over and over until you reach the minimum value of J(θ), where the gradient is zero.

A parabolic function with markings that represent gradient descent using points that get increasingly close together as they approach the function's minimum.
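To make the loop concrete, here’s a minimal sketch of the update rule θ ← θ − α∇J(θ), using a toy convex objective in place of the real Word2vec loss. The learning rate and stopping threshold are illustrative values, not anything Word2vec prescribes.

```python
import numpy as np

def J(theta):
    # Toy convex objective standing in for the real Word2vec loss.
    return np.sum(theta ** 2)

def grad_J(theta):
    # Gradient of the toy objective (Word2vec computes the real gradients for you).
    return 2 * theta

theta = np.array([3.0, -4.0])   # current "word vectors" (toy values)
alpha = 0.1                     # learning rate: size of each step

for step in range(100):
    g = grad_J(theta)
    if np.linalg.norm(g) < 1e-6:   # stop when the gradient is (nearly) zero
        break
    theta = theta - alpha * g      # step in the direction of the negative gradient

print(theta)  # close to [0., 0.], the minimum of J
```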

Note

In practice, J(θ) is not a simple convex function like this. Gradient descent is usually messier than this example, but always has the same goal of finding the lowest point on the function.

Stochastic Gradient Descent

Gradient descent makes a lot of sense. However, the objective it has to work with is extremely large. Remember, J(θ) includes every window in the input text, and a large corpus can contain billions of windows!

This means that computing the gradient of J(θ) is very expensive. You would have to wait a very long time to make each update, as you calculate J(θ) over and over again.

How can you mitigate that problem? The answer is stochastic gradient descent. Rather than calculating the gradient over the entire body of text, you sample a single window (a single position in the text) at random and calculate the gradient based on that single window alone. Then you update the vectors for the words in that window. You repeat the process with many windows across the text, updating each time. Stochastic gradient descent can be very noisy, but it is much faster than traditional gradient descent, and over enough iterations it gives pretty good results.
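Here’s a rough sketch of that sampling loop, again using a made-up per-window loss rather than the real Word2vec objective, just to show the structure: sample one window at random, compute its gradient, update, and repeat.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "corpus": each row stands in for one window's contribution to the loss.
# In real Word2vec the per-window loss is -log P(context | center).
windows = rng.normal(loc=3.0, size=(100_000, 2))

def grad_single_window(theta, window):
    # Gradient of a toy per-window loss: 0.5 * ||theta - window||^2.
    return theta - window

theta = np.zeros(2)
alpha = 0.05

for step in range(10_000):
    w = windows[rng.integers(len(windows))]               # sample one window at random
    theta = theta - alpha * grad_single_window(theta, w)  # noisy update from that window only

print(theta)  # drifts toward roughly [3., 3.] despite the noise
```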
Note

In practice, you compromise between gradient descent and stochastic gradient descent. Rather than calculating the gradient over the entire dataset, or over a single window at a time, you calculate the gradient over mini-batches of data. Each mini-batch contains a few examples from the dataset (often 32, 64, 128, or 256 examples). You update your vectors after each mini-batch, repeating the process over many batches across the dataset.
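A sketch of that mini-batch compromise, reusing the same made-up per-window data as the stochastic example above; the batch size of 64 is just one of the common choices mentioned here.

```python
import numpy as np

rng = np.random.default_rng(0)
windows = rng.normal(loc=3.0, size=(100_000, 2))  # same toy per-window data as before

theta = np.zeros(2)
alpha = 0.05
batch_size = 64  # illustrative; 32, 128, or 256 are also common

for step in range(2_000):
    idx = rng.integers(len(windows), size=batch_size)  # sample a mini-batch of windows
    grads = theta - windows[idx]                       # per-window gradients for the batch
    theta = theta - alpha * grads.mean(axis=0)         # average them, then update once

print(theta)  # again drifts toward roughly [3., 3.], with less noise per step
```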

Other Approaches with Word2vec

Up to this point, this module has discussed predicting what context words you find around a specific center word. This approach is called the skip-gram model. Word2Vec also supports the continuous bag of words model. With the continuous bag of words model, rather than finding context words from a center word, you work to predict the center word, given context words.
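If you want to experiment with both models, the open-source gensim library exposes them through a single flag. This sketch assumes gensim 4.x is installed (pip install gensim); the tiny corpus and parameter values are purely illustrative.

```python
from gensim.models import Word2Vec

sentences = [
    ["the", "aardvark", "ate", "the", "ants"],
    ["the", "zebra", "ran", "across", "the", "plain"],
]

# sg=1 trains the skip-gram model (predict context words from the center word);
# sg=0 trains the continuous bag of words model (predict the center word from its context).
skip_gram = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)
cbow = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)

print(skip_gram.wv["zebra"][:5])  # the first few dimensions of a learned word vector
```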

There are lots of ways to train the model using Word2vec. In addition to the naive softmax method we have discussed in detail, Word2vec supports a technique called negative sampling. With negative sampling, rather than updating all the word vectors after each optimization step (as with gradient descent), or updating just the word vectors in your sample (as with stochastic gradient descent), you update both the word vectors in your sample and a set of negative words. This means that you identify a set of words that are very unlikely to be found near your center word (if you’re using the skip-gram model), and update their probability to zero. (If you’re using the continuous bag of words model, you identify a set of words very unlikely to be the center word, and update their probability to zero.)
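As a rough illustration of what negative sampling optimizes, here’s a NumPy sketch of the per-pair objective: raise the score of the true (center, context) pair and lower the scores of a handful of randomly sampled negative words. The vectors and the choice of 5 negatives are made up for illustration. (In gensim, for example, the number of negative words is set with the negative parameter.)

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
dim = 50
v_center = rng.normal(scale=0.1, size=dim)          # center word vector
u_context = rng.normal(scale=0.1, size=dim)         # true context word vector
u_negatives = rng.normal(scale=0.1, size=(5, dim))  # 5 sampled "negative" word vectors

# Negative-sampling loss for one (center, context) pair:
# minimize -[ log sigma(u_o . v_c) + sum_k log sigma(-u_k . v_c) ]
loss = -(
    np.log(sigmoid(u_context @ v_center))
    + np.sum(np.log(sigmoid(-u_negatives @ v_center)))
)
print(loss)
```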

Get Hands On with Natural Language Processing

In this trail, you’ll complete problem sets using a Google product called Colaboratory. That means you need a Google account to complete the challenges. If you don’t have a Google account, or you want to use a separate one, you can create an account here.

Once you have a Google account:
  1. Download the source code.
  2. Make sure you’re logged in to your Google account.
  3. Go to Colaboratory.
  4. In the dialog menu, click Upload.
  5. Choose the source code file (.ipynb) and click Open.

Now you’re ready to start coding! Each piece of code is contained in cells. When you click into a cell, a play button appears. This button lets you run the code in the cell.

Throughout the worksheet, you’ll find exercise markers that let you know you need to do something. After you complete an exercise, come back to Trailhead to answer the corresponding question about your results.
Note

Before you run a cell, make sure you run all the cells above it first.

Have fun!