The answer lies in the observation that many real-world datasets have a low intrinsic dimensionality.

This is the topic of [**manifold learning**](http://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction), also called **nonlinear dimensionality reduction**, a branch of machine learning (more specifically, _unsupervised learning_). It is still an active area of research today to develop algorithms that can automatically recover a hidden structure in a high-dimensional dataset.

This post is an introduction to a popular dimensionality reduction algorithm: [**t-distributed stochastic neighbor embedding (t-SNE)**](http://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding). Developed by [Laurens van der Maaten](http://lvdmaaten.github.io/) and [Geoffrey Hinton](http://www.cs.toronto.edu/~hinton/), this algorithm has been successfully applied to many real-world datasets. Here, we'll follow the original paper and describe the key mathematical concepts of the method, when applied to a toy dataset (handwritten digits). We'll use Python and the [scikit-learn](http://scikit-learn.org/stable/index.html) library.

## Visualizing handwritten digits

Let's first import a few libraries.

<pre data-code-language="python"
     data-executable="true"
     data-type="programlisting">
# numpy and matplotlib for numerics and plotting.
import numpy as np
import matplotlib.pyplot as plt

# scikit-learn for the digits dataset and the t-SNE implementation,
# scipy for pairwise distances.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from scipy.spatial.distance import pdist, squareform

# moviepy to render the animations at the end of the post.
from moviepy.video.io.bindings import mplfig_to_npimage
import moviepy.editor as mpy
</pre>

Now we load the classic *handwritten digits* dataset. It contains 1797 images with <span class="math-tex" data-type="tex">\\(8 \times 8 = 64\\)</span> pixels each.

<pre data-code-language="python"
     data-executable="true"
     data-type="programlisting">
digits = load_digits()
digits.data.shape
</pre>

<pre data-code-language="python"
     data-executable="true"
     data-type="programlisting">
print(digits['DESCR'])
</pre>

Here are the images:

<pre data-code-language="python"
     data-executable="true"
     data-type="programlisting">
nrows, ncols = 2, 5
plt.figure(figsize=(6, 3))
plt.gray()
for i in range(ncols * nrows):
    ax = plt.subplot(nrows, ncols, i + 1)
    ax.matshow(digits.images[i, ...])
    plt.xticks([]); plt.yticks([])
    plt.title(digits.target[i])
</pre>

Now let's run the t-SNE algorithm on the dataset. It just takes one line with scikit-learn.

<pre data-code-language="python"
     data-executable="true"
     data-type="programlisting">
digits_proj = TSNE().fit_transform(digits.data)
</pre>

Here is a utility function used to display the transformed dataset.
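
Below is a minimal sketch of such a function; it returns the figure, the axes, the scatter plot, and the per-digit label texts, which is the signature the animation code below expects. The color palette and styling choices here are assumptions.

<pre data-code-language="python"
     data-executable="true"
     data-type="programlisting">
def scatter(x, colors):
    # One color per digit class (0-9).
    palette = np.array(plt.cm.tab10.colors)
    f = plt.figure(figsize=(8, 8))
    ax = plt.subplot(aspect='equal')
    # Plot the map points, colored by their digit label.
    sc = ax.scatter(x[:, 0], x[:, 1], lw=0, s=40,
                    c=palette[colors.astype(int)])
    ax.axis('off')
    ax.axis('tight')
    # Add a text label at the median position of each digit cluster.
    txts = []
    for i in range(10):
        xtext, ytext = np.median(x[colors == i, :], axis=0)
        txts.append(ax.text(xtext, ytext, str(i), fontsize=24))
    return f, ax, sc, txts
</pre>

For example, calling `scatter(digits_proj, digits.target)` displays the final t-SNE projection, with one color and one label per digit class.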

Now, let's explain how the algorithm works. First, a few definitions.

A **data point** is a point <span class="math-tex" data-type="tex">\\(x_i\\)</span> in the original **data space** <span class="math-tex" data-type="tex">\\(\mathbf{R}^D\\)</span>, where <span class="math-tex" data-type="tex">\\(D=64\\)</span> is the **dimensionality** of the data space. Every point is an image of a handwritten digit here. There are <span class="math-tex" data-type="tex">\\(N=1797\\)</span> points.

A **map point** is a point <span class="math-tex" data-type="tex">\\(y_i\\)</span> in the **map space** <span class="math-tex" data-type="tex">\\(\mathbf{R}^2\\)</span>. This space will contain our final representation of the dataset. There is a _bijection_ between the data points and the map points: every map point represents one of the original images.

Now, we define the similarity as a symmetrized version of the conditional similarity.
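
As a reminder, these are the definitions from the original paper. The conditional similarity between two data points is

<span class="math-tex" data-type="tex">\\(p_{j|i} = \frac{\exp\left(-\left\| x_i - x_j\right\|^2 / 2\sigma_i^2\right)}{\sum_{k \neq i} \exp\left(-\left\| x_i - x_k\right\|^2 / 2\sigma_i^2\right)}\\)</span>

and the symmetrized similarity is

<span class="math-tex" data-type="tex">\\(p_{ij} = \frac{p_{j|i} + p_{i|j}}{2N}.\\)</span>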

We now compute the similarity with a sigma depending on the data point (found via a binary search). This algorithm is implemented in scikit-learn's `_joint_probabilities` function.
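
As a rough sketch of how that function can be called (the exact signature and module path of this private function may differ between scikit-learn versions; it is assumed here to take a matrix of squared pairwise distances, the target perplexity, and a verbosity flag, and to return the similarities in condensed form):

<pre data-code-language="python"
     data-executable="true"
     data-type="programlisting">
from sklearn.metrics.pairwise import pairwise_distances
from sklearn.manifold.t_sne import _joint_probabilities

# Squared pairwise Euclidean distances between all data points.
D = pairwise_distances(digits.data, squared=True)
# Similarities p_ij, with each sigma_i found by binary search so that the
# perplexity of the conditional distribution equals 30.
P = _joint_probabilities(D, 30., False)
# The output is in condensed form; reshape it to a square matrix.
P = squareform(P)
</pre>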

We also define a similarity matrix for the map points. This is the same idea as for the data points, but with a different distribution ([**t-Student with one degree of freedom**](http://en.wikipedia.org/wiki/Student%27s_t-distribution), or [**Cauchy distribution**](http://en.wikipedia.org/wiki/Cauchy_distribution), instead of a Gaussian distribution). We'll elaborate on this choice later.
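
In the original paper, this map-point similarity is defined as

<span class="math-tex" data-type="tex">\\(q_{ij} = \frac{\left(1 + \left\| y_i - y_j\right\|^2\right)^{-1}}{\sum_{k \neq l} \left(1 + \left\| y_k - y_l\right\|^2\right)^{-1}}\\)</span>

where the numerator is the (unnormalized) density of a Cauchy distribution evaluated at the distance between the two map points.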

Whereas the data similarity matrix <span class="math-tex" data-type="tex">\\(\big(p_{ij}\big)\\)</span> is fixed, the map similarity matrix <span class="math-tex" data-type="tex">\\(\big(q_{ij}\big)\\)</span> depends on the map points. What we want is for these two matrices to be as close as possible. This would mean that similar data points yield similar map points.

## A physical analogy

Let's assume that our map points are all connected with springs. The stiffness of a spring connecting points <span class="math-tex" data-type="tex">\\(i\\)</span> and <span class="math-tex" data-type="tex">\\(j\\)</span> depends on the mismatch between the similarity of the two data points and the similarity of the two map points, that is, <span class="math-tex" data-type="tex">\\(p_{ij} - q_{ij}\\)</span>. Now, we let the system evolve according to the laws of physics. If two map points are far apart while the data points are close, they are attracted together. If they are nearby while the data points are dissimilar, they are repelled.

The final mapping is obtained when the equilibrium is reached.

## Algorithm

Remarkably, this analogy stems exactly from a natural mathematical algorithm. It corresponds to minimizing the [Kullback-Leibler](http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) divergence between the two distributions <span class="math-tex" data-type="tex">\\(\big(p_{ij}\big)\\)</span> and <span class="math-tex" data-type="tex">\\(\big(q_{ij}\big)\\)</span>:
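
<span class="math-tex" data-type="tex">\\(E = KL(P \| Q) = \sum_{i, j} p_{ij} \, \log \frac{p_{ij}}{q_{ij}}.\\)</span>

To minimize this score, gradient descent is performed on the positions of the map points. The gradient from the original paper can be written as

<span class="math-tex" data-type="tex">\\(\frac{\partial E}{\partial y_i} = 4 \sum_j (p_{ij} - q_{ij}) \, \frac{\left\| y_i - y_j\right\|}{1 + \left\| y_i - y_j\right\|^2} \, u_{ij}\\)</span>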

Here, <span class="math-tex" data-type="tex">\\(u_{ij}\\)</span> is a unit vector going from <span class="math-tex" data-type="tex">\\(y_j\\)</span> to <span class="math-tex" data-type="tex">\\(y_i\\)</span>. This gradient expresses the sum of all spring forces applied to map point <span class="math-tex" data-type="tex">\\(i\\)</span>.

Let's illustrate this process by creating an animation of the convergence. We'll have to [monkey-patch](http://en.wikipedia.org/wiki/Monkey_patch) the internal `_gradient_descent()` function from scikit-learn's t-SNE implementation in order to register the position of the map points at every iteration.

<pre data-code-language="python"
     data-executable="true"
     data-type="programlisting">
# This list will contain the positions of the map points at every iteration.
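positions = []

# What follows is a sketch of one way to record the intermediate embeddings (not
# necessarily the exact approach of the original post): wrap the private
# _gradient_descent() function so that the flattened map-point coordinates are
# stored at every evaluation of the objective. The module path
# (sklearn.manifold.t_sne) and this private function are implementation details
# that may change between scikit-learn versions.
from sklearn.manifold import t_sne

original_gradient_descent = t_sne._gradient_descent

def _gradient_descent(objective, p0, it, n_iter, *args, **kwargs):
    def recording_objective(p, *obj_args, **obj_kwargs):
        # `p` holds the flattened (N * 2) map-point coordinates at this iteration.
        positions.append(p.copy())
        return objective(p, *obj_args, **obj_kwargs)
    return original_gradient_descent(recording_objective, p0, it, n_iter,
                                     *args, **kwargs)

t_sne._gradient_descent = _gradient_descent

# Re-run t-SNE with the patched gradient descent, then stack the recorded
# positions into an (N, 2, n_iterations) array used by the animations below.
X_proj = TSNE().fit_transform(digits.data)
X_iter = np.dstack([position.reshape(-1, 2) for position in positions])
y = digits.target
</pre>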

We create an animation using [MoviePy](http://zulko.github.io/moviepy/).

<pre data-code-language="python"
     data-executable="true"
     data-type="programlisting">
f, ax, sc, txts = scatter(X_iter[..., -1], y)

def make_frame_mpl(t):
    i = int(t*40)
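    # Sketch of the frame update (the original post's exact code may differ):
    # move the scatter points and the digit labels to their positions at
    # iteration i, then return the rendered frame for MoviePy.
    x = X_iter[..., i]
    sc.set_offsets(x)
    for j, txt in enumerate(txts):
        xtext, ytext = np.median(x[y == j, :], axis=0)
        txt.set_x(xtext)
        txt.set_y(ytext)
    return mplfig_to_npimage(f)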

animation = mpy.VideoClip(make_frame_mpl,
                          duration=X_iter.shape[2]/40.)
animation.write_gif("tsne.gif", fps=20)
</pre>

<img src="tsne.gif" />

We can observe the different phases of the optimization. The details of the algorithm can be found in the original paper.

Let's also create an animation of the similarity matrix of the map points. We'll observe that it's getting closer and closer to the similarity matrix of the data points.

<pre data-code-language="python"
     data-executable="true"
     data-type="programlisting">
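# Similarity matrix of the map points at the last iteration (a sketch mirroring
# the computation in make_frame_mpl() below): q_ij is proportional to the Cauchy
# kernel of the squared pairwise distances.
n = 1. / (pdist(X_iter[..., -1], "sqeuclidean") + 1)
Q = n / (2. * np.sum(n))
Q = squareform(Q)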
f = plt.figure(figsize=(6, 6))
ax = plt.subplot(aspect='equal')
im = ax.imshow(Q, interpolation='none', cmap=pal)
plt.axis('tight')
plt.axis('off')

def make_frame_mpl(t):
    i = int(t*40)
    n = 1. / (pdist(X_iter[..., i], "sqeuclidean") + 1)
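    # Sketch of the frame update (the original post's exact code may differ):
    # normalize the similarities, show the map-point similarity matrix at
    # iteration i, and return the rendered frame for MoviePy.
    Q = squareform(n / (2. * np.sum(n)))
    im.set_data(Q)
    return mplfig_to_npimage(f)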

animation = mpy.VideoClip(make_frame_mpl,
                          duration=X_iter.shape[2]/40.)
animation.write_gif("tsne_matrix.gif", fps=20)
</pre>

<img src="tsne_matrix.gif" />

## The t-Student distribution

Let's now explain the choice of the t-Student distribution for the map points, while a normal distribution is used for the data points. It is well known that the volume of the <span class="math-tex" data-type="tex">\\(N\\)</span>-dimensional ball of radius <span class="math-tex" data-type="tex">\\(r\\)</span> scales as <span class="math-tex" data-type="tex">\\(r^N\\)</span>. When <span class="math-tex" data-type="tex">\\(N\\)</span> is large, if we pick random points uniformly in the ball, most points will be close to the surface, and very few will be near the center.

This is illustrated by the following simulation, showing the distribution of the distances of these points, for different dimensions.

<pre data-code-language="python"
     data-executable="true"
     data-type="programlisting">
for i, D in enumerate((2, 5, 10)):
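    # Sketch of one way to run this simulation (the original plotting details may
    # differ): draw `npoints` points uniformly in the D-dimensional unit ball by
    # sampling a direction on the sphere and rescaling the radius by u**(1/D),
    # then plot the histogram of their distances to the center.
    npoints = 1000
    directions = np.random.randn(npoints, D)
    directions /= np.linalg.norm(directions, axis=1)[:, np.newaxis]
    radii = np.random.rand(npoints) ** (1. / D)
    points = directions * radii[:, np.newaxis]
    ax = plt.subplot(1, 3, i + 1)
    ax.hist(np.linalg.norm(points, axis=1),
            bins=np.linspace(0., 1., 50))
    ax.set_xlim(0., 1.)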
    ax.set_title('D=%d' % D, loc='left')
</pre>

When reducing the dimensionality of a dataset, if we used the same Gaussian distribution for the data points and the map points, we could get an _imbalance_ among the neighbors of a given point. This imbalance would lead to an excess of attraction forces and a sometimes unappealing mapping. This is actually what happens in the original SNE algorithm, by Hinton and Roweis (2002).

The t-SNE algorithm works around this problem by using a t-Student with one degree of freedom (or Cauchy) distribution for the map points. This distribution has a much heavier tail than the Gaussian distribution, which _compensates for_ the original imbalance. For a given data similarity between two data points, the two corresponding map points will need to be much further apart in order for their similarity to match the data similarity. This can be seen in the following plot.
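
Here is a minimal sketch of such a comparison, plotting the (unnormalized) Gaussian similarity profile <span class="math-tex" data-type="tex">\\(\exp(-z^2)\\)</span> against the Cauchy profile <span class="math-tex" data-type="tex">\\(1/(1+z^2)\\)</span> as a function of the distance <span class="math-tex" data-type="tex">\\(z\\)</span>:

<pre data-code-language="python"
     data-executable="true"
     data-type="programlisting">
z = np.linspace(0., 5., 1000)
gauss = np.exp(-z**2)
cauchy = 1. / (1. + z**2)
plt.plot(z, gauss, label='Gaussian distribution')
plt.plot(z, cauchy, label='Cauchy distribution')
plt.legend()
</pre>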

The t-SNE algorithm provides an effective method to visualize a complex dataset. It successfully uncovers hidden structures in the data, exposing natural clusters and smooth nonlinear variations along the dimensions. It has been implemented in many languages, including Python, and it can be easily used thanks to the scikit-learn library.

The references below link to some optimizations and improvements that can be made to the algorithm and implementations. In particular, the algorithm described here is quadratic in the number of samples, which makes it unscalable to large datasets. One could for example obtain an <span class="math-tex" data-type="tex">\\(O(N \log N)\\)</span> complexity by using the Barnes-Hut algorithm to accelerate the N-body simulation via a quadtree or an octree.
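
Recent versions of scikit-learn expose such an approximation through the `method` and `angle` parameters of `TSNE`; as a minimal sketch (note that the Barnes-Hut method, the default in newer releases, only supports two- or three-dimensional embeddings):

<pre data-code-language="python"
     data-executable="true"
     data-type="programlisting">
digits_proj_bh = TSNE(method='barnes_hut', angle=0.5).fit_transform(digits.data)
</pre>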