<center>Clustering of abstracts related to energy storage</center>
</h1>
<p>
Below are interactive plots visualizing topic modeling on a collection of article abstracts related to energy storage, pulled from Microsoft Academic.
</p>
<h2>
Obtaining the abstracts
</h2>
<p>
The abstracts were obtained with the search terms below, returning the top 1000 results for each term. Duplicate papers were removed (identified by DOI) and only articles in English were retained, resulting in 7857 abstracts:
</p>
<p>
<u>Search Terms:</u> High Temperature Energy Storage, Energy Storage, Fossil Energy Storage, Superconducting Magnetic Energy Storage, Thermal Energy Storage, Flow Battery Energy Storage, Electrochemical Energy Storage, Advanced Adiabatic Compressed Air Energy Storage, Liquid Air Energy Storage, Thermochemical Energy Storage, Mechanical Energy Storage, Sensible Thermal Energy Storage, Methanol Energy Storage, Hydrogen Energy Storage, Li-ion Energy Storage, Lead Acid Energy Storage, Latent Thermal Energy Storage, Ammonia Energy Storage
</p>
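<p>
As a rough sketch (not the exact script used), the de-duplication and language filtering described above could look like the following, assuming the raw search results for all terms have been collected into a CSV with hypothetical <code>doi</code>, <code>language</code>, and <code>abstract</code> columns:
</p>
<pre>
import pandas as pd

# Raw search results, one row per returned paper; the file name and
# column names are assumptions for illustration.
results = pd.read_csv("search_results.csv")

# Remove papers returned by more than one search term, identified by DOI.
results = results.drop_duplicates(subset="doi")

# Keep only English-language articles that actually have an abstract.
results = results[results["language"] == "en"].dropna(subset=["abstract"])

print(f"{len(results)} abstracts retained")
</pre>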
<h2>
Topic Modeling
</h2>
<p>
Topic modeling was performed using Latent Dirichlet Allocation (LDA) with gensim. LDA is an unsupervised machine learning technique that determines a set of topics representing the modeled collection of texts (the corpus).
Each document is given a probability of belonging to each topic, where topics are probability distributions over words. This is a 'soft' clustering technique, in contrast to k-means (used previously), which assigns each document to exactly one cluster and thereby loses the nuance of papers that lie at the intersection of fields.
</p>
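<p>
A minimal sketch of this step with gensim is shown below; the preprocessing, number of topics, and other hyperparameters are illustrative assumptions rather than the exact settings used for the plots.
</p>
<pre>
from gensim import corpora
from gensim.models import LdaModel
from gensim.utils import simple_preprocess

# `abstracts` is the list of abstract strings collected above (assumed).
tokenized = [simple_preprocess(text) for text in abstracts]

# Build the token-to-id mapping and the bag-of-words corpus.
dictionary = corpora.Dictionary(tokenized)
dictionary.filter_extremes(no_below=5, no_above=0.5)  # drop very rare/common words
corpus = [dictionary.doc2bow(doc) for doc in tokenized]

# Train the LDA model; num_topics and passes are illustrative values.
lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=20, passes=10, random_state=42)

# Per-document topic probabilities (the 'soft' cluster assignments).
doc_topics = [lda.get_document_topics(bow, minimum_probability=0.0)
              for bow in corpus]
</pre>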
<h2> Topic Visualization with t-SNE </h2>
<p>
Below is a visualization of the topic modeling of the corpus. First, the texts are represented as points in a 2D plane using t-Distributed Stochastic Neighbor Embedding (t-SNE).
The topic distribution for each paper is visualized by representing each paper as a pie chart. Each slice represents a topic, and the fractional size (angle) of each slice represents the probability of that topic. Only the top 3 topics for each paper are included (resulting in an incomplete pie chart) for the sake of graphics processing.
The top words for each topic are indicated in the legend (see next visualization to explore the topic words in more detail). The topics in the legend are sorted by the number of papers that have that topic as their most probable topic.
<br><br>
To use the plot, mouse over each item to get information about the paper. Papers can be clicked to open the article's web page. Use the tools on the right to pan and zoom, and note the 'refresh' button to reset the view. Topics can be hidden by clicking on the topic color in the legend.
</p>
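<p>
A sketch of how the 2D layout and the top-3 topics per paper can be computed, reusing the hypothetical <code>doc_topics</code> from the LDA sketch above (the perplexity value and the interactive plotting code itself are not shown):
</p>
<pre>
import numpy as np
from sklearn.manifold import TSNE

# Stack per-document topic probabilities into an (n_docs x n_topics) matrix.
topic_matrix = np.array([[prob for _, prob in doc] for doc in doc_topics])

# Embed the topic distributions in 2D with t-SNE.
xy = TSNE(n_components=2, perplexity=30, random_state=42).fit_transform(topic_matrix)

# Indices of the top 3 topics per paper, used for the pie-chart glyphs.
top3 = np.argsort(topic_matrix, axis=1)[:, ::-1][:, :3]
</pre>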
<h2> Topic Exploration with pyLDAvis </h2>
<p>
Below is the visualization of the LDA model using pyLDAvis. The graph on the left uses Principal Component Analysis to visualize the topics in 2D, similar to t-SNE. The dashboard on the right is useful for exploring the words associated with each topic. Slide the relevance metric to about 0.5 to get words more specific to each topic.
</p>
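<p>
The pyLDAvis panel can be produced directly from the trained gensim objects; a minimal sketch is below (the module path applies to recent pyLDAvis releases, older versions used <code>pyLDAvis.gensim</code>):
</p>
<pre>
import pyLDAvis
import pyLDAvis.gensim_models as gensimvis

# Build the interactive topic/term dashboard from the model, corpus, and dictionary.
vis = gensimvis.prepare(lda, corpus, dictionary)

# Save it as a standalone HTML page (file name is illustrative).
pyLDAvis.save_html(vis, "lda_vis.html")
</pre>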