This is an attempt to model Medium's digest by topic. It's a very simple experiment that uses
a combination of (high-dimensional) word embeddings and dynamic bi-directional encoding
on article features. It is a proof-of-concept for modeling "live", publicly available data that is "streamed". There are models for headless web scrapers and dataset models for natural-language articles.
Before you start slinging, do this first (requires Node > 7 and Python 2 or 3):
npm i && python setup.py install

- Scrape dataset

bin/scrape medium

This will go for a while, depending on your internet connection. There is a 30s timeout after which a page is skipped. The scraper tries to use 8 threads.
This crawls the topics page and collects the available topics. It then visits each topic's main page (e.g. culture) and extracts all the landing-page articles (taking the `href` according to the `data-post-id` attribute). Finally, it visits each article page and finds the Medium API payload on the page: an inline `<script>` tag containing the article JSON.

It uses `page.evaluate(...)` to run a regexp over the script content, parses the match as JSON and passes it back to node.js. It then strips the metadata and reduces the object to the Python model `textsum.Article` (`textsum/dataset/article.py`) with the features `title`, `subtitle`, `text`, `tags`, `description` and `short_description` (a sketch of this model follows below). We now have raw data that we can use to do fun things.
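As a reference point, here is a minimal sketch of what the `textsum.Article` record could look like. Only the feature names come from the list above; the types, defaults and the `from_payload` helper are assumptions for illustration, and the sketch assumes Python 3.7+ for dataclasses.

```python
# Hypothetical sketch of the textsum.Article model in textsum/dataset/article.py.
# Only the feature names come from the README; everything else is assumed.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Article:
    """One scraped Medium article, reduced to the features we keep."""
    title: str = ""
    subtitle: str = ""
    text: str = ""
    tags: List[str] = field(default_factory=list)
    description: str = ""
    short_description: str = ""

    @classmethod
    def from_payload(cls, payload):
        """Drop the rest of the scraped metadata and keep only these features."""
        return cls(
            title=payload.get("title", ""),
            subtitle=payload.get("subtitle", ""),
            text=payload.get("text", ""),
            tags=payload.get("tags", []),
            description=payload.get("description", ""),
            short_description=payload.get("short_description", ""),
        )
```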
- Convert raw data to numpy records of examples.
bin/records --src=data/medium --dst=records/medium --pad
This takes the raw data from `src` and serializes it as `textsum.Article` objects for consumption. As it serializes, it tokenizes all the features (`title`, `subtitle`, ...) listed above. It saves these as `np.ndarray`s and stores them in `dst` by topic. Next, the examples are piped to `*.npy` files, which comes in handy with the native `tf.data` API; it's like Hadoop or Spark, but with native TensorFlow compatibility. Finally, the tokens collected for each record are gathered into a `set` per topic, so we don't keep repeated tokens in memory; this is done in a map->reduce fashion. The tokens are gathered by topic on an individual thread as a `set` of `str`s, and the `union` operation reduces the total space for each topic's `map` operation. The `map` stage returns the individual vocab for each feature (as above), which is again reduced by `union`; a sketch of this vocab build follows below. We now have a set of vocab files for each feature in the dataset.
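Here is a minimal sketch of the map->reduce vocab build described above, assuming the `Article` fields from earlier. The helper names, the naive whitespace tokenizer and the thread pool are assumptions for illustration; only the per-topic `set` / `union` idea comes from the text.

```python
# Hypothetical sketch of the per-topic vocab build: "map" each topic to a set
# of tokens per feature on its own thread, then "reduce" the sets with union.
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

FEATURES = ("title", "subtitle", "text", "tags", "description", "short_description")


def topic_vocab(articles):
    """Map stage: one topic's articles -> {feature: set of tokens}."""
    vocab = {feature: set() for feature in FEATURES}
    for article in articles:
        for feature in FEATURES:
            value = getattr(article, feature, "")
            tokens = value if isinstance(value, list) else str(value).split()
            vocab[feature].update(tokens)
    return vocab


def merge_vocabs(left, right):
    """Reduce stage: union the per-feature token sets of two topics."""
    return {feature: left[feature] | right[feature] for feature in FEATURES}


def build_vocab(articles_by_topic):
    """Run the map stage per topic on a thread pool, then reduce by union."""
    with ThreadPoolExecutor() as pool:
        per_topic = list(pool.map(topic_vocab, articles_by_topic.values()))
    return reduce(merge_vocabs, per_topic)
```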
- Final step, Sling (run the experiment)
bin/experiment \
  --model_dir=article_model \
  --dataset_dir=records/medium \
  --input_feature='text' \
  --target_feature='title' \
  --schedule='train'
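For context, a rough sketch of how the serialized `*.npy` records could be fed into the native `tf.data` API, pairing the input feature with the target feature. The `dst/<topic>/<feature>.npy` file layout, the uniform padding across topics and the function names are assumptions; only the `.npy`-to-`tf.data` idea and the feature names come from the text above.

```python
# Hypothetical sketch: load the per-topic *.npy token arrays and build a
# tf.data pipeline pairing the input feature with the target feature.
import glob

import numpy as np
import tensorflow as tf


def load_feature(dataset_dir, feature):
    """Concatenate every topic's records for one feature (assumed layout)."""
    paths = sorted(glob.glob("{}/*/{}.npy".format(dataset_dir, feature)))
    return np.concatenate([np.load(path) for path in paths], axis=0)


def make_dataset(dataset_dir, input_feature="text", target_feature="title",
                 batch_size=32):
    """Build a shuffled, batched (input, target) pipeline from the records."""
    inputs = load_feature(dataset_dir, input_feature)
    targets = load_feature(dataset_dir, target_feature)
    ds = tf.data.Dataset.from_tensor_slices((inputs, targets))
    return ds.shuffle(1024).batch(batch_size).prefetch(1)


# e.g. ds = make_dataset("records/medium")
```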