diff --git a/solutions/system_design/twitter/README.md b/solutions/system_design/twitter/README.md
index 7df01328a32..d14996f1526 100644
--- a/solutions/system_design/twitter/README.md
+++ b/solutions/system_design/twitter/README.md
@@ -26,7 +26,7 @@ Without an interviewer to address clarifying questions, we'll define some use ca
 #### Out of scope
 
 * **Service** pushes tweets to the Twitter Firehose and other streams
-* **Service** strips out tweets based on user's visibility settings
+* **Service** strips out tweets based on users' visibility settings
     * Hide @reply if the user is not also following the person being replied to
     * Respect 'hide retweets' setting
 * Analytics
@@ -129,7 +129,7 @@ If our **Memory Cache** is Redis, we could use a native Redis list with the foll
 | tweet_id user_id meta | tweet_id user_id meta | tweet_id user_id meta |
 ```
 
-The new tweet would be placed in the **Memory Cache**, which populates user's home timeline (activity from people the user is following).
+The new tweet would be placed in the **Memory Cache**, which populates the user's home timeline (activity from people the user is following).
 
 We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):
 
diff --git a/solutions/system_design/web_crawler/README.md b/solutions/system_design/web_crawler/README.md
index 355d36d1115..e6e79ad2245 100644
--- a/solutions/system_design/web_crawler/README.md
+++ b/solutions/system_design/web_crawler/README.md
@@ -77,7 +77,7 @@ Handy conversion guide:
 
 ### Use case: Service crawls a list of urls
 
-We'll assume we have an initial list of `links_to_crawl` ranked initially based on overall site popularity. If this is not a reasonable assumption, we can seed the crawler with popular sites that link to outside content such as [Yahoo](https://www.yahoo.com/), [DMOZ](http://www.dmoz.org/), etc
+We'll assume we have an initial list of `links_to_crawl` ranked initially based on overall site popularity. If this is not a reasonable assumption, we can seed the crawler with popular sites that link to outside content such as [Yahoo](https://www.yahoo.com/), [DMOZ](http://www.dmoz.org/), etc.
 
 We'll use a table `crawled_links` to store processed links and their page signatures.
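
For context on the second Twitter hunk: the README describes the new tweet landing in the **Memory Cache**, which populates each follower's home timeline as a native Redis list. A minimal fan-out-on-write sketch of that idea, assuming redis-py; the `home_timeline:<user_id>` key scheme and the 800-entry cap are illustrative assumptions, not values taken from the README:

```python
# Hedged sketch: fan-out on write for the home timeline, assuming redis-py.
# The key scheme `home_timeline:<user_id>` and the 800-entry cap are
# assumptions for illustration, not values specified in the README.
import redis

cache = redis.Redis(host='localhost', port=6379)

def fan_out_tweet(tweet_id, user_id, follower_ids, max_entries=800):
    """Prepend the new tweet to each follower's Redis-backed home timeline."""
    entry = f'{tweet_id}:{user_id}'
    for follower_id in follower_ids:
        key = f'home_timeline:{follower_id}'
        pipe = cache.pipeline()
        pipe.lpush(key, entry)               # newest tweet goes to the head of the list
        pipe.ltrim(key, 0, max_entries - 1)  # keep each timeline bounded in size
        pipe.execute()
```

Trimming each list keeps per-user memory bounded; the exact cap would depend on the capacity estimates earlier in the README.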