
Commit

Fix typos in Twitter and web crawler exercises (donnemartin#438)
Agade09 authored Jul 5, 2020
1 parent 92240cf commit dfb838d
Showing 2 changed files with 3 additions and 3 deletions.
4 changes: 2 additions & 2 deletions solutions/system_design/twitter/README.md
@@ -26,7 +26,7 @@ Without an interviewer to address clarifying questions, we'll define some use ca
#### Out of scope

* **Service** pushes tweets to the Twitter Firehose and other streams
- * **Service** strips out tweets based on user's visibility settings
+ * **Service** strips out tweets based on users' visibility settings
* Hide @reply if the user is not also following the person being replied to
* Respect 'hide retweets' setting
* Analytics
@@ -129,7 +129,7 @@ If our **Memory Cache** is Redis, we could use a native Redis list with the foll
| tweet_id user_id meta | tweet_id user_id meta | tweet_id user_id meta |
```

- The new tweet would be placed in the **Memory Cache**, which populates user's home timeline (activity from people the user is following).
+ The new tweet would be placed in the **Memory Cache**, which populates the user's home timeline (activity from people the user is following).
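As a rough illustration (not part of the commit), here is a minimal sketch of how a fanout worker might prepend the new tweet to each follower's cached home timeline list; the redis-py client, the key naming, the follower id lookup, and the size cap are all assumptions for this example.

```python
# Sketch only: key names, the follower lookup, and the timeline cap are assumed.
import json

import redis

cache = redis.Redis(host='localhost', port=6379)

MAX_TIMELINE_SIZE = 800  # assumed bound on cached home timeline entries


def fan_out_tweet(tweet_id, user_id, meta, follower_ids):
    """Prepend the new tweet to each follower's home timeline list."""
    entry = json.dumps({'tweet_id': tweet_id, 'user_id': user_id, 'meta': meta})
    for follower_id in follower_ids:
        key = 'home_timeline:{}'.format(follower_id)
        cache.lpush(key, entry)                     # newest tweet first
        cache.ltrim(key, 0, MAX_TIMELINE_SIZE - 1)  # keep the list bounded
```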

We'll use a public [**REST API**](https://github.com/donnemartin/system-design-primer#representational-state-transfer-rest):

2 changes: 1 addition & 1 deletion solutions/system_design/web_crawler/README.md
@@ -77,7 +77,7 @@ Handy conversion guide:
### Use case: Service crawls a list of urls

- We'll assume we have an initial list of `links_to_crawl` ranked initially based on overall site popularity. If this is not a reasonable assumption, we can seed the crawler with popular sites that link to outside content such as [Yahoo](https://www.yahoo.com/), [DMOZ](http://www.dmoz.org/), etc
+ We'll assume we have an initial list of `links_to_crawl` ranked initially based on overall site popularity. If this is not a reasonable assumption, we can seed the crawler with popular sites that link to outside content such as [Yahoo](https://www.yahoo.com/), [DMOZ](http://www.dmoz.org/), etc.

We'll use a table `crawled_links` to store processed links and their page signatures.
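As a rough sketch (not part of the commit), the crawl loop implied by these two structures might look like the following; the in-memory heap, the `fetch_page` stub, and the hashing choice are assumptions for illustration.

```python
# Sketch only: the data stores and fetch_page are stand-ins for real services.
import hashlib
import heapq

links_to_crawl = []   # min-heap of (priority, url); lower priority = more popular
crawled_links = {}    # url -> page signature for already-processed links


def create_signature(page_contents):
    """Hash the page contents so duplicate pages can be detected later."""
    return hashlib.sha256(page_contents.encode('utf-8')).hexdigest()


def crawl_next(fetch_page):
    """Pop the most popular uncrawled link and record its page signature."""
    priority, url = heapq.heappop(links_to_crawl)
    if url in crawled_links:
        return None                      # already processed
    page_contents = fetch_page(url)
    crawled_links[url] = create_signature(page_contents)
    return page_contents
```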

