
format README.md
yongkun.wang committed Nov 15, 2013
1 parent 3928569 commit aa0fa3d
Showing 1 changed file with 4 additions and 3 deletions: README.md
@@ -18,9 +18,9 @@ Therefore, I re-designed the ACK system to let the ACK go back via the route of …

Main enhancements

-..1. Distribute ACK traffic to each Flume node; the ACK goes back to the original agent through the route of the event.
-..2. Reuse the connection for event transmission.
-..3. Buffer the input/output streams for Thrift to improve throughput: https://github.com/yongkun/thrift
+⋅⋅1. Distribute ACK traffic to each Flume node; the ACK goes back to the original agent through the route of the event.
+⋅⋅2. Reuse the connection for event transmission.
+⋅⋅3. Buffer the input/output streams for Thrift to improve throughput: https://github.com/yongkun/thrift

The Apache JIRA issue, with some documents, is here: https://issues.apache.org/jira/browse/FLUME-640
This enhancement was supposed to be merged into 0.10 if Flume had not been upgraded to NG.
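
Enhancement 3 in the list above (buffered Thrift I/O) can be illustrated with a minimal Java sketch. This is not code from this repository or from the linked Thrift fork; the host, port, buffer size, and the commented-out client stub are illustrative assumptions. The idea is to coalesce many small event writes in user-space buffers before they reach the socket, and to keep the connection and client open across events, which also covers enhancement 2 (connection reuse).

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.net.Socket;

import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TIOStreamTransport;
import org.apache.thrift.transport.TTransport;

public class BufferedThriftClientSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical collector address; the real host/port come from the Flume configuration.
        Socket socket = new Socket("collector.example.com", 35853);

        // Wrap the raw socket streams in buffered streams so many small event
        // writes are coalesced into fewer system calls and network packets.
        TTransport transport = new TIOStreamTransport(
                new BufferedInputStream(socket.getInputStream(), 64 * 1024),
                new BufferedOutputStream(socket.getOutputStream(), 64 * 1024));
        TBinaryProtocol protocol = new TBinaryProtocol(transport);

        // A generated Thrift client would be constructed over `protocol` once and
        // then reused for many events instead of reconnecting per batch, e.g.:
        // ThriftFlumeEventServer.Client client = new ThriftFlumeEventServer.Client(protocol);

        transport.close();
    }
}
```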
@@ -31,6 +31,7 @@ This enhancement was supposed to be merged into 0.10 if Flume had not been upgraded to NG.
Use HDFS append() and change the file rotation mechanism to create large HDFS files, which can increase the performance of Map/Reduce programs that use these files as input and reduce the number of block-mapping entries in the Hadoop NameNode.
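
A minimal sketch of the append-based behavior described above, assuming a 0.20-era Hadoop client where the `dfs.support.append` property gates append support; the path and payload are illustrative and not taken from this repository.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAppendSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Append must also be enabled on the cluster; this is the 0.20-era property name.
        conf.setBoolean("dfs.support.append", true);

        FileSystem fs = FileSystem.get(conf);
        Path logFile = new Path("/flume/collector/events.log"); // hypothetical target file

        // Reopen and append to one large file instead of rolling to a new small file
        // on every rotation, so Map/Reduce inputs stay large and the NameNode tracks
        // fewer files and blocks.
        FSDataOutputStream out = fs.exists(logFile) ? fs.append(logFile) : fs.create(logFile);
        out.write("one serialized event\n".getBytes("UTF-8"));
        out.close();
        fs.close();
    }
}
```

The actual rotation policy (size- or time-based) would decide when to finally close one large file and start the next.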



##### This modified version has been heavily used inside our company (Rakuten, http://en.wikipedia.org/wiki/Rakuten), with a single collector receiving more than 300 GB of data per day (more than 328 million events per day, with a peak throughput of about 26K events per second).

Contact
