Use shardFilter during listShards #377
Conversation
So we need a test where we scale the stream up and down. Also make sure we do a hash-key-based put.
Tested with a stream of 8 shards, updated to 16 shards, and scaled down to 12; throughout all of these events, all shards were receiving data as expected.
Verified from the logs that new shard ids are picked up on resharding and records are mapped to their hash key range.
Issue:
The current implementation pulls all shards with listShardsRequest without filtering out non-active shards, which contributes to increased latency during shard-scaling events. Use shardFilter to retrieve only the active shards.
Related: #372
Description of changes:
Bump up AWS SDK version.
Use shardFilter with listShardsRequest.
Include a missing header to avoid a compilation error.
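The change described above can be sketched roughly as follows. This is an illustrative fragment, not the PR's actual code: it assumes the AWS SDK for C++ Kinesis model (`aws-cpp-sdk-kinesis`), and the function name and stream-name parameter are made up for the example.

```cpp
// Sketch only: requires the AWS SDK for C++ (aws-cpp-sdk-kinesis).
#include <aws/kinesis/model/ListShardsRequest.h>
#include <aws/kinesis/model/ShardFilter.h>

Aws::Kinesis::Model::ListShardsRequest MakeListShardsRequest(
    const Aws::String& stream_name) {
  // Only return shards that are currently open, so a re-sharded stream
  // does not force us to page through all of its closed parent shards.
  Aws::Kinesis::Model::ShardFilter filter;
  filter.SetType(Aws::Kinesis::Model::ShardFilterType::AT_LATEST);

  Aws::Kinesis::Model::ListShardsRequest req;
  req.SetStreamName(stream_name);
  req.SetMaxResults(1000);
  req.SetShardFilter(filter);
  return req;
}
```

The `AT_LATEST` filter type is what restricts the listing to open shards; without a filter, ListShards returns closed shards from earlier resharding events as well.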
Testing:
This change is not unit-testable, since its effect is on server-side behavior, so I had to go down the manual integration-test route.
Test case 1: Manually changed req.SetMaxResults(1000); to req.SetMaxResults(10); and tested against a stream in my test account that previously had 100 shards but was re-sharded to 128 shards. I got 13 paginated results with a total of 128 open shards.
Test case 2: Used the CLI to first scale the stream up to 16 shards with UNIFORM_SCALING and recorded the hash ranges. Sampled 1 hash key per range, then scaled down to 8 shards with UNIFORM_SCALING. Started the test case, which just sends data to each of the hash keys I sampled (16 keys total) in a loop every 10 seconds, and verified via CloudWatch that all shards were receiving data. I then scaled up to 16 shards with UNIFORM_SCALING and verified via CloudWatch that all 16 shards were receiving data, before scaling down to 12 shards with UNIFORM_SCALING and verifying that all, and only, the 12 active shards were receiving data.
Added the below log line here during testing:
LOG(info) << ur->hash_key() << "\t" << *shard_id;
and got the following log when scaling down from 16 to 12 shards:
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.