
Conversation

@maca88 (Contributor) commented May 26, 2019

This PR adds three strategies that use a local cache for faster reads; fixes #69.

  1. DistributedLocalCacheRegionStrategy: uses only the Redis pub/sub mechanism to synchronize data between caches.
  2. TwoLayerCacheRegionStrategy: uses the DefaultRegionStrategy logic for updating the Redis cache and the Redis pub/sub mechanism to invalidate local caches.
  3. FastTwoLayerCacheRegionStrategy: uses the FastRegionStrategy logic for updating the Redis cache and the Redis pub/sub mechanism to invalidate local caches.
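Conceptually, the two-layer strategies keep a per-process local cache in front of Redis and use pub/sub so that a write in one instance invalidates the stale local copies held by its peers. A minimal sketch of that invalidation flow, with an in-memory bus standing in for a Redis pub/sub channel (all names here are illustrative, not the provider's API):

```python
class Bus:
    """In-memory stand-in for a Redis pub/sub channel."""
    def __init__(self):
        self.subscribers = []

    def publish(self, message):
        for handler in self.subscribers:
            handler(message)


class TwoLayerCache:
    """Local dict backed by a shared 'distributed' store, with
    pub/sub invalidation of other instances' local caches."""
    def __init__(self, shared_store, bus):
        self.local = {}
        self.shared = shared_store
        self.bus = bus
        bus.subscribers.append(self._on_invalidate)

    def get(self, key):
        # Fast path: serve from the local cache when possible.
        if key in self.local:
            return self.local[key]
        value = self.shared.get(key)
        if value is not None:
            self.local[key] = value
        return value

    def put(self, key, value):
        # Write through to the shared store, then tell peers
        # to drop their now-stale local copy.
        self.shared[key] = value
        self.local[key] = value
        self.bus.publish((self, key))

    def _on_invalidate(self, message):
        sender, key = message
        if sender is not self:           # ignore our own message
            self.local.pop(key, None)


# Two instances sharing one store and one bus
store, bus = {}, Bus()
a, b = TwoLayerCache(store, bus), TwoLayerCache(store, bus)
a.put("k", 1)
print(b.get("k"))   # 1, read through the shared store
a.put("k", 2)       # invalidates b's local copy via the bus
print(b.get("k"))   # 2
```

This also makes the listed drawbacks visible: every write costs an extra invalidation message that each peer must process, while reads that hit the local dict never touch the shared store at all.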

All three strategies share the same drawbacks:

  • No batching for PutMany operation
  • Higher CPU usage due to processing of synchronization/invalidation messages for write operations
  • Slow write operations

Additional drawbacks for DistributedLocalCacheRegionStrategy:

  • Slow Lock/LockMany operations
  • After adding or restarting an instance, performance drops until the local cache is repopulated.

All three strategies support pipelining for write operations, but it is disabled by default because it requires more CPU and may cause timeouts when the CPU or network is the bottleneck.
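Pipelining trades latency per command for throughput: several commands are queued and flushed in one round trip, which is why the pipelined rows below show much higher op/s but also much higher synchronization times and CPU load. An illustrative sketch of the batching idea (a simulation with hypothetical names, not the provider's implementation):

```python
class PipelinedWriter:
    """Queue writes and flush them in one batch instead of
    paying a network round trip per command."""
    def __init__(self, store, batch_size=100):
        self.store = store
        self.batch_size = batch_size
        self.pending = []
        self.round_trips = 0

    def put(self, key, value):
        self.pending.append((key, value))
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if not self.pending:
            return
        # One simulated round trip applies the whole batch at once.
        for key, value in self.pending:
            self.store[key] = value
        self.round_trips += 1
        self.pending.clear()


store = {}
writer = PipelinedWriter(store, batch_size=100)
for i in range(1000):
    writer.put(f"key-{i}", i)
writer.flush()
print(writer.round_trips)   # 10 round trips instead of 1000
```

The cost is that each flush is a burst of work: the server and the invalidation consumers must absorb the whole batch at once, which matches the timeouts and multi-second sync times observed under pipelining.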

The tests below were run on .NET 4.6.1 on an Intel i7-860, with the Redis server on a different machine on the local network. StackExchange.Redis version 2.0.601 was used as it includes some performance improvements. In this environment the bottleneck was the CPU of the machine running the tests (Intel i7-860), and the Redis server's CPU when pipelining was used. With pipelining enabled, synchronization times were very high and timeouts occurred due to the bottlenecked CPU.

| Test method | TwoLayerCacheRegionStrategy | DistributedLocalCacheRegionStrategy | DefaultRegionStrategy |
|---|---|---|---|
| StressTestGetAsync(1000) | 520161 op/s | 2234093 op/s | 7450 op/s |
| StressTestGetManyAsync(1000, 10) | 72375 op/s | 1106187 op/s | 7270 op/s |
| StressTestLockUnlockAsync(1000) | 4265 op/s (*) | 1171 op/s (*) | 3764 op/s |
| StressTestLockUnlockManyAsync(1000, 10) | 3929 op/s (*) | 1080 op/s (*) | 4281 op/s |
| StressTestPutAsync(1000) | 3164 op/s, 11ms sync time (*)<br>with pipelining: 33285 op/s, 296ms sync time (*) | 3127 op/s, 2ms sync time (*)<br>with pipelining: 31243 op/s, 4143ms sync time (*) | 7211 op/s |
| StressTestPutManyAsync(1000, 10) | 293 op/s, 13ms sync time (*)<br>with pipelining: 1366 op/s, 25ms sync time (*) | 316 op/s, 2ms sync time (*)<br>with pipelining: 1170 op/s, 80ms sync time (*) | 6729 op/s |
| StressTestPutAndGetAsync(1000, 20, 80) | 6076 op/s, 10ms sync time (*)<br>with pipelining: 6804 op/s, 10ms sync time (*) | 15519 op/s, 1ms sync time (*)<br>with pipelining: 138007 op/s, 2075ms sync time (*) | 7379 op/s |
| StressTestPutAndClearAsync(1000, 99, 1) | 3161 op/s, 15ms sync time (*)<br>with pipelining: 12648 op/s, 23ms sync time (*) | 3364 op/s, 2ms sync time (*)<br>with pipelining: 24894 op/s, 4ms sync time (*) | 6947 op/s |
| StressTestPutAndRemoveAsync(1000, 80, 20) | 3225 op/s, 11ms sync time (*)<br>with pipelining: 33861 op/s, 278ms sync time (*) | 3158 op/s, 2ms sync time (*)<br>with pipelining: 32417 op/s, 4126ms sync time (*) | 7346 op/s |
| StressTestPutGetAndClearAsync(1000, 19, 80, 1) | 5831 op/s, 10ms sync time (*)<br>with pipelining: 6309 op/s, 10ms sync time (*) | 18407 op/s, 2ms sync time (*)<br>with pipelining: 112632 op/s, 2ms sync time (*) | 6141 op/s |
| StressTestPutGetAndRemoveAsync(1000, 20, 75, 5) | 5865 op/s, 11ms sync time (*)<br>with pipelining: 7009 op/s, 12ms sync time (*) | 16771 op/s, 1ms sync time (*)<br>with pipelining: 142914 op/s, 2074ms sync time (*) | 7485 op/s |

(*) Only 4 tasks were used in these tests, due to the strategy's high CPU usage; tests without (*) used 8 tasks.

I didn't realize that the cache providers weren't disposed per test, which heavily impacted the benchmarks above. I've rerun the tests and updated the values.

@maca88 (Contributor, Author) commented May 27, 2019

I didn't see that the cache providers weren't disposed per test, which heavily impacted the benchmarks above. The cache providers are now disposed per test for the Redis provider, and I've rerun the tests and updated the benchmarks.

@maca88 maca88 changed the title Add distributed local cache for StackExchange.Redis provider WIP - Add distributed local cache for StackExchange.Redis provider May 28, 2019
@maca88 (Contributor, Author) commented Jun 2, 2019

The last commit also contains a fix for the DefaultRegionStrategy clear operation: due to the delay of pub/sub messages, it was possible for the current version to be replaced by an older one. Currently, there is an issue with the async generator, which does not generate the cancellation token parameter for AbstractRegionStrategy.ExecutePutManyAsync, so I will leave this as WIP until that issue is resolved.
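The clear fix guards against out-of-order pub/sub delivery: a clear message carrying an older version must not roll the region back after a newer clear has already been applied. A sketch of that monotonic-version guard (illustrative names, not the actual DefaultRegionStrategy code):

```python
class RegionVersion:
    """Tracks a region's clear-version; a delayed pub/sub message
    carrying an older version must not win over a newer one."""
    def __init__(self):
        self.version = 0

    def on_clear_message(self, incoming_version):
        # Apply only monotonically increasing versions; a delayed
        # message with an older version is ignored.
        if incoming_version > self.version:
            self.version = incoming_version
            return True    # local cache should be cleared
        return False       # stale message, keep the current version


region = RegionVersion()
print(region.on_clear_message(2))  # True  (newer version, clear applied)
print(region.on_clear_message(1))  # False (delayed older message ignored)
print(region.version)              # 2
```

Without the guard, the second message would overwrite the version with 1, and a subsequent clear to version 2 would look like a no-op, leaving stale entries alive.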

@maca88 maca88 changed the title WIP - Add distributed local cache for StackExchange.Redis provider Add distributed local cache for StackExchange.Redis provider Jun 5, 2019

```csharp
namespace NHibernate.Caches.StackExchangeRedis.Tests
{
	public class CacheRegionStrategyFactory : DefaultCacheRegionStrategyFactory
```
A Member commented:
Was there a reason not to add support for these new strategies directly in the DefaultCacheRegionStrategyFactory @maca88 ?

See #123

The Member commented:

Looking to include it in the default factory, I have now seen the reason, see #123.



Successfully merging this pull request may close these issues.

"Third Level Cache" for Redis
