
more info on race condition #384

Closed
jjb opened this issue Jul 14, 2020 · 5 comments
@jjb
Contributor

jjb commented Jul 14, 2020

Hi, thanks for an amazing project! Question about this:

Coverband on very high volume sites with many server processes reporting can have a race condition

  • Does this mean that on very high-volume sites with many server processes reporting, the race condition will occur somewhat frequently, while on lower-volume sites with fewer server processes the same race condition can still occur, just much less often?
  • How does this race condition present?
@danmayer
Owner

yeah, good call. I should try to improve this area of documentation... I will work on improving the docs and comments in the files.

For now:

Does this mean that on very high-volume sites with many server processes reporting, the race condition will occur somewhat frequently, while on lower-volume sites with fewer server processes the same race condition can still occur, just much less often?

Yes. The race condition occurs when multiple servers each perform two operations against Redis; the more servers you have talking to Redis, the higher the chance that another server will talk to Redis between the two calls. So there is never an issue with a single server, but the chance of a race increases as servers are added. Honestly, I never noticed this error until running on systems with more than 250 reporting Puma processes.
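To make the two-call race concrete, here is a minimal sketch (not Coverband's actual code; a plain Hash stands in for Redis, and the key and counts are made up) of a non-atomic read-modify-write losing an update:

```ruby
# Sketch of the race: two "servers" each do GET then SET against
# the same key. A plain Hash stands in for Redis.
fake_redis = { "foo.rb:7" => 10 }

# Step 1: both servers GET the current hit count for a line.
server_a_view = fake_redis["foo.rb:7"]  # sees 10
server_b_view = fake_redis["foo.rb:7"]  # also sees 10

# Step 2: each adds its local hits and SETs the result back.
fake_redis["foo.rb:7"] = server_a_view + 3  # writes 13
fake_redis["foo.rb:7"] = server_b_view + 5  # writes 15, clobbering A's write

# Server A's 3 hits are lost: we expected 10 + 3 + 5 = 18.
puts fake_redis["foo.rb:7"]  # 15
```

With one server the two steps can never interleave with anyone else's, which is why a single-server setup is safe.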

How does this race condition present?

When the race occurs, one of the X servers wins the coverage update, resulting in some lost coverage data... This doesn't hurt the app or performance in any way, but it could result in hit lines not being reported as hit.

Solution:

We now have two Redis adapters: the normal one, which has this issue, and a new one, which doesn't have the race condition but moves some load from your application servers to the Redis server. To use the new adapter, just set it in your config/coverband.rb like so:

config.store = Coverband::Adapters::HashRedisStore.new(Redis.new(url: redis_url), redis_namespace: 'coverband_data')
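The reason the hash store avoids the race is that it leans on atomic increments (what Redis `HINCRBY` gives you) instead of separate read and write calls. A counterpart sketch to the race above, again with a plain Hash plus a Mutex standing in for Redis's atomicity:

```ruby
# Sketch: an atomic increment cannot lose updates, because
# read + add + write happen as one indivisible operation.
fake_redis = { "foo.rb:7" => 10 }
lock = Mutex.new  # stands in for Redis executing each command atomically

increment = ->(key, by) { lock.synchronize { fake_redis[key] += by } }

# Two "servers" report their local hits concurrently.
[3, 5].map { |hits| Thread.new { increment.call("foo.rb:7", hits) } }.each(&:join)

puts fake_redis["foo.rb:7"]  # 18 -- both servers' hits survive
```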

Note that this will increase memory and CPU needs on your Redis a little bit. Depending on your utilization, you might want a dedicated Redis if you use this... I run it on a dedicated Redis for some apps with massive server clusters, which already put pretty high load on Redis via background jobs and Rails.cache.
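For context, a fuller config/coverband.rb sketch (assuming the coverband and redis gems, and a `REDIS_URL` environment variable; the namespace value is just from the snippet above, adjust to taste):

```ruby
# config/coverband.rb -- sketch; adapt to your app's setup.
require "coverband"
require "redis"

Coverband.configure do |config|
  redis_url = ENV.fetch("REDIS_URL", "redis://localhost:6379")

  # HashRedisStore avoids the read-modify-write race at the cost of
  # somewhat higher CPU/memory load on the Redis server itself.
  config.store = Coverband::Adapters::HashRedisStore.new(
    Redis.new(url: redis_url),
    redis_namespace: "coverband_data"
  )
end
```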

@danmayer
Owner

let me know if you have other questions @jjb

@danmayer danmayer self-assigned this Jul 15, 2020
@danmayer
Owner

OK, I think this is answered pretty well here... we could perhaps update the docs, but maybe we just link to this issue; it will always turn up in Google search now, so calling this good.

@jjb
Contributor Author

jjb commented Aug 29, 2020

okay, made a PR. Feel free to reword or reject if you think Google search is good enough :-D thanks

@danmayer
Owner

ok cool that is awesome I will take a look
