Gateload parity / investigate longer PushToSubscriberInteractive round trips #56

Open
tleyden opened this issue Jan 3, 2017 · 21 comments

tleyden commented Jan 3, 2017

couchbaselabs/mobile-testkit#205

3 SG nodes / 2 accel nodes

5k / 5k


tleyden commented Jan 3, 2017

With these sgload params:

sgload gateload --createreaders --createwriters --numreaders 100 --numwriters 100 --numupdaters 100 --numrevsperdoc 5000 --numdocs 10000 --numchannels 10 --batchsize 100  --expvarprogressenabled --loglevel debug 

I'm seeing long gateload roundtrip times:

DBUG[01-03|23:27:02] Statsd timing stat for gateload_roundtrip delta=1m46.158577322s
DBUG[01-03|23:27:02] Statsd timing stat for gateload_roundtrip delta=1m45.772005479s
DBUG[01-03|23:27:02] Statsd timing stat for gateload_roundtrip delta=1m44.014131679s
DBUG[01-03|23:27:02] Statsd timing stat for gateload_roundtrip delta=1m44.161888201s
DBUG[01-03|23:27:02] Statsd timing stat for gateload_roundtrip delta=1m43.768857852s
DBUG[01-03|23:27:02] Statsd timing stat for gateload_roundtrip delta=1m43.539814586s

Couchbase bucket stats (post-run):

[screenshot: screen shot 2017-01-03 at 3 34 17 pm]


tleyden commented Jan 3, 2017

TODO:

  • Compare gateload against a running SG cluster and try to get similar stats / timing.
  • Might need to add a throttling ability to sgload to get better parity with gateload numbers (see the sketch after this list)
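
A minimal sketch of what that throttling could look like, assuming a hypothetical writeDoc helper standing in for sgload's real write path; a shared time.Ticker caps the aggregate write rate across writer goroutines:

package main

import (
	"sync"
	"time"
)

// writeDoc is a hypothetical stand-in for an sgload document write.
func writeDoc(docNum int) {
	// POST the doc to Sync Gateway here.
}

func main() {
	const writesPerSec = 500 // target aggregate rate, per the gateload numbers
	ticker := time.NewTicker(time.Second / writesPerSec)
	defer ticker.Stop()

	var wg sync.WaitGroup
	for docNum := 0; docNum < 10000; docNum++ {
		<-ticker.C // block until the next write slot opens
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			writeDoc(n)
		}(docNum)
	}
	wg.Wait()
}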


tleyden commented Jan 5, 2017

Gateload behavior for 5k / 5k (2 accel / 2 SG), ignoring rampup:

  • 5K writers, each writer tries to write every 10s, i.e. 500 writes per second in aggregate
  • No updates; all inserts
  • Readers running concurrently, with channels spread among the readers

TODO:

  • Have writers write-then-sleep: start the timer before writing, so the write time counts against the sleep time (see the sketch after this list)
  • Batchsize = 1 to simulate gateload
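
A minimal sketch of that write-then-sleep pacing, again assuming a hypothetical writeDoc helper; the timer starts before the write, so the write's own latency counts against the sleep:

package main

import "time"

// writeDoc is a hypothetical stand-in for an sgload document write.
func writeDoc(docNum int) {}

func writerLoop(interval time.Duration, numDocs int) {
	for i := 0; i < numDocs; i++ {
		start := time.Now()
		writeDoc(i)
		// Sleep only for whatever is left of the interval; if the write
		// itself took longer than the interval, start the next write
		// immediately rather than stretching the cycle.
		if remaining := interval - time.Since(start); remaining > 0 {
			time.Sleep(remaining)
		}
	}
}

func main() {
	writerLoop(10*time.Second, 5) // one write every 10s, as in gateload
}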


tleyden commented Jan 5, 2017

With these sgload params:

go run main.go gateload \
    --createreaders \
    --createwriters \
    --numreaders 100 \
    --numwriters 100 \
    --numupdaters 100 \
    --numrevsperdoc 10 \
    --numdocs 10000 \
    --numchannels 1 \
    --batchsize 1 \
    --loglevel debug \
    --sg-url "http://localhost:8000/db/"

Here is a visualization of the activity:

[screenshot: screen shot 2017-01-04 at 6 49 37 pm]

[screenshot: screen shot 2017-01-04 at 6 49 29 pm]


tleyden commented Jan 6, 2017

I'm running into a blocker if I try to do a 5k/5k test:

$ go run main.go gateload --createreaders --createwriters \
    --numreaders 5000 --numwriters 5000 --numupdaters 0 \
    --numrevsperdoc 2 --numdocs 500000 --numchannels 1 \
    --batchsize 1 --expvarprogressenabled --writerdelayms 10000 \
    --loglevel debug \
    --sg-url http://s61705cnt72.sc.couchbase.com:4984/db/

It's giving me this error:

WARN[01-05|17:07:27] HttpClientRetry                          url=http://s61705cnt72.sc.couchbase.com:4985/db/_user/ numAttempts=10
panic: Error creating user in datastore.  User: {reader-user-4376-47a21bcd-8749-4ab2-be7b-74e796ce7709 reader-passw0rd-4376-47a21bcd-8749-4ab2-be7b-74e796ce7709}, Err: POST http://s61705cnt72.sc.couchbase.com:4985/db/_user/ giving up after 11 attempts

goroutine 29978 [running]:
panic(0x40bb20, 0xc433af29a0)
	/usr/local/go/src/runtime/panic.go:500 +0x1a1
github.com/couchbaselabs/sgload/sgload.(*Agent).createSGUserIfNeeded(0xc42b40afc0, 0xc42b410260, 0x1, 0x1)
	/Users/tleyden/Development/gocode/src/github.com/couchbaselabs/sgload/sgload/agent.go:35 +0x3fd
github.com/couchbaselabs/sgload/sgload.(*Reader).Run(0xc42b40afc0)
	/Users/tleyden/Development/gocode/src/github.com/couchbaselabs/sgload/sgload/reader.go:118 +0x15d
created by github.com/couchbaselabs/sgload/sgload.GateLoadRunner.startReaders
	/Users/tleyden/Development/gocode/src/github.com/couchbaselabs/sgload/sgload/gateload_runner.go:228 +0xff
exit status 2

and I can see in the scoop logs that user creation is taking upwards of 2 minutes:

[screenshot: screen shot 2017-01-06 at 8 09 00 am]


tleyden commented Jan 6, 2017

Seeing view warnings in the Sync Gateway logs:

_time=2017-01-06T08:13:08.260-08:00 _level=INFO _msg=go-couchbase: call to ViewCustom("sync_gateway", "role_access") in github.com/couchbase/sync_gateway/db.(*DatabaseContext).ComputeRolesForUser took 6.844358193s
_time=2017-01-06T08:13:08.260-08:00 _level=INFO _msg=go-couchbase: call to ViewCustom("sync_gateway", "access_vbseq") in github.com/couchbase/sync_gateway/db.(*DatabaseContext).ComputeVbSequenceChannelsForPrincipal took 9.208915771s
_time=2017-01-06T08:13:08.260-08:00 _level=INFO _msg=go-couchbase: call to ViewCustom("sync_gateway", "access_vbseq") in github.com/couchbase/sync_gateway/db.(*DatabaseContext).ComputeVbSequenceChannelsForPrincipal took 8.99751397s
_time=2017-01-06T08:13:08.260-08:00 _level=INFO _msg=go-couchbase: call to ViewCustom("sync_gateway", "access_vbseq") in github.com/couchbase/sync_gateway/db.(*DatabaseContext).ComputeVbSequenceChannelsForPrincipal took 9.184084207s
_time=2017-01-06T08:13:08.260-08:00 _level=INFO _msg=go-couchbase: call to ViewCustom("sync_gateway", "access_vbseq") in github.com/couchbase/sync_gateway/db.(*DatabaseContext).ComputeVbSequenceChannelsForPrincipal took 9.179951494s
_time=2017-01-06T08:13:08.261-08:00 _level=INFO _msg=go-couchbase: call to ViewCustom("sync_gateway", "access_vbseq") in github.com/couchbase/sync_gateway/db.(*DatabaseContext).ComputeVbSequenceChannelsForPrincipal took 9.262217661s


tleyden commented Jan 6, 2017

sgload is basically trying to create 10K users in parallel, whereas gateload created them serially up-front (if I'm remembering correctly).


adamcfraser commented Jan 6, 2017

You're right: gateload staggers the user creation, and I'm assuming this is the reason why. I guess you can stagger user creation in sgload for now, but we should think about escalating couchbase/sync_gateway#799 post-1.4, even if only to help our own testing.
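
A minimal sketch of what staggered user creation could look like in sgload, assuming a hypothetical createSGUser helper standing in for the POST to the _user/ admin endpoint; a buffered channel acts as a semaphore so only a bounded number of creations are in flight at once, rather than all 10K in parallel:

package main

import "sync"

// createSGUser is a hypothetical stand-in for sgload's POST to the
// _user/ admin endpoint.
func createSGUser(username string) error { return nil }

func createUsersStaggered(usernames []string, maxInFlight int) {
	sem := make(chan struct{}, maxInFlight)
	var wg sync.WaitGroup
	for _, name := range usernames {
		sem <- struct{}{} // acquire a slot; blocks once maxInFlight is reached
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			createSGUser(u)
		}(name)
	}
	wg.Wait()
}

func main() {
	users := make([]string, 10000)
	createUsersStaggered(users, 50) // e.g. at most 50 concurrent creations
}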

@tleyden tleyden self-assigned this Jan 6, 2017
@tleyden tleyden added this to the 1.4.0 milestone Jan 6, 2017
@tleyden tleyden changed the title to "Gateload parity" Jan 6, 2017
@tleyden tleyden added the ready label Jan 6, 2017
@tleyden tleyden changed the title from "Gateload parity" to "Gateload parity / investigate longer PushToSubscriberInteractive round trips" Jan 13, 2017

tleyden commented Jan 13, 2017

Possible cause: continuous changes feeds vs. longpoll changes feeds.

The easiest way to vet this would be to run gateload in longpoll mode and see if there are any differences in round trip times.


tleyden commented Jan 18, 2017

In perf run 308, I'm noticing higher memory usage on Sync Gateway than on what is supposed to be an equivalent gateload run:

http://s61103cnt72.sc.couchbase.com:3000/dashboard/db/sync-gateway-perf-test?var-perftestrundbs=jenkins-sync-gateway-perf-test-308&from=1484766730259&to=1484773930259


tleyden commented Jan 19, 2017

I'm running against the perf cluster created in perf run 311, which is using SG commit couchbase/sync_gateway@adedf1a, and seeing the following errors in the Sync Gateway logs:

2017-01-19T03:31:46.587Z HTTP:  #159385: GET /db/_changes?feed=longpoll&limit=100&heartbeat=30000&style=all_docs&since=13174-2  (as reader-user-272-97dbe52e-4556-4fe7-b9f1-b45c8f76781a)
2017-01-19T03:31:46.587Z HTTP: #159385:     --> 500 Internal error: Stored hash not found for sequence [13174-2], returning zero clock  (0.6 ms)
2017-01-19T03:31:46.613Z HTTP:  #159386: GET /db/_changes?feed=longpoll&limit=100&heartbeat=30000&style=all_docs&since=12683-1  (as reader-user-161-97dbe52e-4556-4fe7-b9f1-b45c8f76781a)
2017-01-19T03:31:46.613Z HTTP: #159386:     --> 500 Internal error: Stored hash not found for sequence [12683-1], returning zero clock  (0.7 ms)
2017-01-19T03:31:48.589Z HTTP:  #159387: GET /db/_changes?feed=longpoll&limit=100&heartbeat=30000&style=all_docs&since=13174-2  (as reader-user-272-97dbe52e-4556-4fe7-b9f1-b45c8f76781a)
2017-01-19T03:31:48.589Z HTTP: #159387:     --> 500 Internal error: Stored hash not found for sequence [13174-2], returning zero clock  (0.7 ms)
2017-01-19T03:31:48.615Z HTTP:  #159388: GET /db/_changes?feed=longpoll&limit=100&heartbeat=30000&style=all_docs&since=12683-1  (as reader-user-161-97dbe52e-4556-4fe7-b9f1-b45c8f76781a)
2017-01-19T03:31:48.615Z HTTP: #159388:     --> 500 Internal error: Stored hash not found for sequence [12683-1], returning zero clock  (0.6 ms)

and the corresponding retry warnings in the sgload logs:

WARN[01-19|03:28:30] HttpClientRetry                          url="http://ec2-54-208-202-94.compute-1.amazonaws.com:4984/db/_changes?feed=longpoll&limit=100&heartbeat=30000&style=all_docs&since=5982-1" numAttempts=4

With the current retry settings:

https://github.com/hashicorp/go-retryablehttp/blob/master/client.go#L33-L35

	defaultRetryWaitMin = 1 * time.Second
	defaultRetryWaitMax = 30 * time.Second
	// note: sgload sets the max number of retries to 10 elsewhere

and backoff logic:

https://github.com/hashicorp/go-retryablehttp/blob/master/client.go#L295-L302

func backoff(min, max time.Duration, iter int) time.Duration {
	mult := math.Pow(2, float64(iter)) * float64(min)
	sleep := time.Duration(mult)
	if float64(sleep) != mult || sleep > max {
		sleep = max
	}
	return sleep
}

Since it's on its 4th attempt (numAttempts=4 in the logs), I believe that means sgload has already waited:

1s
2s
4s
8s

--> 15 seconds total.

That would lead to long gateload roundtrip times. Here is a snippet from the sgload logs, which shows some gateload round trip times apparently taking up to 6 minutes 50 seconds: https://gist.github.com/tleyden/e5e6daec49939f8b1cd2e2769d89c156#file-gistfile1-txt-L763-L790
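
Plugging the defaults into the backoff function quoted above reproduces that 15-second figure; a quick self-contained check (re-declaring the function locally):

package main

import (
	"fmt"
	"math"
	"time"
)

// backoff mirrors the go-retryablehttp logic quoted above.
func backoff(min, max time.Duration, iter int) time.Duration {
	mult := math.Pow(2, float64(iter)) * float64(min)
	sleep := time.Duration(mult)
	if float64(sleep) != mult || sleep > max {
		sleep = max
	}
	return sleep
}

func main() {
	var total time.Duration
	for iter := 0; iter < 4; iter++ {
		total += backoff(1*time.Second, 30*time.Second, iter)
	}
	fmt.Println(total) // prints 15s (1s + 2s + 4s + 8s)
}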

I'm running sgload on the command line with the following settings:

go run main.go gateload --createreaders --createwriters \
    --numreaders 500 \
    --numwriters 500 \
    --numupdaters 0 \
    --numdocs 250000 \
    --numchannels 100 \
    --batchsize 1 \
    --expvarprogressenabled \
    --writerdelayms 10000 \
    --loglevel debug \
    --sg-url http://ec2-.....compute-1.amazonaws.com:4984/db/


tleyden commented Jan 19, 2017

I haven't been able to repro the issues in #56 (comment) unfortunately. I'm seeing another issue though that I want to document while it's happening.

Steps to repro:

The sgload writers have all finished; however, the readers are essentially "stuck" and not getting new docs.

The expvars have been stuck at:

"sgload": {
    "TotalNumDocsPushed": 250000,
    "TotalNumDocsPushedExpected": 250000,
    "TotalNumRevsPulled": 3419483,
    "TotalNumRevsPulledExpected": 6250000
}

for several hours.
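
For reference, a minimal sketch of how these counters can be polled, assuming --expvarprogressenabled serves Go's standard expvar handler at /debug/vars (the address here is a placeholder):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Placeholder address; point this at sgload's expvar listener.
	resp, err := http.Get("http://localhost:9876/debug/vars")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode just the "sgload" map out of the expvar JSON.
	var vars struct {
		Sgload map[string]int64 `json:"sgload"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&vars); err != nil {
		panic(err)
	}
	fmt.Printf("pulled %d of %d expected revs\n",
		vars.Sgload["TotalNumRevsPulled"],
		vars.Sgload["TotalNumRevsPulledExpected"])
}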

and when I run an ngrep, I'm not seeing anything except:

https://gist.github.com/tleyden/3067b330409c1f91392148d2e6658f54

(and I'm not sure what to make of it)


tleyden commented Jan 19, 2017

Possible problems:

  • Changes request isn't returning correct data
  • Stuck in wait mode and not getting notified; issuing a new changes call would return data

Diagnostic steps:

  • Add a verbose mode that logs since values for changes requests
  • Terminate changes requests on the client side: if no response arrives within X minutes, log an error and retry (note: make sure to report …)
  • Try to repro locally with smaller sgload params


tleyden commented Jan 19, 2017

TODO:

Add a log key to Sync Gateway called HTTP++ that logs the full request and response bodies.

NOTE: the * log wildcard doesn't enable it.


tleyden commented Jan 20, 2017

I added a 5 minute timeout for all requests in sgload, specifically targeting _changes requests.
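
For reference, the simplest way to impose such a timeout with Go's standard HTTP client (a sketch, not necessarily how sgload wires it up):

package main

import (
	"net/http"
	"time"
)

func main() {
	// Client.Timeout covers the entire exchange, including reading the
	// response body, which is why a stuck longpoll _changes request
	// surfaces as "Client.Timeout exceeded while reading body".
	client := &http.Client{Timeout: 5 * time.Minute}

	resp, err := client.Get("http://localhost:4984/db/_changes?feed=longpoll&heartbeat=30000")
	if err != nil {
		// Log and retry the changes request from the same since value.
		return
	}
	defer resp.Body.Close()
}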

In perf run 317, I'm seeing lots of sgload errors like:

20:39:33 WARN[01-20|04:39:33] Error getting changes.  Retrying.        error="net/http: request canceled (Client.Timeout exceeded while reading body)" agent.ID=2346
20:39:33 WARN[01-20|04:39:33] Error getting changes.  Retrying.        error="net/http: request canceled (Client.Timeout exceeded while reading body)" agent.ID=463
20:39:33 WARN[01-20|04:39:33] Error getting changes.  Retrying.        error="net/http: request canceled (Client.Timeout exceeded while reading body)" agent.ID=1906
20:39:33 WARN[01-20|04:39:33] Error getting changes.  Retrying.        error="net/http: request canceled (Client.Timeout exceeded while reading body)" agent.ID=732

From the Sync Gateway logs I picked out a random _changes request so I could manually rerun it:

2017-01-20T04:39:25.092Z HTTP:  #955099: GET /db/_changes?feed=longpoll&limit=100&heartbeat=30000&style=all_docs&since=86720-4  (as reader-user-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1)

Based on that since value, I used Postman (a GUI HTTP client) to send this request:

http://{{host}}:{{port_public}}/{{db}}/_changes?feed=longpoll&limit=100&heartbeat=30000&style=all_docs&since=86720-4

with credentials username: reader-user-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1 and password reader-passw0rd-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1

The Postman UI hung for about 1-2 minutes before I finally cancelled the request.

Then I changed feed=longpoll to feed=normal, left everything else the same, and got a near-instant response:

https://gist.github.com/tleyden/e3267144b619932a8fc8101ba2b6953c

I was able to repeat this several times: the request would block indefinitely with feed=longpoll and return instantly with feed=normal.
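
The same comparison can be scripted; a sketch using Go's standard library, with the host as a placeholder and the credentials from above:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func fetchChanges(feed string) {
	// Placeholder host; same query parameters as the stuck request.
	url := "http://HOST:4984/db/_changes?feed=" + feed +
		"&limit=100&heartbeat=30000&style=all_docs&since=86720-4"
	req, _ := http.NewRequest("GET", url, nil)
	req.SetBasicAuth(
		"reader-user-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1",
		"reader-passw0rd-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1")

	client := &http.Client{Timeout: 2 * time.Minute}
	start := time.Now()
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println(feed, "error:", err) // feed=longpoll hits the timeout here
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(feed, "returned", len(body), "bytes in", time.Since(start))
}

func main() {
	fetchChanges("normal")   // returns near-instantly
	fetchChanges("longpoll") // blocks until the client timeout
}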


tleyden commented Jan 20, 2017

I'm also seeing the same error as before in the Sync Gateway logs:

$ cat logs/* | grep -i "stored hash"
2017-01-20T04:23:06.253Z HTTP: #801529:     --> 500 Internal error: Stored hash not found for sequence [84843-7], returning zero clock  (0.7 ms)
2017-01-20T04:23:08.211Z HTTP: #801559:     --> 500 Internal error: Stored hash not found for sequence [83752-2], returning zero clock  (0.7 ms)
2017-01-20T04:23:08.255Z HTTP: #801560:     --> 500 Internal error: Stored hash not found for sequence [84843-7], returning zero clock  (0.6 ms)
2017-01-20T04:23:43.563Z HTTP: #842970:     --> 500 Internal error: Stored hash not found for sequence [86161-5], returning zero clock  (99.5 ms)
2017-01-20T04:23:44.570Z HTTP: #843407:     --> 500 Internal error: Stored hash not found for sequence [86161-5], returning zero clock  (0.7 ms)
2017-01-20T04:23:46.571Z HTTP: #843408:     --> 500 Internal error: Stored hash not found for sequence [86161-5], returning zero clock  (0.6 ms)
2017-01-20T04:23:50.924Z HTTP: #844606:     --> 500 Internal error: Stored hash not found for sequence [86161-5], returning zero clock  (185.4 ms)


tleyden commented Jan 20, 2017

Focusing on one of the 500 Internal error: Stored hash not found for sequence messages, I found this log line:

2017-01-20T04:23:50.924Z HTTP:  #844606: GET /db/_changes?feed=longpoll&limit=100&heartbeat=30000&style=all_docs&since=86161-5  (as reader-user-1919-4f4adc00-ba19-4287-bb25-86acc82b90f1)
2017-01-20T04:23:50.924Z HTTP: #844606:     --> 500 Internal error: Stored hash not found for sequence [86161-5], returning zero clock  (185.4 ms)

and retried that via curl with the credentials for reader-user-1919-testrunid and the same _changes parameters, and got a response:

https://gist.github.com/tleyden/bdb353e2760c2e4e426343bc193e755f

I looked for previous _changes requests for that user and found:

2017-01-20T04:13:13.571Z HTTP:  #70146: GET /db/_changes?feed=longpoll&limit=100&heartbeat=30000&style=all_docs&since=2771-6  (as reader-user-1919-4f4adc00-ba19-4287-bb25-86acc82b90f1)
2017-01-20T04:23:43.563Z HTTP:  #842970: GET /db/_changes?feed=longpoll&limit=100&heartbeat=30000&style=all_docs&since=86161-5  (as reader-user-1919-4f4adc00-ba19-4287-bb25-86acc82b90f1)
2017-01-20T04:23:43.563Z HTTP: #842970:     --> 500 Internal error: Stored hash not found for sequence [86161-5], returning zero clock  (99.5 ms)
2017-01-20T04:23:44.570Z HTTP:  #843407: GET /db/_changes?feed=longpoll&limit=100&heartbeat=30000&style=all_docs&since=86161-5  (as reader-user-1919-4f4adc00-ba19-4287-bb25-86acc82b90f1)
2017-01-20T04:23:44.570Z HTTP: #843407:     --> 500 Internal error: Stored hash not found for sequence [86161-5], returning zero clock  (0.7 ms)
2017-01-20T04:23:46.571Z HTTP:  #843408: GET /db/_changes?feed=longpoll&limit=100&heartbeat=30000&style=all_docs&since=86161-5  (as reader-user-1919-4f4adc00-ba19-4287-bb25-86acc82b90f1)
2017-01-20T04:23:46.571Z HTTP: #843408:     --> 500 Internal error: Stored hash not found for sequence [86161-5], returning zero clock  (0.6 ms)
2017-01-20T04:23:50.924Z HTTP:  #844606: GET /db/_changes?feed=longpoll&limit=100&heartbeat=30000&style=all_docs&since=86161-5  (as reader-user-1919-4f4adc00-ba19-4287-bb25-86acc82b90f1)
2017-01-20T04:23:50.924Z HTTP: #844606:     --> 500 Internal error: Stored hash not found for sequence [86161-5], returning zero clock  (185.4 ms)

I tried re-running the first one with since=2771-6 and got this response:

https://gist.github.com/tleyden/00b3779669b3a572af2281f170133985

and was surprised not to see 86161-5 anywhere in the response.

I also tried the changes request with no since value, and got:

https://gist.github.com/tleyden/cdb5bcb90c5a1a1bb60dd0377aa1ddd3

and was surprised not to see either 2771-6 or 86161-5 anywhere in the response.


tleyden commented Jan 20, 2017

For the stuck changes request mentioned earlier:

2017-01-20T04:39:25.092Z HTTP:  #955099: GET /db/_changes?feed=longpoll&limit=100&heartbeat=30000&style=all_docs&since=86720-4  (as reader-user-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1)

with credentials username: reader-user-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1 and password reader-passw0rd-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1

I enabled Changes+ logging via the REST API, and now I see this in the logs:

2017-01-20T07:20:05.493Z HTTP:  #971273: GET /db/_changes?feed=longpoll&limit=100&heartbeat=30000&style=all_docs&since=86720-4  (as reader-user-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1)
2017-01-20T07:20:05.494Z Changes+: Vector multi changes feed...
2017-01-20T07:20:05.494Z Changes+: Vector MultiChangesFeed({*}, {Since:86720-4 Limit:100 Conflicts:true IncludeDocs:false Wait:true Continuous:false Terminator:0xc44fd9a720 HeartbeatMs:30000 TimeoutMs:300000 ActiveOnly:false}) ...   (to reader-user-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1)
2017-01-20T07:20:05.494Z Changes+: MultiChangesFeed: channels expand to "!:1,22-4f4adc00-ba19-4287-bb25-86acc82b90f1:5" ...   (to reader-user-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1)
2017-01-20T07:20:05.494Z Changes+: GotChannelSince... !:1,22-4f4adc00-ba19-4287-bb25-86acc82b90f1:5
2017-01-20T07:20:05.494Z Changes+: Starting for channel... !, 675.1
2017-01-20T07:20:05.494Z Changes+: [changesFeed] Found 0 changes for channel !
2017-01-20T07:20:05.494Z Changes+: Starting for channel... 22-4f4adc00-ba19-4287-bb25-86acc82b90f1, 675.5
2017-01-20T07:20:05.495Z Changes+: [changesFeed] Found 0 changes for channel 22-4f4adc00-ba19-4287-bb25-86acc82b90f1
2017-01-20T07:20:05.495Z Changes+: MultiChangesFeed waiting...   (to reader-user-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1)
2017-01-20T07:20:05.495Z Changes+: Waiting for "data-bucket"'s count to pass 376


tleyden commented Jan 20, 2017

With Changes+ logging enabled, re-running the request with feed=normal emits the following logs:

2017-01-20T17:47:25.497Z HTTP:  #972531: GET /db/_changes?feed=normal&limit=100&heartbeat=30000&style=all_docs&since=86720-4  (as reader-user-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1)
2017-01-20T17:47:25.498Z Changes+: Vector multi changes feed...
2017-01-20T17:47:25.498Z Changes+: Vector MultiChangesFeed({*}, {Since:86720-4 Limit:100 Conflicts:true IncludeDocs:false Wait:false Continuous:false Terminator:0xc45f82baa0 HeartbeatMs:30000 TimeoutMs:300000 ActiveOnly:false}) ...   (to reader-user-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1)
2017-01-20T17:47:25.498Z Changes+: MultiChangesFeed: channels expand to "!:1,22-4f4adc00-ba19-4287-bb25-86acc82b90f1:5" ...   (to reader-user-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1)
2017-01-20T17:47:25.498Z Changes+: GotChannelSince... !:1,22-4f4adc00-ba19-4287-bb25-86acc82b90f1:5
2017-01-20T17:47:25.498Z Changes+: Starting for channel... !, 675.1
2017-01-20T17:47:25.499Z Changes+: [changesFeed] Found 0 changes for channel !
2017-01-20T17:47:25.499Z Changes+: Starting for channel... 22-4f4adc00-ba19-4287-bb25-86acc82b90f1, 675.5
2017-01-20T17:47:25.518Z Changes+: [changesFeed] Found 100 changes for channel 22-4f4adc00-ba19-4287-bb25-86acc82b90f1
2017-01-20T17:47:25.518Z Changes+: MultiChangesFeed sending &{Seq:86949-14 ID:86-writer-user-1287-4f4adc00-ba19-4287-bb25-86acc82b90f1 Deleted:false Removed:{} Doc:map[] Changes:[map[rev:1-ca94de9af8e9cb3ae38d96518958707e]] Err:<nil> allRemoved:false branched:false}   (to reader-user-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1)
2017-01-20T17:47:25.518Z Changes+: MultiChangesFeed sending &{Seq:86949-14::0.257 ID:95-writer-user-1169-4f4adc00-ba19-4287-bb25-86acc82b90f1 Deleted:false Removed:{} Doc:map[] Changes:[map[rev:1-fa83d9c1d4a3eccb3b9c17246f93cce3]] Err:<nil> allRemoved:false branched:false}   (to reader-user-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1)
2017-01-20T17:47:25.518Z Changes+: MultiChangesFeed sending &{Seq:86949-14::1.237 ID:86-writer-user-15-4f4adc00-ba19-4287-bb25-86acc82b90f1 Deleted:false Removed:{} Doc:map[] Changes:[map[rev:1-2618d6c513f6cdee149773172e1634c6]] Err:<nil> allRemoved:false branched:false}   (to reader-user-1934-4f4adc00-ba19-4287-bb25-86acc82b90f1)
... etc. (the MultiChangesFeed sending line repeats 100+ times)

More log output in a gist.

@djpongh djpongh added the icebox label Jan 20, 2017

tleyden commented Jan 24, 2017

Note to self: a gateload test that shows high PushToSubscriber times:

http://uberjenkins.sc.couchbase.com/view/Performance/job/sync-gateway-perf-test/328/parameters/


tleyden commented Jan 25, 2017

It's worth looking into why https://github.com/couchbaselabs/sync-gateway-accel/issues/52 is easily reproducible by gateload but not by sgload.

@djpongh djpongh removed this from the 1.4.0 milestone Mar 23, 2017