Upgrade JS-commons with MyLargeSegments support #812


Merged
merged 5 commits on Jul 25, 2024
14 changes: 7 additions & 7 deletions package-lock.json

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion package.json
@@ -40,7 +40,7 @@
"node": ">=6"
},
"dependencies": {
- "@splitsoftware/splitio-commons": "1.16.0",
+ "@splitsoftware/splitio-commons": "1.16.1-rc.3",
"@types/google.analytics": "0.0.40",
"@types/ioredis": "^4.28.0",
"bloom-filters": "^3.0.0",
@@ -39,9 +39,9 @@ const userKey = 'nicolas@split.io';
const secondUserKey = 'marcio@split.io';

const baseUrls = {
- sdk: 'https://sdk.push-fallbacking/api',
- events: 'https://events.push-fallbacking/api',
- auth: 'https://auth.push-fallbacking/api'
+ sdk: 'https://sdk.push-fallback/api',
+ events: 'https://events.push-fallback/api',
+ auth: 'https://auth.push-fallback/api'
};
const config = {
core: {
@@ -51,11 +51,14 @@ const config = {
scheduler: {
featuresRefreshRate: 0.2,
segmentsRefreshRate: 0.25,
+ largeSegmentsRefreshRate: 0.25,
impressionsRefreshRate: 3000
},
urls: baseUrls,
streamingEnabled: true,
// debug: true,
+ sync: {
+ largeSegmentsEnabled: true
+ }
};
const settings = settingsFactory(config);
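The config above opts in to the new large-segments sync via `sync.largeSegmentsEnabled`, with a dedicated `largeSegmentsRefreshRate` polling cadence. A minimal sketch of such a config object (option names taken from this test config; the key and rates are just the values used here, and this is not a runnable SDK factory call):

```javascript
// Sketch of a config enabling MyLargeSegments sync, mirroring the test config.
const exampleConfig = {
  core: { key: 'nicolas@split.io' },
  scheduler: {
    featuresRefreshRate: 0.2,
    segmentsRefreshRate: 0.25,
    largeSegmentsRefreshRate: 0.25, // large segments poll on their own cadence
    impressionsRefreshRate: 3000
  },
  streamingEnabled: true,
  sync: {
    largeSegmentsEnabled: true // opt in to /myLargeSegments requests
  }
};

console.log(exampleConfig.sync.largeSegmentsEnabled); // true
```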

@@ -79,30 +82,31 @@ const MILLIS_DESTROY = MILLIS_STREAMING_DISABLED_CONTROL + settings.scheduler.fe

/**
* Sequence of calls:
- * 0.0 secs: initial SyncAll (/splitChanges, /mySegments/*), auth, SSE connection
- * 0.1 secs: SSE connection opened -> syncAll (/splitChanges, /mySegments/nicolas)
- * 0.2 secs: Streaming down (OCCUPANCY event) -> fetch due to fallback to polling (/splitChanges, /mySegments/nicolas)
+ * 0.0 secs: initial SyncAll (/splitChanges, /my(Large)Segments/nicolas), auth, SSE connection
+ * 0.1 secs: SSE connection opened -> syncAll (/splitChanges, /my(Large)Segments/nicolas)
+ * 0.2 secs: Streaming down (OCCUPANCY event) -> fetch due to fallback to polling (/splitChanges, /my(Large)Segments/nicolas)
* 0.3 secs: SPLIT_UPDATE event ignored
* 0.4 secs: periodic fetch due to polling (/splitChanges)
- * 0.45 secs: periodic fetch due to polling (/mySegments/*)
- * 0.5 secs: Streaming up (OCCUPANCY event) -> syncAll (/splitChanges, /mySegments/nicolas)
- * 0.55 secs: create a new client while streaming -> initial fetch (/mySegments/marcio), auth, SSE connection and syncAll (/splitChanges, /mySegments/nicolas, /mySegments/marcio)
+ * 0.45 secs: periodic fetch due to polling (/my(Large)Segments/*)
+ * 0.5 secs: Streaming up (OCCUPANCY event) -> syncAll (/splitChanges, /my(Large)Segments/nicolas)
+ * 0.55 secs: create a new client while streaming -> initial fetch (/my(Large)Segments/marcio), auth, SSE connection and syncAll (/splitChanges, /my(Large)Segments/nicolas, /my(Large)Segments/marcio)
* 0.6 secs: SPLIT_UPDATE event -> /splitChanges
- * 0.7 secs: Streaming down (CONTROL event) -> fetch due to fallback to polling (/splitChanges, /mySegments/nicolas, /mySegments/marcio)
+ * 0.7 secs: Streaming down (CONTROL event) -> fetch due to fallback to polling (/splitChanges, /my(Large)Segments/nicolas, /my(Large)Segments/marcio)
* 0.8 secs: MY_SEGMENTS_UPDATE event ignored
* 0.9 secs: periodic fetch due to polling (/splitChanges)
- * 0.95 secs: periodic fetch due to polling (/mySegments/nicolas, /mySegments/marcio, /mySegments/facundo)
- * 1.0 secs: Streaming up (CONTROL event) -> syncAll (/splitChanges, /mySegments/nicolas, /mySegments/marcio, /mySegments/facundo)
+ * 0.95 secs: periodic fetch due to polling (/my(Large)Segments/nicolas, /my(Large)Segments/marcio, /my(Large)Segments/facundo)
+ * 1.0 secs: Streaming up (CONTROL event) -> syncAll (/splitChanges, /my(Large)Segments/nicolas, /my(Large)Segments/marcio, /my(Large)Segments/facundo)
* 1.1 secs: MY_SEGMENTS_UPDATE event -> /mySegments/nicolas
- * 1.2 secs: Streaming down (CONTROL event) -> fetch due to fallback to polling (/splitChanges, /mySegments/nicolas, /mySegments/marcio, /mySegments/facundo)
+ * 1.2 secs: Streaming down (CONTROL event) -> fetch due to fallback to polling (/splitChanges, /my(Large)Segments/nicolas, /my(Large)Segments/marcio, /my(Large)Segments/facundo)
* 1.3 secs: STREAMING_RESET control event -> auth, SSE connection, syncAll and stop polling
* 1.5 secs: STREAMING_RESET control event -> auth, SSE connection, syncAll
* 1.6 secs: Streaming closed (CONTROL STREAMING_DISABLED event) -> fetch due to fallback to polling (/splitChanges, /mySegments/nicolas, /mySegments/marcio, /mySegments/facundo)
- * 1.8 secs: periodic fetch due to polling (/splitChanges): due to update without segments, mySegments are not fetched
+ * 1.8 secs: periodic fetch due to polling (/splitChanges)
+ * 1.85 secs: periodic fetch due to polling (/myLargeSegments/*). /mySegments/* are not fetched due to update without segments
* 2.0 secs: periodic fetch due to polling (/splitChanges)
* 2.1 secs: destroy client
*/
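The timeline above repeatedly switches between streaming and polling as OCCUPANCY/CONTROL events arrive. A minimal sketch of that fallback logic, as an illustration under assumed event names (this is not the SDK's actual sync manager):

```javascript
// Toy sync manager: streaming-down events fall back to polling and trigger an
// immediate fetch; streaming-up events stop polling and resync everything.
function createSyncManager(syncAll, startPolling, stopPolling) {
  let polling = false;
  return {
    onStreamingEvent(event) {
      if (event === 'STREAMING_UP') {
        if (polling) { stopPolling(); polling = false; }
        syncAll(); // refresh immediately when push resumes
      } else if (event === 'STREAMING_DOWN' || event === 'STREAMING_DISABLED') {
        if (!polling) { startPolling(); polling = true; }
        syncAll(); // "fetch due to fallback to polling"
      }
    },
    isPolling() { return polling; }
  };
}

const log = [];
const manager = createSyncManager(
  () => log.push('syncAll'),
  () => log.push('startPolling'),
  () => log.push('stopPolling')
);
manager.onStreamingEvent('STREAMING_DOWN');
console.log(manager.isPolling()); // true
```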
- export function testFallbacking(fetchMock, assert) {
+ export function testFallback(fetchMock, assert) {
assert.plan(20);
fetchMock.reset();

@@ -213,6 +217,10 @@ export function testFallbacking(fetchMock, assert) {
return { status: 200, body: authPushEnabledNicolas };
});

+ // MyLargeSegments are fetched one more time than MySegments due to smart pausing of MySegments sync at the end of the test
+ fetchMock.get({ url: url(settings, '/myLargeSegments/nicolas%40split.io'), repeat: 14 }, { status: 200, body: { myLargeSegments: [] } });
+ fetchMock.get({ url: url(settings, '/myLargeSegments/marcio%40split.io'), repeat: 10 }, { status: 200, body: { myLargeSegments: [] } });

// initial split and mySegment sync
fetchMock.getOnce(url(settings, '/splitChanges?s=1.1&since=-1'), { status: 200, body: splitChangesMock1 });
fetchMock.getOnce(url(settings, '/mySegments/nicolas%40split.io'), { status: 200, body: mySegmentsNicolasMock1 });
2 changes: 1 addition & 1 deletion src/__tests__/browserSuites/push-refresh-token.spec.js
@@ -57,7 +57,7 @@ export function testRefreshToken(fetchMock, assert) {
sseCount++;
switch (sseCount) {
case 1:
- assert.true(nearlyEqual(Date.now() - start, 0), 'first connection is created inmediatelly');
+ assert.true(nearlyEqual(Date.now() - start, 0), 'first connection is created immediately');
break;
case 2:
assert.true(nearlyEqual(Date.now() - start, MILLIS_REFRESH_TOKEN + MILLIS_CONNDELAY), 'second connection is created with a delay');
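These timing assertions rely on a `nearlyEqual` helper that checks whether an elapsed time falls within a tolerance of the expected value. A sketch of such a helper (the default tolerance here is an assumption, not the suite's actual margin):

```javascript
// True when `actual` is within `epsilon` milliseconds of `expected`.
function nearlyEqual(actual, expected, epsilon = 50) {
  return Math.abs(actual - expected) <= epsilon;
}

console.log(nearlyEqual(0, 0));       // true
console.log(nearlyEqual(1040, 1000)); // true (within 50 ms)
console.log(nearlyEqual(1100, 1000)); // false
```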
2 changes: 1 addition & 1 deletion src/__tests__/browserSuites/readiness.spec.js
@@ -62,7 +62,7 @@ export default function (fetchMock, assert) {
});
});

- assert.test(t => { // Timeout test, we have retries but mySegmnets takes too long
+ assert.test(t => { // Timeout test, we have retries but mySegments takes too long
const testUrls = {
sdk: 'https://sdk.baseurl/readinessSuite2',
events: 'https://events.baseurl/readinessSuite2'
30 changes: 15 additions & 15 deletions src/__tests__/browserSuites/telemetry.spec.js
@@ -36,7 +36,7 @@ export default async function telemetryBrowserSuite(fetchMock, t) {
fetchMock.getOnce(baseUrls.sdk + '/splitChanges?s=1.1&since=-1', 500);
fetchMock.getOnce(baseUrls.sdk + '/splitChanges?s=1.1&since=-1', { status: 200, body: splitChangesMock1 });
fetchMock.getOnce(baseUrls.sdk + '/mySegments/user-key', 500);
- fetchMock.getOnce(baseUrls.sdk + '/mySegments/user-key', { status: 200, body: { 'mySegments': [ 'one_segment'] } });
+ fetchMock.getOnce(baseUrls.sdk + '/mySegments/user-key', { status: 200, body: { 'mySegments': ['one_segment'] } });

// We need to handle all requests properly
fetchMock.postOnce(baseUrls.events + '/testImpressions/bulk', 200);
@@ -76,7 +76,7 @@ export default async function telemetryBrowserSuite(fetchMock, t) {

// @TODO check if iDe value is correct
assert.deepEqual(data, {
- mE: {}, hE: { sp: { 500: 1 }, ms: { 500: 1 } }, tR: 0, aR: 0, iQ: 4, iDe: 1, iDr: 0, spC: 31, seC: 1, skC: 1, eQ: 1, eD: 0, sE: [], t: [], ufs: { sp: 0, ms: 0 }
+ mE: {}, hE: { sp: { 500: 1 }, ms: { 500: 1 } }, tR: 0, aR: 0, iQ: 4, iDe: 1, iDr: 0, spC: 32, seC: 1, skC: 1, eQ: 1, eD: 0, sE: [], t: [], ufs: {}
}, 'metrics/usage JSON payload should be the expected');

finish.next();
@@ -96,7 +96,7 @@ export default async function telemetryBrowserSuite(fetchMock, t) {
// @TODO check if iDe value is correct
assert.deepEqual(data, {
mL: {}, mE: {}, hE: {}, hL: {}, // errors and latencies were popped
- tR: 0, aR: 0, iQ: 4, iDe: 1, iDr: 0, spC: 31, seC: 1, skC: 1, eQ: 1, eD: 0, sE: [], t: [], ufs: { sp: 0, ms: 0 }
+ tR: 0, aR: 0, iQ: 4, iDe: 1, iDr: 0, spC: 32, seC: 1, skC: 1, eQ: 1, eD: 0, sE: [], t: [], ufs: {}
}, '2nd metrics/usage JSON payload should be the expected');
return 200;
});
@@ -108,7 +108,7 @@ export default async function telemetryBrowserSuite(fetchMock, t) {
delete data.tR; // delete to validate other properties

assert.deepEqual(data, {
- oM: 0, st: 'memory', aF: 1, rF: 0, sE: false,
+ oM: 0, st: 'memory', aF: 1, rF: 0, sE: false, lE: false,
rR: { sp: 99999, ms: 60, im: 300, ev: 60, te: 1 } /* override featuresRefreshRate */,
uO: { s: true, e: true, a: false, st: false, t: true } /* override sdk, events and telemetry URLs */,
iQ: 30000, eQ: 500, iM: 0, iL: false, hP: false, nR: 1 /* 1 non ready usage */, t: [], i: [], uC: 2 /* Default GRANTED */,
@@ -188,7 +188,7 @@ export default async function telemetryBrowserSuite(fetchMock, t) {
const splitFilters = [{ type: 'bySet', values: ['a', '_b', 'a', 'a', 'c', 'd', '_d'] }];

fetchMock.get(baseUrls.sdk + '/mySegments/nicolas%40split.io', { status: 200, body: { 'mySegments': [] } });
fetchMock.getOnce(baseUrls.sdk + '/splitChanges?s=1.1&since=-1&sets=a,c,d', { status: 200, body: { splits: [], since: 1457552620999, till: 1457552620999 } });
fetchMock.getOnce(baseUrls.sdk + '/splitChanges?s=1.1&since=-1&sets=a,c,d', { status: 200, body: { splits: [], since: 1457552620999, till: 1457552620999 } });
fetchMock.postOnce(baseUrls.telemetry + '/v1/metrics/config', (url, opts) => {
const data = JSON.parse(opts.body);

@@ -202,25 +202,25 @@ export default async function telemetryBrowserSuite(fetchMock, t) {
fetchMock.postOnce(baseUrls.telemetry + '/v1/metrics/usage', (url, opts) => {
const data = JSON.parse(opts.body);

- assert.deepEqual(data.mL.tf, [1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], 'Latencies stats');
- assert.deepEqual(data.mL.tfs, [1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], 'Latencies stats');
- assert.deepEqual(data.mL.tcf, [1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], 'Latencies stats');
- assert.deepEqual(data.mL.tcfs, [1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], 'Latencies stats');
+ assert.deepEqual(data.mL.tf, [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'Latencies stats');
+ assert.deepEqual(data.mL.tfs, [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'Latencies stats');
+ assert.deepEqual(data.mL.tcf, [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'Latencies stats');
+ assert.deepEqual(data.mL.tcfs, [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'Latencies stats');

factory.client().destroy().then(() => {
assert.end();
});

return 200;
});
fetchMock.postOnce(baseUrls.telemetry + '/v1/metrics/usage', 200);
fetchMock.postOnce(baseUrls.telemetry + '/v1/metrics/usage', 200);

- factory = SplitFactoryForTest({...baseConfig, sync: {splitFilters}});
+ factory = SplitFactoryForTest({ ...baseConfig, sync: { splitFilters } });
const client = factory.client();
- assert.deepEqual(client.getTreatmentsByFlagSet('a'),[]);
- assert.deepEqual(client.getTreatmentsByFlagSets(['a']),[]);
- assert.deepEqual(client.getTreatmentsWithConfigByFlagSet('a'),[]);
- assert.deepEqual(client.getTreatmentsWithConfigByFlagSets(['a']),[]);
+ assert.deepEqual(client.getTreatmentsByFlagSet('a'), []);
+ assert.deepEqual(client.getTreatmentsByFlagSets(['a']), []);
+ assert.deepEqual(client.getTreatmentsWithConfigByFlagSet('a'), []);
+ assert.deepEqual(client.getTreatmentsWithConfigByFlagSets(['a']), []);

}, 'SDK with sets configured has sets information in config POST and evaluation by sets telemetry in stats POST');
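The mocked query `sets=a,c,d` above implies the SDK sanitizes the configured `bySet` values `['a', '_b', 'a', 'a', 'c', 'd', '_d']`: invalid names are dropped, duplicates removed, and the remainder sorted. A sketch of that sanitization (the exact validation regex is an assumption; actual rules may differ):

```javascript
// Assumed rule: flag set names start with a lowercase letter or digit and may
// contain lowercase letters, digits, and underscores, up to 50 chars total.
function sanitizeFlagSets(values) {
  const valid = values.filter(name => /^[a-z0-9][_a-z0-9]{0,49}$/.test(name));
  return [...new Set(valid)].sort(); // dedupe, then sort for a stable query
}

console.log(sanitizeFlagSets(['a', '_b', 'a', 'a', 'c', 'd', '_d']).join(',')); // a,c,d
```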

70 changes: 70 additions & 0 deletions src/__tests__/mocks/splitchanges.since.-1.json
@@ -1,5 +1,75 @@
{
"splits": [
{
"orgId": null,
"environment": null,
"trafficTypeId": null,
"trafficTypeName": null,
"name": "in_large_segment",
"seed": -1984784937,
"status": "ACTIVE",
"killed": false,
"defaultTreatment": "no",
"conditions": [
{
"matcherGroup": {
"combiner": "AND",
"matchers": [
{
"keySelector": {
"trafficType": "user",
"attribute": null
},
"matcherType": "IN_LARGE_SEGMENT",
"negate": false,
"userDefinedSegmentMatcherData": {
"segmentName": "harnessians"
},
"whitelistMatcherData": null,
"unaryNumericMatcherData": null,
"betweenMatcherData": null,
"unaryStringMatcherData": null
}
]
},
"partitions": [
{
"treatment": "yes",
"size": 100
}
]
},
{
"matcherGroup": {
"combiner": "AND",
"matchers": [
{
"keySelector": {
"trafficType": "user",
"attribute": null
},
"matcherType": "IN_LARGE_SEGMENT",
"negate": false,
"userDefinedSegmentMatcherData": {
"segmentName": "splitters"
},
"whitelistMatcherData": null,
"unaryNumericMatcherData": null,
"betweenMatcherData": null,
"unaryStringMatcherData": null
}
]
},
"partitions": [
{
"treatment": "yes",
"size": 100
}
]
}
],
"configurations": {}
},
{
"orgId": null,
"environment": null,
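The mock JSON above defines a feature flag whose conditions use `IN_LARGE_SEGMENT` matchers against the segments `harnessians` and `splitters`, with membership served by the `/myLargeSegments` endpoint. A hedged sketch of how a client could evaluate one such matcher against the cached response (function and variable names are illustrative, not the SDK's):

```javascript
// Evaluate an IN_LARGE_SEGMENT matcher: true when the key's cached large
// segment list contains the target segment, inverted when `negate` is set.
function evaluateInLargeSegmentMatcher(matcher, myLargeSegments) {
  const segmentName = matcher.userDefinedSegmentMatcherData.segmentName;
  const matches = myLargeSegments.includes(segmentName);
  return matcher.negate ? !matches : matches;
}

const matcher = {
  matcherType: 'IN_LARGE_SEGMENT',
  negate: false,
  userDefinedSegmentMatcherData: { segmentName: 'harnessians' }
};

console.log(evaluateInLargeSegmentMatcher(matcher, ['harnessians'])); // true
console.log(evaluateInLargeSegmentMatcher(matcher, []));              // false
```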
@@ -35,9 +35,9 @@ import { settingsFactory } from '../../settings';
const key = 'nicolas@split.io';

const baseUrls = {
- sdk: 'https://sdk.push-fallbacking/api',
- events: 'https://events.push-fallbacking/api',
- auth: 'https://auth.push-fallbacking/api'
+ sdk: 'https://sdk.push-fallback/api',
+ events: 'https://events.push-fallback/api',
+ auth: 'https://auth.push-fallback/api'
};
const config = {
core: {
@@ -96,7 +96,7 @@ const MILLIS_DESTROY = MILLIS_STREAMING_DISABLED_CONTROL + settings.scheduler.fe
* 2.1 secs: periodic fetch due to polling (/segmentChanges/*)
* 2.1 secs: destroy client
*/
- export function testFallbacking(fetchMock, assert) {
+ export function testFallback(fetchMock, assert) {
assert.plan(17);
fetchMock.reset();
__setEventSource(EventSourceMock);
2 changes: 1 addition & 1 deletion src/__tests__/nodeSuites/push-refresh-token.spec.js
@@ -57,7 +57,7 @@ export function testRefreshToken(fetchMock, assert) {
sseCount++;
switch (sseCount) {
case 1:
- assert.true(nearlyEqual(Date.now() - start, 0), 'first connection is created inmediatelly');
+ assert.true(nearlyEqual(Date.now() - start, 0), 'first connection is created immediately');
break;
case 2:
assert.true(nearlyEqual(Date.now() - start, MILLIS_REFRESH_TOKEN + MILLIS_CONNDELAY), 'second connection is created with a delay');
4 changes: 2 additions & 2 deletions src/__tests__/nodeSuites/telemetry.spec.js
@@ -66,7 +66,7 @@ export default async function telemetryNodejsSuite(key, fetchMock, assert) {

// @TODO check if iDe value is correct
assert.deepEqual(data, {
- mE: {}, hE: { sp: { 500: 1 } }, tR: 0, aR: 0, iQ: 4, iDe: 1, iDr: 0, spC: 31, seC: 3, skC: 3, eQ: 1, eD: 0, sE: [], t: [], ufs: { sp: 0, ms: 0 }
+ mE: {}, hE: { sp: { 500: 1 } }, tR: 0, aR: 0, iQ: 4, iDe: 1, iDr: 0, spC: 32, seC: 3, skC: 3, eQ: 1, eD: 0, sE: [], t: [], ufs: {}
}, 'metrics/usage JSON payload should be the expected');

finish.next();
@@ -85,7 +85,7 @@ export default async function telemetryNodejsSuite(key, fetchMock, assert) {
// @TODO check if iDe value is correct
assert.deepEqual(data, {
mL: {}, mE: {}, hE: {}, hL: {}, // errors and latencies were popped
- tR: 0, aR: 0, iQ: 4, iDe: 1, iDr: 0, spC: 31, seC: 3, skC: 3, eQ: 1, eD: 0, sE: [], t: [], ufs: { sp: 0, ms: 0 }
+ tR: 0, aR: 0, iQ: 4, iDe: 1, iDr: 0, spC: 32, seC: 3, skC: 3, eQ: 1, eD: 0, sE: [], t: [], ufs: {}
}, '2nd metrics/usage JSON payload should be the expected');
return 200;
});
4 changes: 2 additions & 2 deletions src/__tests__/push/browser.spec.js
@@ -4,7 +4,7 @@ import { testAuthWithPushDisabled, testAuthWith401, testNoEventSource, testSSEWi
import { testPushRetriesDueToAuthErrors, testPushRetriesDueToSseErrors, testSdkDestroyWhileAuthRetries, testSdkDestroyWhileAuthSuccess, testSdkDestroyWhileConnDelay } from '../browserSuites/push-initialization-retries.spec';
import { testSynchronization } from '../browserSuites/push-synchronization.spec';
import { testSynchronizationRetries } from '../browserSuites/push-synchronization-retries.spec';
- import { testFallbacking } from '../browserSuites/push-fallbacking.spec';
+ import { testFallback } from '../browserSuites/push-fallback.spec';
import { testRefreshToken } from '../browserSuites/push-refresh-token.spec';
import { testSplitKillOnReadyFromCache } from '../browserSuites/push-corner-cases.spec';
import { testFlagSets } from '../browserSuites/push-flag-sets.spec';
@@ -32,7 +32,7 @@ tape('## Browser JS - E2E CI Tests for PUSH ##', function (assert) {
assert.test('E2E / PUSH synchronization: happy paths', testSynchronization.bind(null, fetchMock));
assert.test('E2E / PUSH synchronization: retries', testSynchronizationRetries.bind(null, fetchMock));

- assert.test('E2E / PUSH fallbacking, CONTROL, OCCUPANCY and STREAMING_RESET messages', testFallbacking.bind(null, fetchMock));
+ assert.test('E2E / PUSH fallback, CONTROL, OCCUPANCY and STREAMING_RESET messages', testFallback.bind(null, fetchMock));

assert.test('E2E / PUSH refresh token and connection delay', testRefreshToken.bind(null, fetchMock));

Expand Down