
Add benchmark spec tests #1048

Merged
merged 6 commits into from Jul 27, 2018
Changes from 3 commits
17 changes: 15 additions & 2 deletions packages/firestore/test/unit/specs/describe_spec.ts
@@ -31,7 +31,9 @@ const NO_WEB_TAG = 'no-web';
const NO_ANDROID_TAG = 'no-android';
const NO_IOS_TAG = 'no-ios';
const NO_LRU = 'no-lru';
const BENCHMARK_TAG = 'benchmark';
const KNOWN_TAGS = [
BENCHMARK_TAG,
EXCLUSIVE_TAG,
PERSISTENCE_TAG,
NO_WEB_TAG,
@@ -40,6 +42,9 @@ const KNOWN_TAGS = [
NO_LRU
];

// TODO(mrschmidt): Make this configurable with mocha options.
const RUN_BENCHMARK_TESTS = false;

const WEB_SPEC_TEST_FILTER = (tags: string[]) =>
tags.indexOf(NO_WEB_TAG) === -1;

@@ -127,6 +132,8 @@ export function specTest(
runner = it.only;
} else if (!WEB_SPEC_TEST_FILTER(tags)) {
runner = it.skip;
} else if (tags.indexOf(BENCHMARK_TAG) >= 0 && !RUN_BENCHMARK_TESTS) {
Contributor:

I think this logic should just go in WEB_SPEC_TEST_FILTER... or else we should get rid of WEB_SPEC_TEST_FILTER and inline the NO_WEB_TAG check here. There's no reason to have both NO_WEB_TAG and WEB_SPEC_TEST_FILTER defined as constants if they're used for the same thing.

Contributor Author:

Since we special-case a bunch of tags in here already, it probably makes more sense to remove the web filter. Consider it gone.
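
A minimal standalone sketch of the suggestion above (illustrative only, not the PR's final code): drop WEB_SPEC_TEST_FILTER and special-case NO_WEB_TAG inline, next to the benchmark check. It reuses the constants from the diff above, assumes mocha's global `it` is in scope, and `pickRunner` is a hypothetical helper introduced only for this sketch.

// Hypothetical helper (not in the PR): resolve the mocha runner purely from
// the spec's tags, so WEB_SPEC_TEST_FILTER is no longer needed as a constant.
const NO_WEB_TAG = 'no-web';
const BENCHMARK_TAG = 'benchmark';
const RUN_BENCHMARK_TESTS = false;

function pickRunner(tags: string[]): typeof it | typeof it.skip {
  if (tags.indexOf(NO_WEB_TAG) >= 0) {
    return it.skip; // spec is excluded on the web client
  }
  if (tags.indexOf(BENCHMARK_TAG) >= 0 && !RUN_BENCHMARK_TESTS) {
    return it.skip; // benchmarks are opt-in
  }
  return it;
}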

runner = it.skip;
} else if (usePersistence && tags.indexOf('no-lru') !== -1) {
// spec should have a comment explaining why it is being skipped.
runner = it.skip;
@@ -135,8 +142,14 @@
}
const mode = usePersistence ? '(Persistence)' : '(Memory)';
const fullName = `${mode} ${name}`;
runner(fullName, () => {
return spec.runAsTest(fullName, usePersistence);
runner(fullName, async () => {
const start = Date.now();
await spec.runAsTest(fullName, usePersistence);
const end = Date.now();
if (tags.indexOf(BENCHMARK_TAG) >= 0) {
// tslint:disable-next-line:no-console
console.log(`Runtime: ${end - start} ms.`);
}
Contributor:

FWIW: if you wanted to get fancy, you could play with integrating console.time() / console.profile() to get higher-precision timings and to automatically collect profiles. They're non-standard, though, so in theory it could break our tests on some browsers.

Contributor Author:

I am thinking that high-precision timers run in a headless Chrome instance might give us more of an illusion of precision than actual accuracy.
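
For reference, a minimal sketch of the console.time()-based variant discussed above (not adopted in this PR; `runTimed` is a hypothetical wrapper, and timer precision/availability varies by browser):

// Hypothetical wrapper (not in the PR): time a spec run with
// console.time()/console.timeEnd() instead of Date.now(). The label passed to
// console.timeEnd() must match the label passed to console.time().
async function runTimed(label: string, run: () => Promise<void>): Promise<void> {
  console.time(label);
  try {
    await run();
  } finally {
    console.timeEnd(label); // logs e.g. "(Memory) Insert a new document: 487ms"
  }
}

// Possible usage with the runner from the diff above:
// runner(fullName, () => runTimed(fullName, () =>
//   spec.runAsTest(fullName, usePersistence)));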

});
}
} else {
216 changes: 216 additions & 0 deletions packages/firestore/test/unit/specs/perf_spec.test.ts
@@ -0,0 +1,216 @@
/**
* Copyright 2018 Google Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

import { Query } from '../../../src/core/query';
import { doc, orderBy, path } from '../../util/helpers';

import { describeSpec, specTest } from './describe_spec';
import { spec } from './spec_builder';

const STEP_COUNT = 10;
Contributor:

comment this?

Contributor Author:

Done


describeSpec('Performance Tests:', ['benchmark'], () => {
specTest('Insert a new document', [], () => {
Contributor:

Could you include ${STEP_COUNT} in the description of the tests to accurately reflect what's being benchmarked? With the results in your PR description I was disappointed by how slow everything was, but knowing that you're doing 10 reps makes me feel a fair amount better.

Similarly, the results (e.g. when copy/pasted into an email or PR description) would be more meaningful if the descriptions were more explicit / precise as to what's being measured. E.g.: Write document and handle server acknowledgement [repeat ${STEP_COUNT} times]

Contributor Author:

I added it to the test name via the description in describe.
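
As an illustration, surfacing the repetition count in the suite description could look roughly like this (hypothetical wording; the merged commit may phrase it differently):

// Hypothetical: embed STEP_COUNT in the suite name so benchmark output is
// self-describing when copied into a PR description or email.
describeSpec(
  `Performance Tests [${STEP_COUNT} iterations]:`,
  ['benchmark'],
  () => {
    // specTest(...) definitions as below
  }
);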

let steps = spec().withGCEnabled(false);
for (let i = 0; i < STEP_COUNT; ++i) {
steps = steps.userSets(`collection/${i}`, { doc: i }).writeAcks(i);
}
return steps;
});

specTest('Insert a new document and wait for snapshot', [], () => {
Contributor:

More explicit: Start a listen, write a document, ack write, handle watch snapshot, unlisten [repeat ${STEP_COUNT} times]

Contributor Author:

Updated this and most other test titles.

let currentVersion = 1;
let steps = spec().withGCEnabled(false);

for (let i = 0; i < STEP_COUNT; ++i) {
const query = Query.atPath(path(`collection/${i}`));
const docLocal = doc(
`collection/${i}`,
0,
{ doc: i },
{ hasLocalMutations: true }
);
const docRemote = doc(`collection/${i}`, ++currentVersion, { doc: i });

steps = steps
.userListens(query)
.userSets(`collection/${i}`, { doc: i })
.expectEvents(query, {
added: [docLocal],
fromCache: true,
hasPendingWrites: true
})
.writeAcks(++currentVersion)
.watchAcksFull(query, ++currentVersion, docRemote)
.expectEvents(query, { metadata: [docRemote] })
.userUnlistens(query)
.watchRemoves(query);
}
return steps;
});

specTest('Watch has cached mutations', [], () => {
const cachedDocumentCount = 100;

const query = Query.atPath(path(`collection`)).addOrderBy(orderBy('v'));

let steps = spec().withGCEnabled(false);

const docs = [];

for (let i = 0; i < cachedDocumentCount; ++i) {
Contributor:

It is a little tempting to somehow introduce the equivalent of for-loops into the spec test schema to avoid generating overly large / verbose JSON files with stuff like this and the STEP_COUNT stuff... But that's probably a rabbit hole best avoided for now. :-)

Contributor Author:

go/changestats

:)

steps.userSets(`collection/${i}`, { v: i });
docs.push(
doc(`collection/${i}`, 0, { v: i }, { hasLocalMutations: true })
);
}

for (let i = 1; i <= STEP_COUNT; ++i) {
steps = steps
.userListens(query)
.expectEvents(query, {
added: docs,
fromCache: true,
hasPendingWrites: true
})
.userUnlistens(query);
}

return steps;
});

specTest('Update a single document', [], () => {
let steps = spec().withGCEnabled(false);
steps = steps.userSets(`collection/doc`, { v: 0 });
for (let i = 1; i <= STEP_COUNT; ++i) {
steps = steps.userPatches(`collection/doc`, { v: i }).writeAcks(i);
}
return steps;
});

specTest('Update a single document and wait for snapshot', [], () => {
const query = Query.atPath(path(`collection/doc`));

let currentVersion = 1;
let steps = spec().withGCEnabled(false);

let docLocal = doc(
`collection/doc`,
0,
{ v: 0 },
{ hasLocalMutations: true }
);
let docRemote = doc(`collection/doc`, ++currentVersion, { v: 0 });
let lastRemoteVersion = currentVersion;

steps = steps
.userListens(query)
.userSets(`collection/doc`, { v: 0 })
.expectEvents(query, {
added: [docLocal],
fromCache: true,
hasPendingWrites: true
})
.writeAcks(++currentVersion)
.watchAcksFull(query, ++currentVersion, docRemote)
.expectEvents(query, { metadata: [docRemote] });

for (let i = 1; i <= STEP_COUNT; ++i) {
docLocal = doc(
`collection/doc`,
lastRemoteVersion,
{ v: i },
{ hasLocalMutations: true }
);
docRemote = doc(`collection/doc`, ++currentVersion, { v: i });
lastRemoteVersion = currentVersion;

steps = steps
.userPatches(`collection/doc`, { v: i })
.expectEvents(query, { modified: [docLocal], hasPendingWrites: true })
.writeAcks(++currentVersion)
.watchSends({ affects: [query] }, docRemote)
.watchSnapshots(++currentVersion)
.expectEvents(query, { metadata: [docRemote] });
}
return steps;
});

specTest('Watch sends 100 documents', [], () => {
const documentsPerStep = 100;

const query = Query.atPath(path(`collection`)).addOrderBy(orderBy('v'));

let currentVersion = 1;
let steps = spec().withGCEnabled(false);

steps = steps
.userListens(query)
.watchAcksFull(query, currentVersion)
.expectEvents(query, {});

for (let i = 1; i <= STEP_COUNT; ++i) {
const docs = [];

for (let j = 0; j < documentsPerStep; ++j) {
docs.push(
doc(`collection/${j}`, ++currentVersion, { v: currentVersion })
);
}

const changeType = i === 1 ? 'added' : 'modified';

steps = steps
.watchSends({ affects: [query] }, ...docs)
.watchSnapshots(++currentVersion)
.expectEvents(query, { [changeType]: docs });
}

return steps;
});

specTest('Watch has cached results', [], () => {
const documentsPerStep = 100;

let currentVersion = 1;
let steps = spec().withGCEnabled(false);

for (let i = 1; i <= STEP_COUNT; ++i) {
const collPath = `collection/${i}/coll`;
const query = Query.atPath(path(collPath)).addOrderBy(orderBy('v'));

const docs = [];
for (let j = 0; j < documentsPerStep; ++j) {
docs.push(doc(`${collPath}/${j}`, ++currentVersion, { v: j }));
}

steps = steps
.userListens(query)
.watchAcksFull(query, ++currentVersion, ...docs)
.expectEvents(query, { added: docs })
.userUnlistens(query)
.watchRemoves(query)
.userListens(query, 'resume-token-' + currentVersion)
.expectEvents(query, { added: docs, fromCache: true })
.watchAcksFull(query, ++currentVersion)
.expectEvents(query, {})
.userUnlistens(query)
.watchRemoves(query);
}

return steps;
});
});