
Implement limbo resolution throttling. #2790


Merged
7 commits merged on Mar 30, 2020
44 changes: 35 additions & 9 deletions packages/firestore/src/core/sync_engine.ts
@@ -73,6 +73,7 @@ import { AsyncQueue } from '../util/async_queue';
import { TransactionRunner } from './transaction_runner';

const LOG_TAG = 'SyncEngine';
const DEFAULT_MAX_CONCURRENT_LIMBO_RESOLUTIONS = 100;

/**
* QueryView contains all of the data that SyncEngine needs to keep track of for
@@ -154,6 +155,9 @@ export class SyncEngine implements RemoteSyncer, SharedClientStateSyncer {
private limboResolutionsByTarget: {
[targetId: number]: LimboResolution;
} = {};
private readonly maxConcurrentLimboResolutions: number = DEFAULT_MAX_CONCURRENT_LIMBO_RESOLUTIONS;
/** The keys of documents whose limbo resolutions are enqueued. */
Contributor:

There are now a number of limbo-related structures in here and it would be worthwhile to document them in relation to one another and give an overview of how they're supposed to fit together.

Also, since things start in the limboListenQueue, I suggest making it structurally first among these. That way, when reading through this list, the structure suggests the flow.

Also, this specific comment doesn't do much to suggest what the queue is actually for. It's worth calling out that these are waiting to start the query. Maybe something like this?

The keys of documents that are in limbo for which we haven't yet started a limbo resolution query.

Contributor Author:

I've added/improved the documentation for limboListenQueue, limboTargetsByKey, and limboResolutionsByTarget. I don't have enough context to provide meaningful documentation for limboDocumentRefs. Since this PR does not touch anything related to limboDocumentRefs I've left it alone. I also moved limboListenQueue in the code to be structurally first.

private limboListenQueue: DocumentKey[] = [];
Contributor:

What's the difference between a "limbo listen" and a "limbo resolution"? Is there any? If not I suggest sticking with one name to reduce the conceptual load here.

Contributor Author:

A limbo "resolution" is the entire process that begins when a document enters the limbo state and ends when the limbo state is resolved (i.e. after getting a response from the server). A "listen" is the thing that the remoteStore does when we call remoteStore.listen(...). I agree, however, that the terminology is confusing and I'll make any changes needed to be consistent.

Contributor:

I'm still concerned about the "listen" in limboListenQueue as a name here. Even your comment doesn't mention "listens" so it's not clear if there's an actual conceptual difference between limbo resolution and a limbo listen or if this is just an accidental inconsistency that wasn't caught in review.

Since you've resolved to distinguish active and enqueued limbo resolutions, why can't this be enqueuedLimboResolutions? That is: instead of focusing on the fact that they're enqueued to start a listener, just know that these are limbo resolutions in the enqueued state without specifying exactly what they're waiting for. This resolves the inconsistency without any effective loss of clarity, because exactly what's next after enqueueing isn't a critical detail in this name.

If you prefer "Queue" in the name, "deferred" might be a better name for the enqueued resolutions because then you can make this deferredLimboResolutionQueue. enqueuedLimboResolutionQueue looks strange.

Contributor Author:

Done. I've renamed the variable to enqueuedLimboResolutions.

private limboDocumentRefs = new ReferenceSet();
Contributor:

Many of the existing variable names are now ambiguous because it's not clear if they apply to active limbo resolutions, inactive limbo resolutions, or all. I suggest prefixing all these names to clearly indicate which kind each is.

Other ideas for ways to clarify this:

  • active vs pending (though "pending" could also possibly be read to mean the resolution for which we're waiting on a result)
  • active vs deferred?

If we document this distinction though I think any pair could be good and an improvement. WDYT?

Contributor Author:

I like the names "enqueued" and "active" for the states of documents that are in limbo. Since a limbo resolution is "active" if and only if it is associated with a target, the names limboTargetsByKey and limboResolutionsByTarget already imply that they apply only to "active" limbo resolutions. Similarly, a limbo resolution is "enqueued" if and only if its document key is in the limboListenQueue. My sense is that adding "active" and "inactive" to these variable names will be confusing because they have no counterpart for the opposite state.

Contributor:

variable names will be confusing because they have no counterpart for the opposite state

That's an interesting perspective but I think the reasoning is kind of subtle: that because only active limbo resolutions can have a target the active-ness is self-explanatory. Would this have been self-evident to you a week ago?

My reasoning was that the prefix helps identify which variables are used in which state, and suggests which elements have to be modified as you transition between states. That is, this would have the effect of grouping these things together more than necessarily differentiating similar structures of the same type that might be used in the different states.

Contributor Author:

Makes sense. I've added "active" and "enqueued" to all relevant variable names throughout this PR.
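
For readers following along, here is a rough sketch of how the renamed limbo-tracking fields and their documentation might look after this thread. The names follow the discussion above; the doc-comment wording is illustrative rather than the final code.

// Sketch only -- assumes the names agreed on in this review thread.

/**
 * The keys of documents that are in limbo but for which we have not yet
 * started a limbo resolution query (they are waiting for an active slot).
 */
private enqueuedLimboResolutions: DocumentKey[] = [];

/**
 * Keeps track of the target ID for each document that is in limbo with an
 * active (started) limbo resolution query.
 */
private activeLimboTargetsByKey = new SortedMap<DocumentKey, TargetId>(
  DocumentKey.comparator
);

/**
 * Keeps track of the information about each active limbo resolution, keyed
 * by the target ID of its limbo resolution query.
 */
private activeLimboResolutionsByTarget: {
  [targetId: number]: LimboResolution;
} = {};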

/** Stores user completion handlers, indexed by User and BatchId. */
private mutationUserCallbacks = {} as {
@@ -174,8 +178,13 @@ export class SyncEngine implements RemoteSyncer, SharedClientStateSyncer {
private remoteStore: RemoteStore,
// PORTING NOTE: Manages state synchronization in multi-tab environments.
private sharedClientState: SharedClientState,
private currentUser: User
) {}
private currentUser: User,
maxConcurrentLimboResolutions?: number
Contributor:

This could be made simpler by just defaulting the argument.

private maxConcurrentLimboResolutions: number = DEFAULT_MAX_CONCURRENT_LIMBO_RESOLUTIONS

Contributor Author:

Done. I also removed the DEFAULT_MAX_CONCURRENT_LIMBO_RESOLUTIONS constant and just inlined the value of 100. IMO this improves readability at no cost to performance or generality.

) {
if (maxConcurrentLimboResolutions) {
this.maxConcurrentLimboResolutions = maxConcurrentLimboResolutions;
}
}
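
For context on the thread above, a minimal sketch of the defaulted-parameter constructor the reviewer suggested, with the default inlined as the author describes (other constructor parameters elided):

// Sketch only; the remaining dependencies are unchanged from the diff above.
constructor(
  // ... localStore, remoteStore, sharedClientState, currentUser ...
  private maxConcurrentLimboResolutions: number = 100
) {}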

// Only used for testing.
get isPrimaryClient(): boolean {
@@ -486,6 +495,7 @@ export class SyncEngine implements RemoteSyncer, SharedClientStateSyncer {
// So go ahead and remove it from bookkeeping.
this.limboTargetsByKey = this.limboTargetsByKey.remove(limboKey);
delete this.limboResolutionsByTarget[targetId];
this.startEnqueuedLimboResolutions();

// TODO(klimt): We really only should do the following on permission
// denied errors, but we don't have the cause code here.
@@ -740,6 +750,7 @@ export class SyncEngine implements RemoteSyncer, SharedClientStateSyncer {
this.remoteStore.unlisten(limboTargetId);
this.limboTargetsByKey = this.limboTargetsByKey.remove(key);
delete this.limboResolutionsByTarget[limboTargetId];
this.startEnqueuedLimboResolutions();
}

private updateTrackedLimbos(
@@ -770,29 +781,44 @@
const key = limboChange.key;
if (!this.limboTargetsByKey.get(key)) {
log.debug(LOG_TAG, 'New document in limbo: ' + key);
this.limboListenQueue.push(key);
this.startEnqueuedLimboResolutions();
}
}

private startEnqueuedLimboResolutions(): void {
Contributor:

This name is off:

  • we tend to use "start" and "stop" for beginning and ending continuous processes, like startWatchStream, after which the watch stream has started until we stop it and
  • this method doesn't necessarily start any resolutions if the number of pending resolutions is at the maximum.

In the RemoteStore there's a fillWritePipeline that has a similar high level behavior. The analogous name would be fillLimboResolutionPipeline, but while it's an improvement I can't say I think this is an obvious winner, probably because we don't explicitly have a pipeline the way the remote store does. The symmetry is appealing though.

Other ideas:

  • drainLimboResolutionQueue is problematic because it suggests waiting for all the queued resolutions.
  • I've seen "pump" used for this kind of operation.
  • fillActiveLimboResolutions? (building on pending vs active, suggested elsewhere)

All things considered I'd probably just go with fillLimboResolutionPipeline for symmetry with RemoteStore more than anything but I'm definitely open to alternatives that avoid "start".

Also, please add comments. The comments above fillWritePipeline are a good example: what the thing does and when it should be used.

Contributor Author:

I will add the requested documentation to the method.

Regarding the naming issue... what about using the verb "trigger" in the method name? The "pipeline" idea makes sense too, but the method in question is not technically "filling" the pipeline and I find that verb to be misleading. The "pump" verb is sounding good to me though. According to https://en.wikipedia.org/wiki/Event_loop the verb "pump" has a meaning in an event loop/message queue context, which has many analogues to the limbo resolution queue. That article also suggests the verb "dispatch".

So here are some other names to consider:

  • triggerLimboResolutions()
  • pumpLimboResolutionQueue() (my preference)
  • dispatchLimboResolutionListens()

Regarding the limbo resolution state names, I'm leaning towards the terms "enqueued" and "active". I prefer "enqueued" over "pending" because, IMO, it is a more concrete term than "pending" and therefore requires less explanation in documentation.

Contributor:

What you implemented was pumpLimboResolutionListenQueue. I offer that you should construct this as "pump" plus the name of the thing. If you go with enqueuedLimboResolutions, above, then match with pumpEnqueuedLimboResolutions.

I'm also fine with "trigger" if that reads better to you.

Contributor Author:

Done. I've renamed the method to pumpEnqueuedLimboResolutions().

while (
this.limboListenQueue.length > 0 &&
this.limboTargetsByKey.size < this.maxConcurrentLimboResolutions
) {
const key = this.limboListenQueue.shift()!;
const limboTargetId = this.limboTargetIdGenerator.next();
const query = Query.atPath(key.path);
this.limboResolutionsByTarget[limboTargetId] = new LimboResolution(key);
this.limboTargetsByKey = this.limboTargetsByKey.insert(
key,
limboTargetId
);
this.remoteStore.listen(
new TargetData(
query.toTarget(),
Query.atPath(key.path).toTarget(),
limboTargetId,
TargetPurpose.LimboResolution,
ListenSequence.INVALID
)
);
this.limboTargetsByKey = this.limboTargetsByKey.insert(
key,
limboTargetId
);
}
}
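
Pulling the naming and documentation threads together, a hedged sketch of how the renamed pumpEnqueuedLimboResolutions() method might read once the requested doc comment is added (the comment wording here is illustrative, modeled on fillWritePipeline, and the field names follow the earlier thread):

/**
 * Starts listens for documents in limbo that are enqueued for resolution,
 * subject to the maximum number of concurrent limbo resolutions.
 *
 * Without bounding the number of concurrent resolutions, the server can fail
 * with resource-exhausted errors, which can lead to pathological client
 * behavior as seen in firebase/firebase-js-sdk#2683.
 */
private pumpEnqueuedLimboResolutions(): void {
  while (
    this.enqueuedLimboResolutions.length > 0 &&
    this.activeLimboTargetsByKey.size < this.maxConcurrentLimboResolutions
  ) {
    const key = this.enqueuedLimboResolutions.shift()!;
    const limboTargetId = this.limboTargetIdGenerator.next();
    this.activeLimboResolutionsByTarget[limboTargetId] = new LimboResolution(key);
    this.remoteStore.listen(
      new TargetData(
        Query.atPath(key.path).toTarget(),
        limboTargetId,
        TargetPurpose.LimboResolution,
        ListenSequence.INVALID
      )
    );
    this.activeLimboTargetsByKey = this.activeLimboTargetsByKey.insert(
      key,
      limboTargetId
    );
  }
}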

// Visible for testing
currentLimboDocs(): SortedMap<DocumentKey, TargetId> {
activeLimboDocumentResolutions(): SortedMap<DocumentKey, TargetId> {
return this.limboTargetsByKey;
}

// Visible for testing
documentsEnqueuedForLimboResolution(): DocumentKey[] {
return this.limboListenQueue;
}

private async emitNewSnapsAndNotifyLocalStore(
changes: MaybeDocumentMap,
remoteEvent?: RemoteEvent
263 changes: 263 additions & 0 deletions packages/firestore/test/unit/specs/limbo_spec.test.ts
@@ -639,4 +639,267 @@ describeSpec('Limbo Documents:', [], () => {
);
}
);

specTest(
'Limbo resolution throttling with all results at once from watch',
['no-ios'],
() => {
const query = Query.atPath(path('collection'));
const doc1 = doc('collection/a', 1000, { key: 'a' });
const doc2 = doc('collection/b', 1000, { key: 'b' });
const doc3 = doc('collection/c', 1000, { key: 'c' });
const limboQuery1 = Query.atPath(doc1.key.path);
const limboQuery2 = Query.atPath(doc2.key.path);
const limboQuery3 = Query.atPath(doc3.key.path);

return (
Contributor:

Please add a comment indicating what user behavior we're simulating, namely that while we're listening all three documents are deleted, but watch resets us rather than sending the removals. It's also probably worth noting that in reality this scenario isn't all that likely, but it's a convenient way to force documents into limbo.

Here and throughout.

Contributor Author:

Done.

spec()
.withMaxConcurrentLimboResolutions(2)
.userListens(query)
.watchAcksFull(query, 1000, doc1, doc2, doc3)
.expectEvents(query, {
added: [doc1, doc2, doc3]
})
.watchResets(query)
.watchSends({ affects: [query] })
.watchCurrents(query, 'resume-token-2000')
.watchSnapshots(2000)
.expectLimboDocsEx({
activeKeys: [doc1.key, doc2.key],
inactiveKeys: [doc3.key]
})
// Limbo document causes query to be "inconsistent"
.expectEvents(query, { fromCache: true })
.watchAcks(limboQuery1)
Contributor:

Since these test cases differ in the simulated watch behavior, it's worth calling out specifically what we're simulating. In this case, watch completes all the limbo resolution queries in a single snapshot.

Contributor Author:

Done.

.watchAcks(limboQuery2)
.watchCurrents(limboQuery1, 'resume-token-2001')
.watchCurrents(limboQuery2, 'resume-token-2001')
.watchSnapshots(2001)
.expectLimboDocs(doc3.key)
Contributor:

Is it possible to reorder this with the expectEvents line below? I suggest this because it would more clearly suggest the binding between doc3 going into limbo and the watchAcks line below. If possible, please apply this throughout.

Contributor Author:

Yes, this is possible. Done.

.expectEvents(query, {
removed: [doc1, doc2],
fromCache: true
})
.watchAcks(limboQuery3)
.watchCurrents(limboQuery3, 'resume-token-2001')
.watchSnapshots(2001)
.expectLimboDocs()
.expectEvents(query, {
removed: [doc3],
fromCache: false
})
);
}
);

specTest(
'Limbo resolution throttling with results one at a time from watch',
['no-ios'],
() => {
const query = Query.atPath(path('collection'));
const doc1 = doc('collection/a', 1000, { key: 'a' });
const doc2 = doc('collection/b', 1000, { key: 'b' });
const doc3 = doc('collection/c', 1000, { key: 'c' });
const limboQuery1 = Query.atPath(doc1.key.path);
const limboQuery2 = Query.atPath(doc2.key.path);
const limboQuery3 = Query.atPath(doc3.key.path);

return (
spec()
.withMaxConcurrentLimboResolutions(2)
.userListens(query)
.watchAcksFull(query, 1000, doc1, doc2, doc3)
.expectEvents(query, {
added: [doc1, doc2, doc3]
})
.watchResets(query)
.watchSends({ affects: [query] })
.watchCurrents(query, 'resume-token-2000')
.watchSnapshots(2000)
.expectLimboDocsEx({
activeKeys: [doc1.key, doc2.key],
inactiveKeys: [doc3.key]
})
// Limbo document causes query to be "inconsistent"
.expectEvents(query, { fromCache: true })
.watchAcks(limboQuery1)
.watchCurrents(limboQuery1, 'resume-token-2001')
.watchSnapshots(2001)
.expectLimboDocs(doc2.key, doc3.key)
.expectEvents(query, {
removed: [doc1],
fromCache: true
})
.watchAcks(limboQuery2)
.watchCurrents(limboQuery2, 'resume-token-2001')
.watchSnapshots(2001)
Contributor:

Snapshot versions should be strictly increasing. This should be > 2001 and the one below should be even greater.

Contributor Author:

Fixed.

.expectLimboDocs(doc3.key)
.expectEvents(query, {
removed: [doc2],
fromCache: true
})
.watchAcks(limboQuery3)
.watchCurrents(limboQuery3, 'resume-token-2001')
.watchSnapshots(2001)
.expectLimboDocs()
.expectEvents(query, {
removed: [doc3],
fromCache: false
})
);
}
);

specTest(
'Limbo resolution throttling when a limbo listen is rejected.',
['no-ios'],
() => {
const query = Query.atPath(path('collection'));
const doc1 = doc('collection/a', 1000, { key: 'a' });
const doc2 = doc('collection/b', 1000, { key: 'b' });
const limboQuery1 = Query.atPath(doc1.key.path);
const limboQuery2 = Query.atPath(doc2.key.path);

return (
spec()
.withMaxConcurrentLimboResolutions(1)
.userListens(query)
.watchAcksFull(query, 1000, doc1, doc2)
.expectEvents(query, { added: [doc1, doc2] })
// Watch tells us that the query results have changed to the empty
// set, which makes our local cache inconsistent with the remote
// state, causing a fromCache=true event to be raised.
.watchResets(query)
.watchSends({ affects: [query] })
.watchCurrents(query, 'resume-token-1001')
.watchSnapshots(1001)
// Both doc1 and doc2 are in limbo, but the maximum number of concurrent
// limbo resolutions was set to 1, which causes the resolution for doc1
// to be started and doc2 to be enqueued.
.expectLimboDocsEx({
activeKeys: [doc1.key],
inactiveKeys: [doc2.key]
})
// Limbo document causes query to be "inconsistent"
.expectEvents(query, { fromCache: true })
.watchRemoves(
limboQuery1,
new RpcError(Code.RESOURCE_EXHAUSTED, 'Resource exhausted')
)
// When a limbo resolution query is rejected, we assume the document was
// deleted. And now that doc1's limbo resolution is complete, the limbo
// resolution for doc2 can start.
.expectLimboDocs(doc2.key)
.expectEvents(query, { removed: [doc1], fromCache: true })
// Reject the listen for the second limbo resolution as well, in order
// to exercise the code path of a rejected limbo resolution without
// any enqueued limbo resolutions.
.watchRemoves(
limboQuery2,
new RpcError(Code.RESOURCE_EXHAUSTED, 'Resource exhausted')
)
.expectLimboDocs()
.expectEvents(query, { removed: [doc2] })
);
}
);

specTest(
// This test exercises the steps that resulted in unbounded reads that
// motivated throttling:
// https://github.com/firebase/firebase-js-sdk/issues/2683
'Limbo resolution throttling with existence filter mismatch',
['no-ios'],
() => {
const query = Query.atPath(path('collection'));
const docA1 = doc('collection/a1', 1000, { key: 'a1' });
const docA2 = doc('collection/a2', 1000, { key: 'a2' });
const docA3 = doc('collection/a3', 1000, { key: 'a3' });
const docB1 = doc('collection/b1', 1000, { key: 'b1' });
const docB2 = doc('collection/b2', 1000, { key: 'b2' });
const docB3 = doc('collection/b3', 1000, { key: 'b3' });
const docA1Query = Query.atPath(docA1.key.path);
const docA2Query = Query.atPath(docA2.key.path);
const docA3Query = Query.atPath(docA3.key.path);

return (
spec()
.withMaxConcurrentLimboResolutions(2)
.userListens(query)
.watchAcks(query)
.watchSends({ affects: [query] }, docA1, docA2, docA3)
.watchCurrents(query, 'resume-token-1000')
.watchSnapshots(1000)
.expectEvents(query, { added: [docA1, docA2, docA3] })
// At this point the query is consistent and matches 3 documents:
Contributor:

The "matches 3 documents" clause listing them out duplicates the expectEvents code above. I think you can probably drop the "At this point ..." too. This could just be "Now pretend the client loses its network connection."

Contributor Author:

Done.

// docA1, docA2, and docA3. Then, network connectivity is lost.
.disableNetwork()
// The query listener is notified that the results are being served
// from cache since without network connection there is no way to know
// if we are in sync with the server.
.expectEvents(query, { fromCache: true })
.enableNetwork()
// The network connection has come back so the client re-registers
// the listener, providing the resume token from before. Watch will
// then send updates that occurred since the timestamp encoded in the
Contributor:

It's possible to shorten "timestamp encoded in the resume token" to just "resume token". We should treat the resume token as an opaque marker of time.

Contributor Author:

Done.

// resume token.
.restoreListen(query, 'resume-token-1000')
.watchAcks(query)
// Watch now tells us that the query results on the server are docB1,
Contributor (@wilhuff, Mar 25, 2020):

While this comment describes what is happening it's not telling us why it's happening, and the latter is more important to understanding the intent of the test. A better comment would be something like:

While this client was disconnected, another client deleted all the docAs and replaced them with docBs. If Watch has to re-run the underlying query when this client re-listens, Watch won't be able to tell that docAs were deleted and will only send us existing documents that changed since the resume token. This will cause it to just send the docBs with an existence filter with a count of 3.

Contributor Author:

Done. I took your suggestion verbatim.

// docB2, and docB3, along with an existence filter to state that the
// total number of documents that match the query is 3.
.watchSends({ affects: [query] }, docB1, docB2, docB3)
.watchFilters([query], docB1.key, docB2.key, docB3.key)
.watchSnapshots(1001)
// The query listener is now inconsistent because it had thought that
// the set of matching documents was docA1, docA2, and docA3 but the
// server just told us that the set of matching documents is
// completely different: docB1, docB2, and docB3. So the query
// notifies the user that these documents were added, but still says
// fromCache=true because we need to resolve docA1, docA2, and docA3.
.expectEvents(query, {
added: [docB1, docB2, docB3],
fromCache: true
})
// After the existence filter mismatch the client re-listens without
Contributor:

Using "after" here makes this seem like the existence filter mismatch is happening elsewhere, but it's really happening here right? I suggest rephrasing this to something that more actively explains why this is going on:

The view now contains the docAs and the docBs (6 documents), but the existence filter indicated only three should match. This causes the client to re-listen without a resume token.

Contributor Author:

Done. I took your suggestion verbatim.

// a resume token.
.expectActiveTargets({ query, resumeToken: '' })
// When the existence filter mismatch was detected, we removed then
// re-added the target; therefore, watch acknowledges the removal.
.watchRemoves(query)
// Watch has re-run the query and returns the same result set: docB1,
// docB2, and docB3. This puts docA1, docA2, and docA3 into limbo,
// which the client then issues queries to resolve. Since the maximum
// number of concurrent limbo resolutions was set to 2, only the first
// two limbo resolutions are started, with the 3rd being enqueued.
.watchAcksFull(query, 1002, docB1, docB2, docB3)
.expectLimboDocsEx({
activeKeys: [docA1.key, docA2.key],
inactiveKeys: [docA3.key]
})
.watchAcks(docA1Query)
.watchAcks(docA2Query)
.watchCurrents(docA1Query, 'resume-token-1003')
.watchCurrents(docA2Query, 'resume-token-1003')
.watchSnapshots(1003)
// Watch has now confirmed that docA1 and docA2 have been deleted, so the
// listener raises an event that these documents have been removed. Since
// docA3 is still enqueued for limbo resolution, the results are still
// from cache; and now that there are 0 limbo resolutions in progress, the
// limbo resolution for docA3 is started.
.expectEvents(query, { removed: [docA1, docA2], fromCache: true })
.expectLimboDocs(docA3.key)
.watchAcks(docA3Query)
.watchCurrents(docA3Query, 'resume-token-1004')
.watchSnapshots(1004)
// Watch has now confirmed that docA3 has been deleted. So the
Contributor:

These are good comments, but they're a little wordy. If you could make them more concise it would help the next reader focus on the interesting bits. For example:

  • "the listener sends an event about this" is mostly superfluous because we're always sending events.
  • Similarly, "since we are in sync with the server and all docs that were in limbo have been resolved" is at least self-duplicating because "in sync" implies no documents in limbo.

you could shorten it like so:

Watch has confirmed the final limbo document has been deleted so expect an event with fromCache=false, indicating we're in sync again.

I realize this is something of a tall order because it's not obvious which behaviors are implicit, but give it a shot.

Contributor Author:

Done. Comments have been overhauled, as per our offline discussions.

// listener sends an event about this and now specifies
// fromCache=false since we are in sync with the server and all docs
// that were in limbo have been resolved.
.expectEvents(query, { removed: [docA3] })
.expectLimboDocs()
);
}
);
});