Implement limbo resolution throttling. #2790
@@ -73,6 +73,7 @@ import { AsyncQueue } from '../util/async_queue';
 import { TransactionRunner } from './transaction_runner';

 const LOG_TAG = 'SyncEngine';
+const DEFAULT_MAX_CONCURRENT_LIMBO_RESOLUTIONS = 100;

 /**
  * QueryView contains all of the data that SyncEngine needs to keep track of for
@@ -154,6 +155,9 @@ export class SyncEngine implements RemoteSyncer, SharedClientStateSyncer {
   private limboResolutionsByTarget: {
     [targetId: number]: LimboResolution;
   } = {};
+  private readonly maxConcurrentLimboResolutions: number = DEFAULT_MAX_CONCURRENT_LIMBO_RESOLUTIONS;
+  /** The keys of documents whose limbo resolutions are enqueued. */
+  private limboListenQueue: DocumentKey[] = [];
Reviewer: What's the difference between a "limbo listen" and a "limbo resolution"? Is there any? If not, I suggest sticking with one name to reduce the conceptual load here.

Author: A limbo "resolution" is the entire process that begins when a document enters the limbo state and ends when the limbo state is resolved (i.e. after getting a response from the server). A "listen" is the thing that the remoteStore does when we call remoteStore.listen(...). I agree, however, that the terminology is confusing and I'll make any changes needed to be consistent.

Reviewer: I'm still concerned about the "listen" in limboListenQueue. Since you've resolved to distinguish active and enqueued limbo resolutions, why can't the name reflect that? If you prefer "Queue" in the name, "deferred" might be a better name for the enqueued resolutions.

Author: Done. I've renamed the variable.

   private limboDocumentRefs = new ReferenceSet();
Reviewer: Many of the existing variable names are now ambiguous because it's not clear if they apply to active limbo resolutions, inactive limbo resolutions, or all. I suggest prefixing all these names to clearly indicate which kind each is. If we document this distinction, though, I think any pair of terms could be good and an improvement. WDYT?

Author: I like the names "enqueued" and "active" for the states of documents that are in limbo. Since a limbo resolution is "active" if and only if it is associated with a target, the target-related variables arguably don't need an "active" prefix.

Reviewer: That's an interesting perspective, but I think the reasoning is kind of subtle: that because only active limbo resolutions can have a target, the active-ness is self-explanatory. Would this have been self-evident to you a week ago? My reasoning was that the prefix helps identify which variables are used in which state, and suggests which elements have to be modified as you transition between states. That is, it would have the effect of grouping these things together more than differentiating similar structures of the same type that might be used in the different states.

Author: Makes sense. I've added "active" and "enqueued" to all relevant variable names throughout this PR.

   /** Stores user completion handlers, indexed by User and BatchId. */
   private mutationUserCallbacks = {} as {
@@ -174,8 +178,13 @@ export class SyncEngine implements RemoteSyncer, SharedClientStateSyncer {
     private remoteStore: RemoteStore,
     // PORTING NOTE: Manages state synchronization in multi-tab environments.
     private sharedClientState: SharedClientState,
-    private currentUser: User
-  ) {}
+    private currentUser: User,
+    maxConcurrentLimboResolutions?: number
Reviewer: This could be made simpler by just defaulting the argument:

    private maxConcurrentLimboResolutions: number = DEFAULT_MAX_CONCURRENT_LIMBO_RESOLUTIONS

Author: Done. I also removed the now-redundant separate field declaration.
+  ) {
+    if (maxConcurrentLimboResolutions) {
+      this.maxConcurrentLimboResolutions = maxConcurrentLimboResolutions;
+    }
+  }

   // Only used for testing.
   get isPrimaryClient(): boolean {
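The two initialization styles discussed above can be contrasted in a small, hypothetical sketch (the class names and the `max` getter are invented for illustration; only the two patterns themselves come from the diff). Note that the `if` guard in the original would also silently ignore an explicit `0`:

```typescript
const DEFAULT_MAX_CONCURRENT_LIMBO_RESOLUTIONS = 100;

// Pattern in the diff as written: an optional argument plus a guard.
// (Because `if (x)` is falsy for 0, passing 0 would keep the default.)
class SyncEngineBefore {
  private readonly maxConcurrentLimboResolutions: number =
    DEFAULT_MAX_CONCURRENT_LIMBO_RESOLUTIONS;

  constructor(maxConcurrentLimboResolutions?: number) {
    if (maxConcurrentLimboResolutions) {
      this.maxConcurrentLimboResolutions = maxConcurrentLimboResolutions;
    }
  }

  get max(): number {
    return this.maxConcurrentLimboResolutions;
  }
}

// Reviewer's suggestion: a defaulted constructor parameter property replaces
// both the separate field declaration and the guard.
class SyncEngineAfter {
  constructor(
    private readonly maxConcurrentLimboResolutions: number = DEFAULT_MAX_CONCURRENT_LIMBO_RESOLUTIONS
  ) {}

  get max(): number {
    return this.maxConcurrentLimboResolutions;
  }
}

console.log(new SyncEngineBefore().max); // 100
console.log(new SyncEngineAfter(25).max); // 25
```

Both classes behave identically for non-zero arguments; the defaulted parameter property is simply less code.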
@@ -486,6 +495,7 @@ export class SyncEngine implements RemoteSyncer, SharedClientStateSyncer {
         // So go ahead and remove it from bookkeeping.
         this.limboTargetsByKey = this.limboTargetsByKey.remove(limboKey);
         delete this.limboResolutionsByTarget[targetId];
+        this.startEnqueuedLimboResolutions();

         // TODO(klimt): We really only should do the following on permission
         // denied errors, but we don't have the cause code here.
@@ -740,6 +750,7 @@ export class SyncEngine implements RemoteSyncer, SharedClientStateSyncer {
       this.remoteStore.unlisten(limboTargetId);
       this.limboTargetsByKey = this.limboTargetsByKey.remove(key);
       delete this.limboResolutionsByTarget[limboTargetId];
+      this.startEnqueuedLimboResolutions();
     }

   private updateTrackedLimbos(
@@ -770,29 +781,44 @@ export class SyncEngine implements RemoteSyncer, SharedClientStateSyncer {
       const key = limboChange.key;
       if (!this.limboTargetsByKey.get(key)) {
         log.debug(LOG_TAG, 'New document in limbo: ' + key);
+        this.limboListenQueue.push(key);
+        this.startEnqueuedLimboResolutions();
       }
     }

+  private startEnqueuedLimboResolutions(): void {
Reviewer: This name is off: the method doesn't necessarily start anything, since it may be a no-op when the concurrency limit has been reached. Other ideas: names along the lines of "filling" or "pumping" a pipeline of limbo resolutions. All things considered I'd probably just go with one of those. Also, please add comments.

Author: I will add the requested documentation to the method. Regarding the naming issue: what about using the verb "trigger" in the method name? The "pipeline" idea makes sense too, but the method in question is not technically "filling" the pipeline, and I find that verb to be misleading. The "pump" verb is sounding good to me, though. According to https://en.wikipedia.org/wiki/Event_loop the verb "pump" has a meaning in an event loop/message queue context, which has many analogues to the limbo resolution queue. That article also suggests the verb "dispatch". Regarding the limbo resolution state names, I'm leaning towards the terms "enqueued" and "active". I prefer "enqueued" over "pending" because, IMO, it is a more concrete term than "pending" and therefore requires less explanation in documentation.

Reviewer: I'm also fine with "trigger" if that reads better to you.

Author: Done. I've renamed the method.
+    while (
+      this.limboListenQueue.length > 0 &&
+      this.limboTargetsByKey.size < this.maxConcurrentLimboResolutions
+    ) {
+      const key = this.limboListenQueue.shift()!;
       const limboTargetId = this.limboTargetIdGenerator.next();
-      const query = Query.atPath(key.path);
       this.limboResolutionsByTarget[limboTargetId] = new LimboResolution(key);
+      this.limboTargetsByKey = this.limboTargetsByKey.insert(
+        key,
+        limboTargetId
+      );
       this.remoteStore.listen(
         new TargetData(
-          query.toTarget(),
+          Query.atPath(key.path).toTarget(),
           limboTargetId,
           TargetPurpose.LimboResolution,
           ListenSequence.INVALID
         )
       );
-      this.limboTargetsByKey = this.limboTargetsByKey.insert(
-        key,
-        limboTargetId
-      );
     }
   }

   // Visible for testing
-  currentLimboDocs(): SortedMap<DocumentKey, TargetId> {
+  activeLimboDocumentResolutions(): SortedMap<DocumentKey, TargetId> {
     return this.limboTargetsByKey;
   }

+  // Visible for testing
+  documentsEnqueuedForLimboResolution(): DocumentKey[] {
+    return this.limboListenQueue;
+  }

   private async emitNewSnapsAndNotifyLocalStore(
     changes: MaybeDocumentMap,
     remoteEvent?: RemoteEvent
@@ -639,4 +639,267 @@ describeSpec('Limbo Documents:', [], () => {
      );
    }
  );

  specTest(
    'Limbo resolution throttling with all results at once from watch',
    ['no-ios'],
    () => {
      const query = Query.atPath(path('collection'));
      const doc1 = doc('collection/a', 1000, { key: 'a' });
      const doc2 = doc('collection/b', 1000, { key: 'b' });
      const doc3 = doc('collection/c', 1000, { key: 'c' });
      const limboQuery1 = Query.atPath(doc1.key.path);
      const limboQuery2 = Query.atPath(doc2.key.path);
      const limboQuery3 = Query.atPath(doc3.key.path);

      return (
Reviewer: Please add a comment indicating what user behavior we're simulating, namely that while we're listening all three documents are deleted, but watch resets us rather than sending the removals. It's also probably worth noting that in reality this scenario isn't all that likely, but it's a convenient way to force documents into limbo. Here and throughout.

Author: Done.
        spec()
          .withMaxConcurrentLimboResolutions(2)
          .userListens(query)
          .watchAcksFull(query, 1000, doc1, doc2, doc3)
          .expectEvents(query, {
            added: [doc1, doc2, doc3]
          })
          .watchResets(query)
          .watchSends({ affects: [query] })
          .watchCurrents(query, 'resume-token-2000')
          .watchSnapshots(2000)
          .expectLimboDocsEx({
            activeKeys: [doc1.key, doc2.key],
            inactiveKeys: [doc3.key]
          })
          // Limbo document causes query to be "inconsistent"
          .expectEvents(query, { fromCache: true })
          .watchAcks(limboQuery1)
Reviewer: Since these test cases differ in the simulated watch behavior, it's worth calling out specifically what we're simulating. In this case, watch completes all the limbo resolution queries in a single snapshot.

Author: Done.
          .watchAcks(limboQuery2)
          .watchCurrents(limboQuery1, 'resume-token-2001')
          .watchCurrents(limboQuery2, 'resume-token-2001')
          .watchSnapshots(2001)
          .expectLimboDocs(doc3.key)
Reviewer: Is it possible to reorder this with the expectEvents call below?

Author: Yes, this is possible. Done.
          .expectEvents(query, {
            removed: [doc1, doc2],
            fromCache: true
          })
          .watchAcks(limboQuery3)
          .watchCurrents(limboQuery3, 'resume-token-2001')
          .watchSnapshots(2001)
          .expectLimboDocs()
          .expectEvents(query, {
            removed: [doc3],
            fromCache: false
          })
      );
    }
  );

  specTest(
    'Limbo resolution throttling with results one at a time from watch',
    ['no-ios'],
    () => {
      const query = Query.atPath(path('collection'));
      const doc1 = doc('collection/a', 1000, { key: 'a' });
      const doc2 = doc('collection/b', 1000, { key: 'b' });
      const doc3 = doc('collection/c', 1000, { key: 'c' });
      const limboQuery1 = Query.atPath(doc1.key.path);
      const limboQuery2 = Query.atPath(doc2.key.path);
      const limboQuery3 = Query.atPath(doc3.key.path);

      return (
        spec()
          .withMaxConcurrentLimboResolutions(2)
          .userListens(query)
          .watchAcksFull(query, 1000, doc1, doc2, doc3)
          .expectEvents(query, {
            added: [doc1, doc2, doc3]
          })
          .watchResets(query)
          .watchSends({ affects: [query] })
          .watchCurrents(query, 'resume-token-2000')
          .watchSnapshots(2000)
          .expectLimboDocsEx({
            activeKeys: [doc1.key, doc2.key],
            inactiveKeys: [doc3.key]
          })
          // Limbo document causes query to be "inconsistent"
          .expectEvents(query, { fromCache: true })
          .watchAcks(limboQuery1)
          .watchCurrents(limboQuery1, 'resume-token-2001')
          .watchSnapshots(2001)
          .expectLimboDocs(doc2.key, doc3.key)
          .expectEvents(query, {
            removed: [doc1],
            fromCache: true
          })
          .watchAcks(limboQuery2)
          .watchCurrents(limboQuery2, 'resume-token-2001')
          .watchSnapshots(2001)
Reviewer: Snapshot versions should be strictly increasing. This should be > 2001, and the one below should be even greater.

Author: Fixed.
          .expectLimboDocs(doc3.key)
          .expectEvents(query, {
            removed: [doc2],
            fromCache: true
          })
          .watchAcks(limboQuery3)
          .watchCurrents(limboQuery3, 'resume-token-2001')
          .watchSnapshots(2001)
          .expectLimboDocs()
          .expectEvents(query, {
            removed: [doc3],
            fromCache: false
          })
      );
    }
  );

  specTest(
    'Limbo resolution throttling when a limbo listen is rejected.',
    ['no-ios'],
    () => {
      const query = Query.atPath(path('collection'));
      const doc1 = doc('collection/a', 1000, { key: 'a' });
      const doc2 = doc('collection/b', 1000, { key: 'b' });
      const limboQuery1 = Query.atPath(doc1.key.path);
      const limboQuery2 = Query.atPath(doc2.key.path);

      return (
        spec()
          .withMaxConcurrentLimboResolutions(1)
          .userListens(query)
          .watchAcksFull(query, 1000, doc1, doc2)
          .expectEvents(query, { added: [doc1, doc2] })
          // Watch tells us that the query results have changed to the empty
          // set, which makes our local cache inconsistent with the remote
          // state, causing a fromCache=true event to be raised.
          .watchResets(query)
          .watchSends({ affects: [query] })
          .watchCurrents(query, 'resume-token-1001')
          .watchSnapshots(1001)
          // Both doc1 and doc2 are in limbo, but the maximum number of limbo
          // listens was set to 1, which causes doc1 to get resolved and doc2
          // to get enqueued.
          .expectLimboDocsEx({
            activeKeys: [doc1.key],
            inactiveKeys: [doc2.key]
          })
          // Limbo document causes query to be "inconsistent"
          .expectEvents(query, { fromCache: true })
          .watchRemoves(
            limboQuery1,
            new RpcError(Code.RESOURCE_EXHAUSTED, 'Resource exhausted')
          )
          // When a limbo listen gets rejected, we assume that the document was
          // deleted. Now that doc1 is resolved, the limbo resolution for doc2
          // can start.
          .expectLimboDocs(doc2.key)
          .expectEvents(query, { removed: [doc1], fromCache: true })
          // Reject the listen for the second limbo resolution as well, in order
          // to exercise the code path of a rejected limbo resolution without
          // any enqueued limbo resolutions.
          .watchRemoves(
            limboQuery2,
            new RpcError(Code.RESOURCE_EXHAUSTED, 'Resource exhausted')
          )
          .expectLimboDocs()
          .expectEvents(query, { removed: [doc2] })
      );
    }
  );

  specTest(
    // This test exercises the steps that resulted in unbounded reads that
    // motivated throttling:
    // https://github.com/firebase/firebase-js-sdk/issues/2683
    'Limbo resolution throttling with existence filter mismatch',
    ['no-ios'],
    () => {
      const query = Query.atPath(path('collection'));
      const docA1 = doc('collection/a1', 1000, { key: 'a1' });
      const docA2 = doc('collection/a2', 1000, { key: 'a2' });
      const docA3 = doc('collection/a3', 1000, { key: 'a3' });
      const docB1 = doc('collection/b1', 1000, { key: 'b1' });
      const docB2 = doc('collection/b2', 1000, { key: 'b2' });
      const docB3 = doc('collection/b3', 1000, { key: 'b3' });
      const docA1Query = Query.atPath(docA1.key.path);
      const docA2Query = Query.atPath(docA2.key.path);
      const docA3Query = Query.atPath(docA3.key.path);

      return (
        spec()
          .withMaxConcurrentLimboResolutions(2)
          .userListens(query)
          .watchAcks(query)
          .watchSends({ affects: [query] }, docA1, docA2, docA3)
          .watchCurrents(query, 'resume-token-1000')
          .watchSnapshots(1000)
          .expectEvents(query, { added: [docA1, docA2, docA3] })
          // At this point the query is consistent and matches 3 documents:
Reviewer: The "matches 3 documents" clause listing them out duplicates the expectation just above.

Author: Done.
          // docA1, docA2, and docA3. Then, network connectivity is lost.
          .disableNetwork()
          // The query listener is notified that the results are being served
          // from cache since without network connection there is no way to know
          // if we are in sync with the server.
          .expectEvents(query, { fromCache: true })
          .enableNetwork()
          // The network connection has come back so the client re-registers
          // the listener, providing the resume token from before. Watch will
          // then send updates that occurred since the timestamp encoded in the
Reviewer: It's possible to shorten "timestamp encoded in the resume token" to just "resume token". We should treat the resume token as an opaque marker of time.

Author: Done.
          // resume token.
          .restoreListen(query, 'resume-token-1000')
          .watchAcks(query)
          // Watch now tells us that the query results on the server are docB1,
Reviewer: While this comment describes what is happening, it's not telling us why it's happening, and the latter is more important to understanding the intent of the test.

Author: Done. I took your suggestion verbatim.
          // docB2, and docB3, along with an existence filter to state that the
          // total number of documents that match the query is 3.
          .watchSends({ affects: [query] }, docB1, docB2, docB3)
          .watchFilters([query], docB1.key, docB2.key, docB3.key)
          .watchSnapshots(1001)
          // The query listener is now inconsistent because it had thought that
          // the set of matching documents was docA1, docA2, and docA3 but the
          // server just told us that the set of matching documents is
          // completely different: docB1, docB2, and docB3. So the query
          // notifies the user that these documents were added, but still says
          // fromCache=true because we need to resolve docA1, docA2, and docA3.
          .expectEvents(query, {
            added: [docB1, docB2, docB3],
            fromCache: true
          })
          // After the existence filter mismatch the client re-listens without
Reviewer: Using "after" here makes this seem like the existence filter mismatch is happening elsewhere, but it's really happening here, right? I suggest rephrasing this to something that more actively explains why this is going on.

Author: Done. I took your suggestion verbatim.
          // a resume token.
          .expectActiveTargets({ query, resumeToken: '' })
          // When the existence filter mismatch was detected, we removed then
          // re-added the target; therefore, watch acknowledges the removal.
          .watchRemoves(query)
          // Watch has re-run the query and returns the same result set: docB1,
          // docB2, and docB3. This puts docA1, docA2, and docA3 into limbo,
          // which the client then issues queries to resolve. Since the maximum
          // number of concurrent limbo resolutions was set to 2, only the first
          // two limbo resolutions are started, with the 3rd being enqueued.
          .watchAcksFull(query, 1002, docB1, docB2, docB3)
          .expectLimboDocsEx({
            activeKeys: [docA1.key, docA2.key],
            inactiveKeys: [docA3.key]
          })
          .watchAcks(docA1Query)
          .watchAcks(docA2Query)
          .watchCurrents(docA1Query, 'resume-token-1003')
          .watchCurrents(docA2Query, 'resume-token-1003')
          .watchSnapshots(1003)
          // Watch has now confirmed that docA1 and docA2 have been deleted, so
          // the listener sends an event that the documents have been removed.
          // However, since docA3 is still enqueued for limbo resolution, the
          // results are still from cache. Now that there are 0 limbo
          // resolutions in progress, the limbo resolution for docA3 is started.
          .expectEvents(query, { removed: [docA1, docA2], fromCache: true })
          .expectLimboDocs(docA3.key)
          .watchAcks(docA3Query)
          .watchCurrents(docA3Query, 'resume-token-1004')
          .watchSnapshots(1004)
          // Watch has now confirmed that docA3 has been deleted. So the
Reviewer: These are good comments, but they're a little wordy. If you could make them more concise, it would help the next reader focus on the interesting bits. I realize this is something of a tall order because it's not obvious which behaviors are implicit, but give it a shot.

Author: Done. Comments have been overhauled, as per our offline discussions.
          // listener sends an event about this, and now specifies
          // fromCache=false since we are in sync with the server and all docs
          // that were in limbo have been resolved.
          .expectEvents(query, { removed: [docA3] })
          .expectLimboDocs()
      );
    }
  );
});
Reviewer: There are now a number of limbo-related structures in here, and it would be worthwhile to document them in relation to one another and give an overview of how they're supposed to fit together. Also, since things start in the limboListenQueue, I suggest making it structurally first among these. That way, when reading through this list, the structure suggests the flow. Also, this specific comment doesn't do much to suggest what the queue is actually for. It's worth calling out that these are keys waiting for a limbo resolution query to be started.

Author: I've added/improved the documentation for limboListenQueue, limboTargetsByKey, and limboResolutionsByTarget. I don't have enough context to provide meaningful documentation for limboDocumentRefs. Since this PR does not touch anything related to limboDocumentRefs, I've left it alone. I also moved limboListenQueue in the code to be structurally first.
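As a hypothetical illustration of the documentation and ordering the reviewer asked for, the limbo-related fields might be laid out as below. This is a sketch, not the actual SyncEngine source: the field names mirror the PR, but the simplified string/number types and the `receivedDocument` shape are assumptions made for illustration.

```typescript
// Hypothetical sketch: limbo-related fields ordered so the structure suggests
// the flow (enqueued first, then active), with doc comments relating them.
// The simplified types (string keys, number target IDs) are assumptions.
class LimboStateSketch {
  /**
   * Keys of documents in limbo that are waiting for a limbo resolution
   * query to be started, throttled by maxConcurrentLimboResolutions.
   */
  limboListenQueue: string[] = [];

  /** Active limbo resolutions: limbo document key -> target ID of its query. */
  limboTargetsByKey = new Map<string, number>();

  /** Per-target state of each active limbo resolution, keyed by target ID. */
  limboResolutionsByTarget = new Map<number, { receivedDocument: boolean }>();
}

const state = new LimboStateSketch();
state.limboListenQueue.push('collection/a');
console.log(state.limboListenQueue.length); // 1
```

Ordering the declarations along the lifecycle (queue, then the two active-resolution maps) lets the reader follow a key from enqueueing through resolution without jumping around the class.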