Dequeue limbo resolutions when their respective queries are stopped #2404

Merged (7 commits, Feb 5, 2021)
Changes from 6 commits
3 changes: 3 additions & 0 deletions firebase-firestore/CHANGELOG.md
@@ -4,6 +4,9 @@
Bundles contain pre-packaged data produced with the NodeJS Server SDK and
can be used to populate Firestore's cache without reading documents from
the backend.
+- [fixed] Fixed a Firestore bug where local cache inconsistencies were
+  unnecessarily being resolved, causing the `Task` objects returned from `get()`
+  invocations to never complete (#2404).

# (22.0.2)
- [changed] A write to a document that contains FieldValue transforms is no
SyncEngine.java

@@ -55,13 +55,12 @@
import com.google.firebase.firestore.util.Util;
import io.grpc.Status;
import java.io.IOException;
-import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
+import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
-import java.util.Queue;
import java.util.Set;

/**
@@ -130,7 +129,7 @@ interface SyncEngineCallback {
* The keys of documents that are in limbo for which we haven't yet started a limbo resolution
* query.
*/
-  private final Queue<DocumentKey> enqueuedLimboResolutions;
+  private final LinkedHashSet<DocumentKey> enqueuedLimboResolutions;

/** Keeps track of the target ID for each document that is in limbo with an active target. */
private final Map<DocumentKey, Integer> activeLimboTargetsByKey;
@@ -169,7 +168,7 @@ public SyncEngine(
queryViewsByQuery = new HashMap<>();
queriesByTarget = new HashMap<>();

-    enqueuedLimboResolutions = new ArrayDeque<>();
+    enqueuedLimboResolutions = new LinkedHashSet<>();
activeLimboTargetsByKey = new HashMap<>();
activeLimboResolutionsByTarget = new HashMap<>();
limboDocumentRefs = new ReferenceSet();
@@ -603,6 +602,7 @@ private void removeAndCleanupTarget(int targetId, Status status) {
}

private void removeLimboTarget(DocumentKey key) {
+    enqueuedLimboResolutions.remove(key);
// It's possible that the target already got removed because the query failed. In that case,
// the key won't exist in `limboTargetsByKey`. Only do the cleanup if we still have the target.
Integer targetId = activeLimboTargetsByKey.get(key);
@@ -676,7 +676,7 @@ private void updateTrackedLimboDocuments(List<LimboDocumentChange> limboChanges,

private void trackLimboChange(LimboDocumentChange change) {
DocumentKey key = change.getKey();
-    if (!activeLimboTargetsByKey.containsKey(key)) {
+    if (!activeLimboTargetsByKey.containsKey(key) && !enqueuedLimboResolutions.contains(key)) {
Logger.debug(TAG, "New document in limbo: %s", key);
enqueuedLimboResolutions.add(key);
pumpEnqueuedLimboResolutions();
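The guard above avoids enqueuing a key that is either actively being resolved or already waiting. A minimal standalone sketch of that dedupe logic (hypothetical keys and field names, not the SDK's code):

```java
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;

public class TrackLimboSketch {
    // Stand-ins for SyncEngine's activeLimboTargetsByKey / enqueuedLimboResolutions.
    static final Map<String, Integer> active = new HashMap<>();
    static final LinkedHashSet<String> enqueued = new LinkedHashSet<>();

    // Mirrors the guard in trackLimboChange: enqueue a key only if it is
    // neither actively being resolved nor already waiting in the queue.
    static void track(String key) {
        if (!active.containsKey(key) && !enqueued.contains(key)) {
            enqueued.add(key);
        }
    }

    public static void main(String[] args) {
        track("coll/a");
        track("coll/a");              // duplicate enqueue is ignored
        active.put("coll/b", 1);
        track("coll/b");              // already active, so not enqueued
        System.out.println(enqueued); // [coll/a]
    }
}
```

The second `contains` check is what the switch to a set makes cheap; with the previous `ArrayDeque`, a membership test would have been a linear scan.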
@@ -694,7 +694,9 @@ private void trackLimboChange(LimboDocumentChange change) {
private void pumpEnqueuedLimboResolutions() {
while (!enqueuedLimboResolutions.isEmpty()
&& activeLimboTargetsByKey.size() < maxConcurrentLimboResolutions) {
-      DocumentKey key = enqueuedLimboResolutions.remove();
+      Iterator<DocumentKey> it = enqueuedLimboResolutions.iterator();
+      DocumentKey key = it.next();
+      it.remove();

Contributor: Looks like you need to import Iterator.

Contributor Author: Done.
int limboTargetId = targetIdGenerator.nextId();
activeLimboResolutionsByTarget.put(limboTargetId, new LimboResolution(key));
activeLimboTargetsByKey.put(key, limboTargetId);
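The pump dequeues through an explicit iterator because `LinkedHashSet` has no `poll()`; its insertion-order iteration still yields the oldest entry first. A self-contained sketch of that pattern, with hypothetical document keys rather than SDK types:

```java
import java.util.Iterator;
import java.util.LinkedHashSet;

public class LimboQueueSketch {
    public static void main(String[] args) {
        // A LinkedHashSet iterates in insertion order, so it can act as a
        // FIFO queue that also supports fast contains() and remove(key).
        LinkedHashSet<String> enqueued = new LinkedHashSet<>();
        enqueued.add("coll/a");
        enqueued.add("coll/b");
        enqueued.add("coll/c");

        // Dequeue-on-stop: a key whose query was stopped is removed directly,
        // which an ArrayDeque could only do with a linear scan.
        enqueued.remove("coll/b");

        // Dequeue the oldest remaining entry, as pumpEnqueuedLimboResolutions does.
        Iterator<String> it = enqueued.iterator();
        String oldest = it.next();
        it.remove();

        System.out.println(oldest);                      // coll/a
        System.out.println(enqueued.contains("coll/a")); // false
    }
}
```

Calling `it.remove()` immediately after `it.next()` is the supported way to delete the element just returned; removing it via `enqueued.remove(oldest)` mid-iteration would risk a `ConcurrentModificationException` if iteration continued.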
@@ -708,15 +710,15 @@ private void pumpEnqueuedLimboResolutions() {
}

@VisibleForTesting
-  public Map<DocumentKey, Integer> getActiveLimboDocumentResolutions() {
+  public HashMap<DocumentKey, Integer> getActiveLimboDocumentResolutions() {
    // Make a defensive copy as the Map continues to be modified.
-    return new HashMap<>(activeLimboTargetsByKey);
+    return new HashMap(activeLimboTargetsByKey);
}

@VisibleForTesting
-  public Queue<DocumentKey> getEnqueuedLimboDocumentResolutions() {
-    // Make a defensive copy as the Queue continues to be modified.
-    return new ArrayDeque<>(enqueuedLimboResolutions);
+  public LinkedHashSet<DocumentKey> getEnqueuedLimboDocumentResolutions() {
+    // Make a defensive copy as the LinkedHashSet continues to be modified.
+    return new LinkedHashSet(enqueuedLimboResolutions);

Contributor: Super nit (here and above): We usually try to use more generic types in our return types, as this allows us to change the implementation without changing the callsites. If the callsites don't require the HashMap or LinkedHashSet functionality, I would just use Map<> and Set<>.

Please note that this might not apply if getEnqueuedLimboDocumentResolutions() only uses the Set API but its consumers rely on the iteration order of a LinkedHashSet. Then I would still return LinkedHashSet as we will not be able to replace it later. You should still generify the signature in line 713 though.

Contributor Author: Done.
}
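The reviewer's point about generic return types can be sketched as follows. The class and method names here are hypothetical stand-ins, not the SDK's code; the idea is that callers bound to `Set` keep working if the backing collection changes later:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class ReturnTypeSketch {
    // Hypothetical container mirroring SyncEngine's enqueued-resolutions set.
    private final LinkedHashSet<String> enqueued = new LinkedHashSet<>();

    public void enqueue(String key) {
        enqueued.add(key);
    }

    // Generic signature: callers depend only on the Set interface, so the
    // implementation could swap collections without touching call sites.
    // The returned defensive copy still iterates in insertion order.
    public Set<String> getEnqueued() {
        return new LinkedHashSet<>(enqueued);
    }

    public static void main(String[] args) {
        ReturnTypeSketch s = new ReturnTypeSketch();
        s.enqueue("coll/a");
        s.enqueue("coll/b");
        Set<String> copy = s.getEnqueued();
        copy.remove("coll/a");                      // mutating the copy...
        System.out.println(s.getEnqueued().size()); // ...leaves the original intact: 2
    }
}
```

As the reviewer notes, the exception is when consumers rely on `LinkedHashSet`'s iteration order: then the concrete type is part of the contract and belongs in the signature.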

public void handleCredentialChange(User user) {