Implement global resume token #1696
Conversation
Mostly LGTM.
[self.persistence.referenceDelegate removeTarget:queryData];
[self.targetIDs removeObjectForKey:@(queryData.targetID)];
[self.targetIDs removeObjectForKey:boxedTargetID];
Dupe of line 454.
Done.
auto second_part =
    chr::duration_cast<chr::microseconds>(chr::seconds(timestamp_.seconds()));
auto nanosecond_part = chr::duration_cast<chr::microseconds>(
    chr::nanoseconds(timestamp_.nanoseconds()));
auto result = second_part + nanosecond_part;
I don't care strongly, but I like the JS one-liner that I could write with my eyes closed. :-)
return this.timestamp.seconds * 1e6 + this.timestamp.nanoseconds / 1000;
But staring at this method made me question why we're using micros anyway... I don't think the backend guarantees that read times will never use nano-precision, so this is theoretically a lossy conversion. That's fine for your delta computation, but it seems slightly strange / dangerous to expose a lossy conversion on SnapshotVersion.
Can you ditch this and just use snapshotVersion.timestamp().seconds() in your delta computation?
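To make the precision concern concrete, here is a small standalone C++ sketch (the Timestamp struct below is a stand-in, not the SDK type) showing why truncating to microseconds is lossy while a seconds-only delta sidesteps the conversion entirely:

#include <cstdint>
#include <iostream>

// Stand-in for the SDK's Timestamp, which exposes seconds and nanoseconds.
struct Timestamp {
  int64_t seconds;
  int32_t nanoseconds;
};

int main() {
  // A read time with sub-microsecond precision from the backend.
  Timestamp ts{1234, 123456789};  // 1234 s + 123,456,789 ns

  // Converting to microseconds truncates the trailing 789 ns -- lossy.
  int64_t micros = ts.seconds * 1000000 + ts.nanoseconds / 1000;
  std::cout << micros << " us (789 ns dropped)\n";

  // The resume-token age check only needs whole seconds, so no lossy
  // microsecond accessor has to be exposed on SnapshotVersion.
  Timestamp old_ts{934, 0};
  int64_t delta_seconds = ts.seconds - old_ts.seconds;  // 300 s, i.e. 5 minutes
  std::cout << "delta: " << delta_seconds << " s\n";
}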
Out-of-band comment: is there a reason not to use Timestamp::ToTimePoint (it's unavailable if STLPort is used, but <chrono> would be unavailable as well)?
auto microseconds =
chr::duration_cast<chr::microseconds>(timestamp_.ToTimePoint().time_since_epoch());
return static_cast<int64_t>(microseconds.count()); // Or perhaps omit the cast;
// whatever int type is used in count(), it should be signed and no larger than int64
That comment is why I tagged you as a reviewer :-).
I did this because the TypeScript client already had it and the other clients didn't. I think the TypeScript client has it for generating the spec test JSON, though.
Given the interval is minutes, just using seconds makes sense to me. Removed.
// Don't allow resume token changes to be buffered indefinitely. This allows us to be reasonably
// up-to-date after a crash and avoids needing to loop over all active queries on shutdown.
// Especially in the browser we may not get time to do anything interesting while the current
Nit: the reference to browser is probably extraneous.
In portable code like this I prefer copypasta from the source to customizing everything.
// up-to-date after a crash and avoids needing to loop over all active queries on shutdown.
// Especially in the browser we may not get time to do anything interesting while the current
// tab is closing.
int64_t newSeconds = newQueryData.snapshotVersion.timestamp().seconds();
Entirely optional: you could use <chrono>:
static const auto kResumeTokenMaxAge = std::chrono::minutes(5);
// Avoids having to specify resolution in the variable name
// ...
if (newQueryData.snapshotVersion.timestamp().ToTimePoint() -
oldQueryData.snapshotVersion.timestamp().ToTimePoint()
>= kResumeTokenMaxAge) {
return YES;
}
This is non-portable back to the other platforms (where I plan to backfill the micros to seconds change). I prefer to keep it simple and portable.
int64_t timeDelta = newSeconds - oldSeconds;
if (timeDelta >= kResumeTokenMaxAgeSeconds) return YES;
// Otherwise if the only thing that has changed about a target is its resume token it's not
Ultranit: add a comma after "resume token".
I added a "then" instead.
Port of firebase/firebase-js-sdk#1052
Watch sends us a periodic heartbeat with an updated resume token that applies to all active targets, indicated by an empty list of target IDs.
Our current implementation in the FSTWatchChangeAggregator loops over the target IDs in the change, so in this case we fail to take any action. If a client has been listening to a query that sees no changes for more than thirty minutes, we'll fail to persist any resume token updates during that period.
This change adds handling in the FSTWatchChangeAggregator for the empty target IDs list.

Note that while a query is active, the in-memory state is updated, so any incidental re-listens that happen as a result of stream loss or re-auth will always use an up-to-date token.
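To illustrate the fix, here is a minimal C++ sketch of that empty-list handling, assuming a simplified aggregator; the class, method, and field names are hypothetical stand-ins for the Objective-C++ FSTWatchChangeAggregator and its per-target state:

#include <map>
#include <string>
#include <vector>

// Hypothetical, simplified model of the aggregator's bookkeeping; the real
// FSTWatchChangeAggregator tracks far more per-target state than this.
class WatchChangeAggregator {
 public:
  void AddActiveTarget(int target_id) { resume_tokens_[target_id] = ""; }

  // An empty target ID list means the resume token applies globally, so fan
  // it out to every active target instead of looping over (zero) listed
  // targets and doing nothing.
  void HandleTargetChange(const std::vector<int>& target_ids,
                          const std::string& resume_token) {
    if (target_ids.empty()) {
      for (auto& entry : resume_tokens_) {
        entry.second = resume_token;
      }
      return;
    }
    for (int target_id : target_ids) {
      resume_tokens_[target_id] = resume_token;
    }
  }

 private:
  std::map<int, std::string> resume_tokens_;  // target ID -> latest resume token
};

With this shape, the periodic heartbeat updates every active target's token in memory, and the persistence layer can still apply its own buffering rules (such as the max-age check discussed above) when deciding what to write.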