Take RemoteStore offline during user change #3193
Changes from 1 commit
@@ -55,6 +55,7 @@ import {
 } from './watch_change';
 import { ByteString } from '../util/byte_string';
 import { isIndexedDbTransactionError } from '../local/simple_db';
+import { User } from '../auth/user';

 const LOG_TAG = 'RemoteStore';
@@ -756,13 +757,27 @@ export class RemoteStore implements TargetMetadataProvider {
     await this.enableNetwork();
   }

-  async handleCredentialChange(): Promise<void> {
+  async handleCredentialChange(user: User): Promise<void> {
     this.asyncQueue.verifyOperationInProgress();

     if (this.canUseNetwork()) {
-      // Tear down and re-create our network streams. This will ensure we get a fresh auth token
-      // for the new user and re-fill the write pipeline with new mutations from the LocalStore
-      // (since mutations are per-user).
+      // Tear down and re-create our network streams. This will ensure we get a
+      // fresh auth token for the new user and re-fill the write pipeline with
+      // new mutations from the LocalStore (since mutations are per-user).
       logDebug(LOG_TAG, 'RemoteStore restarting streams for new credential');
-      await this.restartNetwork();
+
+      this.networkEnabled = false;
Review comment: This makes me uneasy. Previously …

Review comment: This matches the code in …

Review comment: I understand that this is needed to silence assertions in the current implementation, but the larger point I'm after here is that this potentially doesn't actually work. Consider this sequence of events: …

Or as an alternative to the last step, the device encounters a network transition that prompts …

What I'm after is that by mixing the intent of the … To be clear: the use of …

So, unless I'm grossly misunderstanding what's happening here, I think I'm proposing we: …

This way these events can happen all at once or in any order and we won't accidentally enable the network during an event ordering we didn't anticipate.

Another possible alternative, rather than tracking each condition independently, is to track a cumulative failure counter. I think you still need to track …

I haven't fully thought this through, but this also seems to help with the case where multiple events of the same type happen before the first event is resolved. For example: if multiple credential change events happen before the syncEngine recovers, we won't start to enable the network after the first completes, because the second in-flight credential change prevents it.

Reply: Thanks for the long, insightful comment! I am convinced that re-using this flag was not the right idea. While we could go with a counter, I have a feeling that it will go out of sync at some point, and we will not be able to figure out why. In the end, I used something similar to what the realtime database does: …

As for having multiple user changes: since they are scheduled via the retryable operations, only one user change can be active at a time.
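The "track each condition independently" idea discussed in this thread can be sketched as a set of offline causes that collectively gate the network. This is a hypothetical simplification for illustration only: `OfflineCause` and `NetworkGate` are invented names, not the SDK's actual types.

```typescript
// Sketch: instead of overloading a single networkEnabled flag, each event
// that forces the network down registers its own cause. The network may
// only come back up once every cause has been cleared.

enum OfflineCause {
  UserDisabled,      // disableNetwork() was called explicitly
  CredentialChange,  // a user change is being processed
  IndexedDbFailed    // an IndexedDB transaction failed and we are recovering
}

class NetworkGate {
  private readonly offlineCauses = new Set<OfflineCause>();

  // The network is usable only while no cause is holding it down.
  canUseNetwork(): boolean {
    return this.offlineCauses.size === 0;
  }

  addCause(cause: OfflineCause): void {
    this.offlineCauses.add(cause);
  }

  // Returns true when the last cause was removed, i.e. the caller may now
  // re-enable the network streams.
  removeCause(cause: OfflineCause): boolean {
    this.offlineCauses.delete(cause);
    return this.canUseNetwork();
  }
}
```

With this shape, a credential change and an IndexedDB failure can overlap in any order: whichever condition resolves last is the one that actually re-enables the network.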
+      await this.disableNetworkInternal();
+      this.onlineStateTracker.set(OnlineState.Unknown);
+
+      await this.executeWithRecovery(async () => {
+        await this.syncEngine.handleUserChange(user);
+        await this.enableNetwork();
+      });
+    } else {
+      await this.executeWithRecovery(() =>
+        this.syncEngine.handleUserChange(user)
+      );
+    }
   }
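The `executeWithRecovery()` calls in the diff above hand failed operations to the async queue's retryable-operation mechanism. A loose, self-contained approximation of that pattern might look like the following; `RetryableError` stands in for the real `isIndexedDbTransactionError` check, and a bounded retry loop replaces the real `AsyncQueue` scheduling, so this is a sketch, not the SDK's implementation.

```typescript
// Sketch: run an async operation; if it fails with a retryable error,
// run it again instead of surfacing the failure. Any non-retryable
// error propagates to the caller unchanged.

class RetryableError extends Error {}

async function executeWithRecovery(
  op: () => Promise<void>,
  maxAttempts = 3
): Promise<void> {
  for (let attempt = 1; ; attempt++) {
    try {
      await op();
      return;
    } catch (e) {
      // Only retryable failures are swallowed and re-run, and only up to
      // maxAttempts times; everything else is rethrown immediately.
      if (!(e instanceof RetryableError) || attempt >= maxAttempts) {
        throw e;
      }
    }
  }
}
```

The real code defers the retry until the underlying persistence layer signals recovery rather than retrying eagerly, but the control flow is the same: the user-change work either completes or stays queued, and the network is only re-enabled on the success path.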
Review comment: There are still plenty of other references to handleCredentialChange and CredentialChangeListener. It seems preferable to keep these consistent, no? That way you can search for "credentialchange" and find all the places we do something about this.

Reply: I renamed the RemoteStore and SyncEngine method to handleCredentialChange. LocalStore and SharedClientState still use handleUserChange, since SyncEngine "swallows" no-op user changes.
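The "swallows no-op user changes" behavior mentioned in the reply could look roughly like this sketch, where only a genuine user switch is forwarded down as handleUserChange. The class, field, and helper names here are illustrative stand-ins, not the SDK's real implementation.

```typescript
// Sketch: the credential-change entry point filters out events for the
// same user (e.g. token refreshes), and only real user switches reach
// the per-user layers below (LocalStore / SharedClientState in the SDK).

class User {
  constructor(readonly uid: string | null) {}
  isEqual(other: User): boolean {
    return this.uid === other.uid;
  }
}

class SyncEngineSketch {
  private currentUser = new User(null);
  userChanges = 0; // counts genuine user switches, for illustration

  async handleCredentialChange(user: User): Promise<void> {
    if (this.currentUser.isEqual(user)) {
      return; // same user (e.g. a token refresh): nothing to do
    }
    this.currentUser = user;
    await this.handleUserChange(user);
  }

  private async handleUserChange(user: User): Promise<void> {
    // The real code would swap per-user state (e.g. the mutation queue)
    // for the new user here.
    this.userChanges++;
  }
}
```

Under this split, searching for "credentialchange" finds the externally triggered entry points, while "userchange" marks the layers that only ever see effective user switches.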