Performance optimizations to speed up reading large collections #123
Conversation
@var-const how much of an improvement were you able to get from these potential fixes?
This is very much a strawman PR; I'd like to discuss feasibility/correctness of the potential optimizations before cleaning them up. There are three optimizations for discussion, annotated with comments in their respective files; I'm using a short name for each of them to make them easier to distinguish.
I'm testing performance using a physical Pixel XL 1st generation (API level 27) on the project Sam attached to the original issue. The database contains 1000 documents; here's a sample document as printed to log (with ad-hoc formatting).
The fact that the document is somewhat large matters; without any optimizations, a run normally takes 9-10 seconds for this data set. However, if I create 1000 small documents (just 5 fields), reading those takes 5 seconds. In the profiler, most of CPU time is spent on various serialization-related tasks (gRPC parsing received bytes into protos, our code deserializing the documents and serializing them again to write them to local store, etc.).
Most of the time is spent:

- while `WatchStream` is accumulating server responses. CPU time is split in half between gRPC parsing received bytes into protos and `RemoteSerializer`'s `decodeWatchChange`. This step normally takes 2-3 seconds, though sometimes longer, which I suspect is due to network issues;
- in `LocalStore#applyRemoteEvent` -- around 5.5 seconds. Of those, 0.7 are spent in the last line (`return localDocuments.getDocuments(changedDocKeys)`) and ~4.7 in the "core" loop of the function;
- some time is also spent in `emitNewSnapsAndNotifyLocalStore`, but it appears negligible compared to `applyRemoteEvent`.
The potential optimizations in this PR:

- "Avoid encode" -- the document protos are received from the wire, deserialized into model objects by `RemoteSerializer`, then serialized again by `LocalSerializer` so they can be written to the local database. Keeping the original protos around would allow avoiding the encode part. This optimization makes the core loop of `applyRemoteEvent` go down from 4.7s to 3.5s. The potential gotcha is that, in theory, `LocalSerializer` could serialize those protos differently;
- "No double get" -- the last line of `applyRemoteEvent` reads the persisted documents again from local storage. AFAIU, this is only done because the logic to apply pending mutations is within `LocalDocumentsView`; otherwise, the documents are already available within `applyRemoteEvent`. The optimization allows passing a map of documents directly to `LocalDocumentsView`, which applies pending mutations to them directly. This optimization makes the last line of `applyRemoteEvent` execute in ~50ms instead of ~700ms;
- "No get by one" -- each iteration of the core loop in `applyRemoteEvent` tries to retrieve the document for the key being processed from local storage. The optimization is to retrieve all the documents in a single query before entering the loop (AFAIU, no loop iteration affects other iterations). This optimization (in isolation) makes execution time of the core loop in `applyRemoteEvent` go down from 4.7s to 3.4s.
All three together, the optimizations can make `applyRemoteEvent` finish in ~2.7s instead of ~5.4s. Total time I see is around 6 seconds.
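To make the "No get by one" idea concrete, here is a minimal, self-contained sketch. The types and names (`BatchGetSketch`, a `Map`-backed stand-in cache, `getAll`) are illustrative assumptions, not the SDK's actual classes; the point is only the shape of the change: one batched lookup before the loop instead of a per-key lookup inside it.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of "No get by one": fetch all documents up front in a
// single batch instead of one cache lookup per key inside the core loop.
public class BatchGetSketch {
  // Stand-in for the remote document cache; the real interface differs.
  static final Map<String, String> CACHE = new HashMap<>();

  // One batched lookup; missing keys map to null so callers can still tell
  // "not cached" apart from a present document.
  static Map<String, String> getAll(Iterable<String> keys) {
    Map<String, String> results = new HashMap<>();
    for (String key : keys) {
      results.put(key, CACHE.get(key)); // null if absent
    }
    return results;
  }

  public static void main(String[] args) {
    CACHE.put("users/alice", "{name: Alice}");
    List<String> keys = new ArrayList<>(List.of("users/alice", "users/bob"));
    Map<String, String> docs = getAll(keys);
    assert docs.size() == 2;
    assert docs.get("users/alice") != null;
    assert docs.containsKey("users/bob") && docs.get("users/bob") == null;
  }
}
```

The batched map is then consulted inside the loop, which is safe as long as no iteration affects another (as the PR notes).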
@Nullable
private List<MaybeDocument> getDocumentsInternal(Iterable<DocumentKey> keys, List<MutationBatch> batches) {
  List<MaybeDocument> documents = remoteDocumentCache.getAll(keys);
  // TODO(varconst): uncomment and fix.
I didn't bother with batches, because there are no pending write batches in the case being tested.
  return results;
}

ImmutableSortedMap<DocumentKey, MaybeDocument> getDocuments(Map<DocumentKey, MaybeDocument> docsByKey) {
This avoids accessing the local database and just applies pending batches to the given documents.
if (document instanceof NoDocument) {
  NoDocument noDocument = (NoDocument) document;
  builder.setNoDocument(encodeNoDocument(noDocument));
  builder.setHasCommittedMutations(noDocument.hasCommittedMutations());
} else if (document instanceof Document) {
  Document existingDocument = (Document) document;
  builder.setDocument(encodeDocument(existingDocument));
  if (existingDocument.getProto() != null) {
Optimization 1 ("Avoid encode"): avoid serializing a `Document` to a proto again; instead, keep the serialized version around and reuse it. Storing the proto right in the document object is a strawman.
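A minimal sketch of the memoization idea, under stated assumptions: `DocLike`, `memoizedProto`, and `encodeForLocalStore` are hypothetical stand-ins for the SDK's `Document` and serializer, used only to show that the encode step is skipped when the original bytes were kept.

```java
// Hypothetical sketch of "Avoid encode": a document memoizes the wire bytes it
// was decoded from, so the local serializer can reuse them instead of
// re-encoding. Names here are illustrative, not the SDK's API.
public class AvoidEncodeSketch {
  static int encodeCalls = 0;

  static class DocLike {
    final String fields;
    byte[] memoizedProto; // set at decode time if the original bytes are available

    DocLike(String fields) {
      this.fields = fields;
    }
  }

  // Reuse the memoized bytes when present; fall back to a (pretend) encode.
  static byte[] encodeForLocalStore(DocLike doc) {
    if (doc.memoizedProto != null) {
      return doc.memoizedProto;
    }
    encodeCalls++; // counts how often a real encode was needed
    return doc.fields.getBytes();
  }

  public static void main(String[] args) {
    DocLike fromWire = new DocLike("a=1");
    fromWire.memoizedProto = "a=1".getBytes(); // kept around at decode time
    encodeForLocalStore(fromWire);
    assert encodeCalls == 0; // no re-encode needed

    DocLike local = new DocLike("b=2");
    encodeForLocalStore(local);
    assert encodeCalls == 1; // no memoized bytes, had to encode
  }
}
```

The gotcha the PR mentions applies here too: this is only correct if the memoized bytes are exactly what the local serializer would have produced.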
for (Entry<DocumentKey, MaybeDocument> entry : documentUpdates.entrySet()) {
  keys.add(entry.getKey());
}
List<MaybeDocument> existingDocs = remoteDocuments.getAll(keys);
Optimization 2 ("No get by one"): get all the documents from the local database in a single query.
IIUC, keys are not repeated, so retrieving all the documents before going into the main for loop is okay, because no iteration of the loop may affect subsequent iterations.
@@ -376,7 +392,7 @@ public SnapshotVersion getLastRemoteSnapshotVersion() {
     queryCache.setLastRemoteSnapshotVersion(remoteVersion);
   }

-  return localDocuments.getDocuments(changedDocKeys);
+  return localDocuments.getDocuments(changedDocs);
Optimization 3 ("No double get"): avoid retrieving the documents from the local database again; we already have them in this function. Just apply the pending write batches to them.
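A sketch of the "No double get" shape, with stand-in types: `localView` plays the role of `LocalDocumentsView` applying pending mutations directly to the map the caller already holds, instead of re-reading the freshly written documents from persistence. The `Mutation` interface here is hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of "No double get": the local view is computed from the base documents
// the caller already has plus the pending mutations, with no storage round trip.
public class NoDoubleGetSketch {
  interface Mutation {
    String apply(String key, String base);
  }

  // Apply every pending mutation to a copy of the base documents.
  static Map<String, String> localView(Map<String, String> baseDocs, Iterable<Mutation> pending) {
    Map<String, String> view = new HashMap<>(baseDocs);
    for (Mutation m : pending) {
      for (Map.Entry<String, String> e : view.entrySet()) {
        e.setValue(m.apply(e.getKey(), e.getValue()));
      }
    }
    return view;
  }

  public static void main(String[] args) {
    Map<String, String> base = new HashMap<>();
    base.put("users/alice", "v1");
    Mutation bump = (key, doc) -> doc + "+local";
    Map<String, String> view = localView(base, java.util.List.of(bump));
    assert view.get("users/alice").equals("v1+local");
    assert base.get("users/alice").equals("v1"); // caller's map untouched
  }
}
```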
        "(%x) Stream received: %s",
        System.identityHashCode(AbstractStream.this),
        response);
if (Logger.isDebugEnabled()) {
Even with Proguard, logging adds about a second to the 2-3 seconds the network usually takes.
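The guard pattern under discussion, sketched with stand-in names (`LogGuardSketch`, `debugEnabled`, `expensiveFormat` are illustrative, not the SDK's `Logger` API): when debug logging is disabled, the `String.format` work is never performed.

```java
// Illustrative sketch of guarding a formatted log call: with the guard in
// place, the String.format call is skipped entirely when debug logging is off.
public class LogGuardSketch {
  static boolean debugEnabled = false;
  static int formats = 0;

  static String expensiveFormat(Object payload) {
    formats++; // counts how often formatting work actually happened
    return String.format("(%x) Stream received: %s", System.identityHashCode(payload), payload);
  }

  static void logDebug(Object payload) {
    // The guard ensures expensiveFormat is only called when its output is used.
    if (debugEnabled) {
      System.out.println(expensiveFormat(payload));
    }
  }

  public static void main(String[] args) {
    logDebug("response");
    assert formats == 0; // disabled: no formatting work done
    debugEnabled = true;
    logDebug("response");
    assert formats == 1;
  }
}
```

Note the guard only helps if the expensive work sits inside the `if`; a call site that formats the string before calling the logger pays the cost regardless.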
Hrm! I wonder why... Since we're not building up the log string or anything, it seems like this should just be a plain cheap method call (which immediately no-ops on the receiving end), unless `getClass().getSimpleName()` or `System.identityHashCode()` is expensive (my bet would be on the former).

Any interest in digging a little deeper to see why it's slow? We may learn something that helps us preemptively improve other parts of the code [e.g. maybe we should cache `this.getClass().getSimpleName()`].
The profiler shows `String.format` as the top culprit; probably nothing we can do about it.
I'm still struggling to see how String.format() would be getting called by this code. Maybe we can chat during standup.
Thanks for pushing me to dig into this. Looks like I got confused here. In the app, logging is on, which is why `String.format` is called here. When I tested the numbers, it looks like I conflated no Proguard with logging, and Proguard with no logging (I used several repos to run tests, one of which had logging turned off).
To hopefully untangle this, I reran the numbers, using three "variables":
- SDK version;
- Proguard/no Proguard in the app;
- logging/no logging.
Results (three runs in each case, Release mode, same device as before):
- SDK 17.1.1, no Proguard, no logging: 4.4s - 4.8s (raw numbers: 4384ms / 4466ms / 4759ms);
- SDK 17.1.1, no Proguard, with logging: 4.5s - 4.8s (raw numbers: 4789ms / 4525ms / 4836ms);
- SDK 17.1.1, with Proguard, no logging: 4.4s - 4.6s (raw numbers: 4565ms / 4362ms / 4836ms);
- SDK 17.1.1, with Proguard, with logging: 4.5s - 4.6s (raw numbers: 4478ms / 4602ms / 4602ms (not a typo)).
So it seems that in 17.1.1 (IIUC, the last SDK version that was proguarded), the difference between logging enabled and disabled is negligible, and whether the app itself is proguarded doesn't really matter.
- SDK 17.1.2, no Proguard, no logging: 4.7s - 5.1s (raw numbers: 5064ms / 4992ms / 4717ms);
- SDK 17.1.2, no Proguard, with logging: 6.3s - 6.5s (raw numbers: 6303ms / 6351ms / 6503ms);
- SDK 17.1.2, with Proguard, no logging: 4.1s - 4.5s (raw numbers: 4361ms / 4094ms / 4490ms);
- SDK 17.1.2, with Proguard, with logging: 4.5s - 4.7s (raw numbers: 4536ms / 4503ms / 4682ms).
In 17.1.2, an app pays a significant penalty if it enables logging but isn't proguarded. The rest of the numbers are probably within the error margin.
- this branch, no Proguard, no logging: 2.7s - 2.9s (raw numbers: 2774ms / 2851ms / 2690ms);
- this branch, no Proguard, with logging: 4.8s - 4.9s (raw numbers: 4992ms / 4807ms / 4939ms);
- this branch, with Proguard, no logging: 2.7s - 2.8s (raw numbers: 2802ms / 2783ms / 2710ms);
- this branch, with Proguard, with logging: 2.8s - 2.9s (raw numbers: 2881ms / 2861ms / 2795ms).
Surprisingly, for this branch, the penalty of no Proguard/logging seems even higher than 17.1.2 (but perhaps within fluctuation). If Proguard is enabled, logging doesn't add any significant difference.
@@ -1015,6 +1015,7 @@ public WatchChange decodeWatchChange(ListenResponse protoChange) {
       !version.equals(SnapshotVersion.NONE), "Got a document change without an update time");
   ObjectValue data = decodeFields(docChange.getDocument().getFieldsMap());
   Document document = new Document(key, version, data, Document.DocumentState.SYNCED);
+  document.setProto(docChange.getDocument());
For "Avoid encode" optimization: keep the original proto around.
Update: another straightforward optimization is adding a
Discussed offline. These all seem like reasonable changes with the exception of the mutable field on the document. We should preserve immutability of Document (and ensure that when creating derived versions the proto is invalidated).
@mikelehen Michael, can I assign you as a reviewer, seeing that Gil is away for the time being?
Map<DocumentKey, MaybeDocument> results = new HashMap<>();
Iterator<DocumentKey> keyIter = documentKeys.iterator();
while (keyIter.hasNext()) {
  // Make sure each key has a corresponding entry, which is null in case the document is not
Perhaps there's a better way to achieve this?
Not sure. Part of me would rather we do the db lookups first and then only insert nulls for the missing items. It feels like it matches the intention better, but it's probably a little extra code, and I don't know which way is actually more efficient.
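The alternative described above (lookups first, then nulls only for the misses) can be sketched as follows. All names here are illustrative stand-ins, not the SDK's types; both orderings produce the same map, so the choice is about readability.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the "lookups first" variant: hit the cache for every key, then
// insert null entries only for the keys that were missing, so every requested
// key ends up with an entry in the result.
public class NullEntriesSketch {
  static Map<String, String> getAllLookupFirst(Map<String, String> cache, List<String> keys) {
    Map<String, String> results = new HashMap<>();
    for (String key : keys) {
      String doc = cache.get(key);
      if (doc != null) {
        results.put(key, doc);
      }
    }
    // Second pass: nulls only for the missing keys.
    for (String key : keys) {
      if (!results.containsKey(key)) {
        results.put(key, null);
      }
    }
    return results;
  }

  public static void main(String[] args) {
    Map<String, String> cache = new HashMap<>();
    cache.put("a", "docA");
    Map<String, String> out = getAllLookupFirst(cache, List.of("a", "b"));
    assert out.get("a").equals("docA");
    assert out.containsKey("b") && out.get("b") == null;
  }
}
```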
private int queriesPerformed = 0;
private Iterator<Object> argsIter;

private static final int LIMIT = 900;
I'm concerned about off-by-one errors, but setting the limit so much below 999 is kinda superstitious... Let me know what you think.
I'm on board with 900.
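For context, the splitting that `LongQuery` performs can be sketched like this (the method and class names are illustrative, not the PR's actual implementation): SQLite caps the number of bound variables per statement at 999 by default, and 900 leaves headroom for arguments outside the `IN` list.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of splitting a long IN (...) query into subqueries with at most LIMIT
// '?' placeholders each, in the spirit of LongQuery. Names are illustrative.
public class LongQuerySketch {
  static final int LIMIT = 900;

  // Returns the SQL text of each subquery needed to cover argCount arguments.
  static List<String> subqueries(String head, String tail, int argCount) {
    List<String> queries = new ArrayList<>();
    for (int start = 0; start < argCount; start += LIMIT) {
      int count = Math.min(LIMIT, argCount - start);
      StringBuilder placeholders = new StringBuilder();
      for (int i = 0; i < count; i++) {
        placeholders.append(i == 0 ? "?" : ", ?");
      }
      queries.add(head + placeholders + tail);
    }
    return queries;
  }

  public static void main(String[] args) {
    List<String> qs =
        subqueries("SELECT contents FROM remote_documents WHERE path IN (", ")", 2000);
    assert qs.size() == 3; // 900 + 900 + 200
    assert qs.get(2).chars().filter(c -> c == '?').count() == 200;
  }
}
```

Each subquery is then executed with its corresponding slice of the argument list, and the caller merges the results.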
* The longer version of the constructor additionally takes {@code argsHead} parameter that
* contains parameters that will be reissued in each subquery, i.e. subqueries take the form:
*
* <p>[head][argsHead][an auto-generated comma-separated list of '?' placeholders][tail]
Do you think this also needs a code example?
I think you went above-and-beyond by having one for the first constructor. :-) Devs can always do find-all-references to find examples. :-)
* https://www.sqlite.org/limits.html). This class wraps most of the messy details of splitting a
* large query into several smaller ones.
*/
static class LongQuery {
This is my attempt at refactoring out the common part between two methods that have a long `IN` clause. Let me know if you think it's overengineered.
Nah, I think it's great. It avoids duplication but also simplifies the consuming code in a clean way.
* https://www.sqlite.org/limits.html). This class wraps most of the messy details of splitting a
* large query into several smaller ones.
*/
static class LongQuery {
Suggestions about the class name are very welcome.
I like it. 🤷♂️
I'm fine with the `LongQuery` name but would suggest de-nesting this and just making it a top-level class.
There are a few reasons for this suggestion:

- Consumers manipulate this type directly (`new SQLitePersistence.LongQuery(...)`)
- You're passing the `SQLitePersistence` instance explicitly; if all classes that used a `SQLitePersistence` were nested in here this class would be absurd.
- This class was already ~500 lines

$0.02
@@ -53,6 +54,21 @@ public MaybeDocument get(DocumentKey key) {
     return docs.get(key);
   }

+  @Override
+  public Map<DocumentKey, MaybeDocument> getAll(Iterable<DocumentKey> keys) {
One of the reasons I'm returning a `Map` and not an `ImmutableSortedMap` is that the latter doesn't have `keySet`/`values` methods, which are very useful.
@@ -69,20 +69,46 @@ private MaybeDocument getDocument(DocumentKey key, List<MutationBatch> inBatches
     return document;
   }

+  // Returns the view of the given {@code docs} as they would appear after applying all mutations in
Note the significant refactoring in this file.
This PR looks great to me! I've left some minor suggestions, but I'm very much on board with all of these changes and I think you did a great job of optimizing while introducing minimal complexity. Thanks!
(And also thanks for using github comments to highlight your changes / rationale / concerns... that is super helpful. :-))
* Similar to {@code #getDocuments}, but creates the local view from the given {@code baseDocs}
* without retrieving documents from the local store.
*/
ImmutableSortedMap<DocumentKey, MaybeDocument> getDocuments(
I feel like this method should have a different name (passing documents to a method called `getDocuments()` seems odd). I was going to suggest just making it a 1-parameter overload of `applyLocalMutationsToDocuments()`, but I see we do this extra "don't conflate missing / deleted" thing, which I don't actually fully understand...

So maybe `getDocumentsForBaseDocuments()`, `getFromBaseDocuments()`, or something? That's a bit awkward too though. If you're not a fan and can't think of anything better, feel free to just leave as-is.
I agree. I tentatively renamed the function to `getLocalViewOfDocuments`. If you're not a fan, I'll go with `getFromBaseDocuments`.
@@ -355,6 +362,7 @@ public SnapshotVersion getLastRemoteSnapshotVersion() {
       key,
       existingDoc.getVersion(),
       doc.getVersion());
+  changedDocs.put(key, existingDoc);
It seems like this probably isn't necessary (nothing changed). WDYT?
Hmm, the previous behavior was to add the key to `changedDocKeys` unconditionally.
Hmm, logically this seems unnecessary. However, the previous behavior added the key to `changedDocKeys` unconditionally (line 338) and then used that key set to retrieve the documents. From a glance at the code, I cannot determine whether this change would be effectively a no-op or not.
The tests pass either way, but I'm not sure we cover the situation when the server sends outdated docs. All in all, I'm a little wary about this change... What do you think?
Does https://github.com/firebase/firebase-js-sdk/blob/master/packages/firestore/test/unit/specs/listen_spec.test.ts#L228 cover this?
In general the best way to see if we have a test covering a behavior is to set a breakpoint and then check if we hit it under the debugger.
I applaud your skeptical approach to changing existing behavior, but it's also worth considering that a lot of this code was written in a hurry or has evolved organically over time. For example, we only learned that Watch even had this kind of behavior after we observed it in a bug bash. The code to defend against this was added later and it's likely we just didn't adjust the initial change computation.
In any case, our approach in the LocalStore has always been to err on the side of over-notifying because the view code is ultimately responsible for computing what has changed. This likely has no visible effect precisely because the view is discarding updates that don't net any changes, but that doesn't invalidate the logic behind changing this. We just need to avoid under-notifying--that's something for which the view can't compensate.
I'm pretty sure it's safe. Since we're not updating the cached doc, it logically makes sense not to include it in the changed docs. The result of this method is used to update our Views, and again, since we kept the existing doc, no update should be needed. And this code is exercised by the "Listens: Individual documents cannot revert" spec test.
So I'd feel comfortable removing it.
Done.
Iterator<DocumentKey> iter = keys.iterator();
while (iter.hasNext()) {
  DocumentKey key = iter.next();
`for (DocumentKey key : keys) {`?
Done.
private final String head;
private final String tail;
private final List<Object> argsHead;
private final List<Object> allArgs;
It looks like this may not be needed (it's only written and read in the constructor).
Right, the iterator takes care of that. Done.
@@ -1014,7 +1014,11 @@ public WatchChange decodeWatchChange(ListenResponse protoChange) {
   hardAssert(
       !version.equals(SnapshotVersion.NONE), "Got a document change without an update time");
   ObjectValue data = decodeFields(docChange.getDocument().getFieldsMap());
   Document document = new Document(key, version, data, Document.DocumentState.SYNCED);
+  // The document will be serialized again before being written to local storage, memoize the
+  // encoded form to avoid encoding it again.
Slight wording suggestion:
// The document may soon be re-serialized back to protos in order to store it
// in local persistence. Memoize the encoded form to avoid encoding it again.
Done, thanks.
List<DocumentKey> keys = new ArrayList<>();

Iterator<String> iter = paths.iterator();
while (iter.hasNext()) {
`for ( : )` syntax again. :)
Done.
* Creates a new {@code LongQuery} with parameters that describe a template for creating each
* subquery.
*
* <p>Each subquery will have the following form:
All of this explanatory text is great and I wonder if it'd make sense to move some of it to the class instead of on the constructor, since I think it applies to both constructors (and it would have made it easier for me to understand the class members above).
Done and slightly expanded to hopefully make more sense as a general class description.
I noticed that I was looking at performance in
Thanks! Changes look good. I do think we can remove the extra changedDocs insert, when we ignore the watch update.
I'm also still confused about the logging perf impact we're apparently seeing (both your change to suppress logging and the fact that proguard is having such a big impact on perf). Maybe we can sync up in standup real quick.
/retest

/retest
…from Android (#2140) Straightforward port of firebase/firebase-android-sdk#123.
…from Android (#1433) Straightforward port of firebase/firebase-android-sdk#123.
* Clean up FIRAuth bits from FIRApp (#2110)
* Clean up FIRAuth bits from FIRApp
* Fix tvOS sample's Auth APIs. (#2158)
* Update CHANGELOG for Firestore v0.16.1 (#2164)
* Update the name of certificates bundle (#2171) To accommodate for release 5.14.0.
* Fix format string in FSTRemoteStore error logging (#2172)
* C++: replace `FSTMaybeDocumentDictionary` with a C++ equivalent (#2139) Also eliminate most usages of `FSTDocumentKey`, remove most methods from the Objective-C class and make it just a wrapper over `DocumentKey`. The only usage that cannot be directly replaced by C++ `DocumentKey` is in `FSTFieldValue`.
* Port performance optimizations to speed up reading large collections from Android (#2140) Straightforward port of firebase/firebase-android-sdk#123.
* When searching for gRPC certificates, search the main bundle as well (#2183) When the project is manually configured, it's possible that the certificates file gets added to the main bundle, not the Firestore framework bundle; make sure the bundle can be loaded in that case as well.
* Fix Rome instructions (#2184)
* Use registerLibrary for pods in Firebase workspace (#2137)
* Add versioning to Functions and convert to FIRLibrary
* Convert Firestore to FIRLibrary
* Point travis to FirebaseCore pre-release for its deps
* Update user agent strings to match spec
* Port Memory remote document cache to C++ (#2176)
* Port Memory remote document cache to C++
* Minor tweaks to release note (#2182)
* Minor tweaks
* Update CHANGELOG.md
* Update CHANGELOG.md
* Port leveldb remote document cache to C++ (#2186)
* Port leveldb remote document cache
* Remove start from persistence interface (#2173)
* Remove start from persistence interface, switch FSTLevelDB to use a factory method that returns Status
* Fix small typos in public documentation. (#2192)
* fix the unit test #1451 (#2187)
* Port FSTRemoteDocumentCacheTest to use C++ interface (#2194)
* Release 5.15.0 (#2195)
* Update versions for Release 5.15.0
* Create 5.15.0.json
* Update CHANGELOG for Firestore v0.16.1 (#2164)
* Update the name of certificates bundle (#2171) To accommodate for release 5.14.0.
* Fix format string in FSTRemoteStore error logging (#2172)
* Update CHANGELOG.md
* Update 5.15.0.json
* Port FSTRemoteDocumentCache (#2196)
* Remove FSTRemoteDocumentCache
* Fix leaks in Firestore (#2199)
* Clean up retain cycle in FSTLevelDB.
* Explicitly CFRelease our SCNetworkReachabilityRef.
* Make gRPC stream delegates weak
* Port DocumentState and UnknownDocument. (#2160) Part of heldwriteacks. Serialization work for this is largely deferred until after nanopb-master is merged with master.
* Port FSTMemoryQueryCache to C++ (#2197)
* Port FSTLevelDBQueryCache to C++ (#2202)
* Port FSTLevelDBQueryCache to C++
* Fix Storage private imports. (#2206)
* Add missing Foundation imports to Interop headers (#2207)
* Migrate Firestore to the v1 protocol (#2200)
* Use python executable directly
* python2 is not guaranteed to exist
* scripts aren't directly executable on Windows
* Add Firestore v1 protos
* Point cmake at Firestore v1 protos
* Regenerate protobuf-related sources
* Make local protos refer to v1 protos
* fixup! Regenerate protobuf-related sources
* Remove v1beta1 protos
* s/v1beta1/v1/g in source.
* s/v1beta1/v1/ in the Xcode project
* Remove stale bug comments. This was fixed by adding an explicit FieldPath API rather than exposing escaping to the end user.
* Add SymbolCollisionTest comment for ARCore (#2210)
* Continue work on ReferenceSet (#2213)
* Migrate FSTDocumentReference to C++
* Change SortedSet template parameter ordering Makes it easier to specify a comparator without specifying what the empty member of the underlying map is.
* Migrate MemoryMutationQueue to C++ references by key
* migrate.py
* CMake
* Finish porting ReferenceSet
* Swap reference set implementation
* Port MemoryQueryCache to use ported ReferenceSet
* Port FSTReferenceSetTest
* Port usage for limbo document refs
* Port LRU and LocalStore usages
* Remove FSTReferenceSet and FSTDocumentReference
* Style
* Add newline
* Implement QueryCache interface and port QueryCache tests (#2209)
* Implement QueryCache interface and port tests
* Port production usages of QueryCache (#2211)
* Remove FSTQueryCache and implementations
* Switch size() to size_t
* Keep imports consistent (#2217)
* Fix private tests by removing unnecessary storyboard entries (#2218)
* Fix xcode 9 build of FDLBuilderTestAppObjC (#2219)
* Rework FieldMask to use a (ordered) set of FieldPaths (#2136) Rather than a vector. Port of firebase/firebase-android-sdk#137
* Travis to Xcode 10.1 and clang-format to 8.0.0 (tags/google/stable/2018-08-24) (#2222)
* Travis to Xcode 10.1
* Update to clang-format 8
* Update clang-format homebrew link
* Work around space in filename style.sh issue
* Initial structural work for the google data logger. (#2162)
* Inital commit
* Remove Example project and replace with test_spec in the podspec
* Update gitignore to ignore the generated folder.
* Add a script to generate the project.
* Add some basic structure and tests.
* Remove unnecessary files and address PR feedback. Removes .gitkeep, .gitignore, and .travis.yml files. Modifies the root .gitignore to ignore files generated by cocoapod-generate. Modifies the root .travis.yml to add this podspec to CI. Updates the README with some instructions.
* Adding googledatalogger branch to travis CI
* Adding copyrights to files that were missing them
* Move GDLLogTransformer to the public header directory An alternative is to set CLANG_ALLOW_NON_MODULAR_INCLUDES_IN_FRAMEWORK_MODULES = 'YES', but I'm not sure this will work when publishing the pod.
* Add additional base infrastructure for the logging client (#2174)
* Generalize the concept of a logSource Rename and change the type of the 'log source' to be more appropriately generalized as a log mapping identifier string.
* Expand the API of the logger.
* Add infrastructure for log storage.
* Add infrastructure for the log writer.
* Remove an unnecessary comment.
* Style fixes applied
* Change a missed assert message to make more sense.
* Flesh out the log event and log writer classes (#2175)
* Add timekeeping infrastructure.
* Add the log proto protocol.
* Flesh out the log event a bit.
* Flesh out the log writer.
* Put in comments for the log proto protocol.
* Move queue to a private header and update the TODO.
* Add comment about the QoS tier
* Fix style
* Enable travis for GoogleDataLogger using cocoapods-generate (#2185)
* Add logTarget as a property to GDLLogEvent and connect the logger to the writer.
* Enabled building and testing GoogleDataLogger in travis using cocoapods-generate
* Update Gemfile.lock
* Revert "Add logTarget as a property to GDLLogEvent and connect the logger to the writer." This reverts commit cce26d3.
* Fix the workspace path.
* Add xcpretty gem
* Add the test directive to the GoogleDataLogger invocation
* Refactor GoogleDataLogger into its own section Also remove GoogleDataLogger from Xcode9.4 pod lib linting, because the failure was not reproducible.
* Create a log wrapper and better test GDLLogWriter (#2190)
* Add logTarget as a property to GDLLogEvent and connect the logger to the writer.
* Create a log wrapper for use with GULLogger.
* GDLLogTransformer should inherit <NSObject> and require transform:
* Protect against doesNotRespond exceptions and expand tests
* Style and a missing @param.
* Update a comment
* Implement NSSecureCoding protocol for GDLLogEvent (#2191)
* Implement NSSecureCodingProtocol for GDLLogEvent
* Style changes.
* Refactor to address some comments and structure GDLLogEvent and GDLLogProto are moved to the public folder. GDLLogEvent has had some public API moved to a private header. GDLLogWriter now writes the extension object to data after transforming, for serialization purposes. Various headers updated to conform to module header rules.
* Create some core infrastructure for backends (#2198)
* s/GDLLogClock/GDLClock/ This isn't a class of log clocks, it's a class of clocks.
* Create some core infrastructure to support backends and prioritization of logs.
* Docs and slight changes to the scorer API.
* Missing return statement
* Change 'score' terminology to 'prioritize'. Also style issues.
* Change the protocol being used for a prioritizer.
* Implement -hash and -copy of GDLLogEvent, copy on log, and don't assign extensionBytes in log writer (#2204)
* Implement -hash and -copy of GDLLogEvent Also implements a custom setter for setting the extension that changes the default behavior to set extensionBytes upon assignment of extension. Copy the log upon logging, as the comments promised. Remove setting extensionBytes in the log writer. Implement a missing method
* Copy the log object upon logging
* Don't assign extensionBytes in the log writer
* Make an implicit loss of precision explicit.
* Add a comment on performance
* Add some test helpers and structure for new classes (#2212)
* Test helpers for GDLBackend and GDLLogPrioritizer
* Add shared uploader structure
* Implement some stubbed methods, update umbrella header, add missing test (#2214)
* Add missing test
* Implement some stubbed methods, update the umbrella header
* Implement log storage (#2215)
* Implement log storage Includes tests and categories on existing classes to add testing functionality
* Better error handling in tye log storage
* Style and pod lib lint fixes
* Add missing comment
* Implement NSSecureCoding for GDLLogStorage (#2216)
* Implement NSSecureCoding for GDLLogStorage
* Fix style
* Rename variable
* merge master into googledatalogger branch (#2224)
* Create testing infrastructure that simplifies catching exceptions in dispatch_queues (#2226)
* Apply updated clang-format
* Add a custom assert and use it instead of NSAssert
* Define a shared unit test test class and change unit tests to use it
* Add the GDLAssertHelper to be used by GDLAsserts
* Change copyright year, style, and only define the assert macro body if !defined(NS_BLOCK_ASSERTIONS)
* Remove rvm specification from travis (#2227)
* Implement additional tests and enhance GDLLogEvent (#2231)
* Move qosTier to the public API and add a custom prioritization dict
* Set the default qosTier in each logging API
* Change a missing transform: impl to an error, rearrange error enums We can only rearrange the enums because we've not shipped anything yet.
* Implement additional tests
* Remove extra space Damned flat macbook keyboards.
* Refactor to allow injection of fakes, and take warnings seriously (#2232)
* Create fakes that can be used during unit tests
* Create a private header for the logger
* All log storage to be injected into the log writer, and now give logs to log storage. Also changes the tests to use the fakes
* Treat all warnings as errors, and warn pedantic
* Ok nevermind, don't warn_pedantic.
* remove trailing comma
* Remove obsolete TODOs Not needed, because a fake is being used.
* Add fakes and injection to log storage for the uploader, implement a fast qos test
* Move all unit tests to the Tests/Unit folder (#2234)
* s/GDLUploader->GDLUploadCoordinator/g and s/GDLLogBackend/GDLLogUploader (#2256)
* Implement a clock. (#2273)
* Implement a clock. Files to pay attention to: GDLClock.h/m
* style
* Enhance the log storage class (#2275)
* Rename fields related to the previous notion of 'backends' to 'uploaders'. Also changes the declaration of the uploader callback block.
* Add the ability to delete a set of logs by filename, and a conversion method for hashes to files.
* Change the log storage removeLogs method to be log hashes instead of URLS
* Style, and change the completion block to include an error
* Change to sync, since async to adding more async created race condition opportunity.
* Test new storage methods
* Fix coordinator method declarations
* Add test-related functionality, make GDLRegistrar thread-safe and tested (#2285)
* Add some functionality to the test prioritizer
* Change the registrar's API and make it thread safe.
* Add an error message enum for a failed upload
* Add more functionality to the log storage fake
* Make a property readonly.
* Implement the upload coordinator (#2290)
* Implement the upload coordinator This is a thread safe class to manage the different GDLUploader implementations.
* Remove a bad comment
* Spelling
* Code cleanup (#2297) Add some nullability specifiers, remove a test that won't compile, change the pod gen script.
* Update podspec and factor out common test sources (#2336)
* Update podspec and factor out common test sources
* Add a wifi-only QoS specifier
* Change the prioritizer protocol to include upload conditions
* Remove an unused log target.
* Call unprioritizeLog and remove an assert that wasn't helpful
* Put the upload completionBlock on the uploader queue
* Fix the CI and podspec.
* [DO NOT MERGE TO MASTER] Raise the cocoapods version to 1.6.0.rc.2
* [DO NOT MERGE TO MASTER] Update Gemfile correctly
* [DO NOT MERGE TO MASTER] Use the tag, not the version number.
* Correct an incorrect commit
* Remove the name for standard unit tests
* Implement an integration/E2E test of the logging pipeline (#2356)
* Move the -removeLog API to be file-private, it's unused publicly.
* Remove altering of in flight log set, that's done in the onComplete block
* Copy the set of logs given to upload so it's not altered while the pipeline is operating on it.
* Implement an integration/E2E test of the library's pipeline Includes a dependency on GCDWebServer in the test_spec to run an HTTP server endpoint that the uploader can upload to.
* Rename -protoBytes to -transportBytes
* Change the integration test timing
* Spelling.
* Fix the scheme names in build.sh
* Change cocoapods version from 1.6.0.rc.2 to 1.6.0. (#2358)
* Change cocoapods version from 1.6.0.rc.2 to 1.6.0.
* Update Gemfile.lock
* Rename googledatalogger to GoogleDataTransport (#2379)
* Rename GoogleDataLogger to GoogleDataTransport All files should be renamed, GDL prefixes should now be GDT.
* Remove references to logging and replace with notion of 'transporting'
* Style, and cleaning up a few more references
* Change travis config to googledatatransport instead of googledatalogger
* Add 'upload packages' to allow prioritizers to pass arbitrary data to uploaders (#2470)
* Update some comments and move the event clock snapshot to the public header
* Create the notion of an 'upload package' This will allow prioritizers to pass arbitrary data to the uploader that might be needed at upload time.
* Make the -transportBytes protocol method required.
* Make the rest of the framework use the upload package
* Style
* Remove cct.nanopb.c It was accidentally added.
* Implement a stored event object to simplify in-memory storage (#2497)
* Implement a stored event object to simplify in-memory storage This will make passing data to the prioritizers and uploaders easier, and it significantly simplifies logic in the storage system while reducing memory footprint.
* Remove two files that always seem to sneak in somehow.
* Style and a needed cast
* Lay the groundwork for CCT support, and include the prioritizer imple… (#2602)
* Lay the groundwork for CCT support, and include the prioritizer implementation.
* Style
* Adjust a flaky test.
* Add GoogleDataTransportCCTSupport configs to travis
* Fix small issues with travis configs
* Fix syntax error in build.sh
* Add SpecsStaging as a source for generation of GDTCCT
* Address shortening warning
* Remove headers that were unintentionally committed.
* Make the podspec description != to the summary
* Spelling
* Expose all properties of a GDTClock, better -hash and -isEqual impls (#2632)
* Expose all properties of a GDTClock, add -hash and -isEqual implementations for GDTStoredEvent
* Style, remove unused define
* Change GDTStoredEvent -isEqual implementation
* Remove unintentionally committed files
* Implement the CCT proto using nanopb (#2652)
* Write scripts to help generate the CCT proto
* Expand the event generator to generate consistent events
* Add the CCT proto and generator options
* Add the nanopb generated sources and nanopb helpers
* Ignore the proto-related directories in GDTCCTSupport
* Fix whitespace
* Clean up generate_cct_protos.sh Use git instead of curl, rename zip to tar.gz, use readonly variables and proper variable expansion
* Address review comments Check return of pb_decode correctly. Use more arbitrary numbers for test event data. Reimplement GDTCCTEncodeString more intelligently. Use better initial values for messages.
* Fix memory leak in tests
* Use FOUNDATION_EXTERN consistently
* Fix test name and always initialize nanopb vars.
* Change FOUNDATION_EXTERN to FOUNDATION_EXPORT, fix missing assert code
* Reinit a corrupted response to the default
* Populate an error if decoding the response fails.
* Make high priority events force an upload via upload conditions (#2699)
* Make high priority events force an upload via upload conditions Forced uploads should be defined as an additional upload condition, as opposed to constructing an ad-hoc upload package. This allows the prioritizer and uploader to do housekeeping more easily. Address a race condition during high-priority event sending in which a package can be asked for whilst the storage is still asking the prioritizer to deprioritize events that have already been removed from disk.
* Make _timeMillis in GDTClock be the actual unix time
* Style
* Implement the CCT prioritizer and an integration test (#2701)
* Implement the CCT prioritizer
* Add support for high priority uploads
* Add an integration test, demonstrating usage of CCT
* Update the podspec
* Change library podspecs to require 1.5.3, and change old tag string (#2702)
* Remove the GULLogger dependency (#2703)
* Remove the GULLogger dependency Wrote a quick console logging function that will log to the console in debug mode.
* Style
* Remove the .proto and .options files from the cocoapod (#2708)
* Remove the .proto and .options files from the cocoapod
* Update check_whitespace
* Implement app-lifecycle groundwork (#2800)
* Create the lifecycle class
* Add running in background properties and implement archive paths for the stateful classes.
* Move -stopTimer into the upload coordinator
* Add test for encoding and decoding the storage singleton.
* Add a test transport target.
* Style
* Temporarily commenting out sending lifecycle events.
* More style.
* Demonstrate that I know what year I made this file
* Implement app lifecycle reactivity. (#2801)
* Implement app lifecycle reactivity.
* Re-enable the lifecycle code
* Implement reachability and upload conditions (#2826)
* Add a reachability implementation
* Add a reachability private header
* Define more useful upload conditions and start at 1 instead of 0
* Implement determining upload conditions
* Add hooks for lifecycle events to the CCT implementation.
* Style
* Lower cocoapods required version to 1.4.0 and remove googledatatransport branch from travis config
* Add travis_retry for GoogleDataTransport unit tests. The E2E integration tests are a bit flaky.
* Initial structural work for the google data logger. (#2162) * Inital commit * Remove Example project and replace with test_spec in the podspec * Update gitignore to ignore the generated folder. * Add a script to generate the project. * Add some basic structure and tests. * Remove unnecessary files and address PR feedback. Removes .gitkeep, .gitignore, and .travis.yml files Modifies the root .gitignore to ignore files generated by cocoapod-generate Modifies the root .travis.yml to add this podspec to CI Updates the README with some instructions * Adding googledatalogger branch to travis CI * Adding copyrights to files that were missing them * Move GDLLogTransformer to the public header directory An alternative is to set CLANG_ALLOW_NON_MODULAR_INCLUDES_IN_FRAMEWORK_MODULES = 'YES', but I'm not sure this will work when publishing the pod. * Add additional base infrastructure for the logging client (#2174) * Generalize the concept of a logSource Rename and change the type of the 'log source' to be more appropriately generalized as a log mapping identifier string. * Expand the API of the logger. * Add infrastructure for log storage. * Add infrastructure for the log writer. * Remove an unnecessary comment. * Style fixes applied * Change a missed assert message to make more sense. * Flesh out the log event and log writer classes (#2175) * Add timekeeping infrastructure. * Add the log proto protocol. * Flesh out the log event a bit. * Flesh out the log writer. * Put in comments for the log proto protocol. * Move queue to a private header and update the TODO. * Add comment about the QoS tier * Fix style * Enable travis for GoogleDataLogger using cocoapods-generate (#2185) * Add logTarget as a property to GDLLogEvent and connect the logger to the writer. * Enabled building and testing GoogleDataLogger in travis using cocoapods-generate * Update Gemfile.lock * Revert "Add logTarget as a property to GDLLogEvent and connect the logger to the writer." 
This reverts commit cce26d3. * Fix the workspace path. * Add xcpretty gem * Add the test directive to the GoogleDataLogger invocation * Refactor GoogleDataLogger into its own section Also remove GoogleDataLogger from Xcode9.4 pod lib linting, because the failure was not reproducible. * Create a log wrapper and better test GDLLogWriter (#2190) * Add logTarget as a property to GDLLogEvent and connect the logger to the writer. * Create a log wrapper for use with GULLogger. * GDLLogTransformer should inherit <NSObject> and require transform: * Protect against doesNotRespond exceptions and expand tests * Style and a missing @param. * Update a comment * Implement NSSecureCoding protocol for GDLLogEvent (#2191) * Implement NSSecureCodingProtocol for GDLLogEvent * Style changes. * Refactor to address some comments and structure GDLLogEvent and GDLLogProto are moved to the public folder. GDLLogEvent has had some public API moved to a private header. GDLLogWriter now writes the extension object to data after transforming, for serialization purposes. Various headers updated to conform to module header rules. * Create some core infrastructure for backends (#2198) * s/GDLLogClock/GDLClock/ This isn't a class of log clocks, it's a class of clocks. * Create some core infrastructure to support backends and prioritization of logs. * Docs and slight changes to the scorer API. * Missing return statement * Change 'score' terminology to 'prioritize'. Also style issues. * Change the protocol being used for a prioritizer. * Implement -hash and -copy of GDLLogEvent, copy on log, and don't assign extensionBytes in log writer (#2204) * Implement -hash and -copy of GDLLogEvent Also implements a custom setter for setting the extension that changes the default behavior to set extensionBytes upon assignment of extension. Copy the log upon logging, as the comments promised. Remove setting extensionBytes in the log writer. 
Implement a missing method * Copy the log object upon logging * Don't assign extensionBytes in the log writer * Make an implicit loss of precision explicit. * Add a comment on performance * Add some test helpers and structure for new classes (#2212) * Test helpers for GDLBackend and GDLLogPrioritizer * Add shared uploader structure * Implement some stubbed methods, update umbrella header, add missing test (#2214) * Add missing test * Implement some stubbed methods, update the umbrella header * Implement log storage (#2215) * Implement log storage Includes tests and categories on existing classes to add testing functionality * Better error handling in tye log storage * Style and pod lib lint fixes * Add missing comment * Implement NSSecureCoding for GDLLogStorage (#2216) * Implement NSSecureCoding for GDLLogStorage * Fix style * Rename variable * merge master into googledatalogger branch (#2224) * Clean up FIRAuth bits from FIRApp (#2110) * Clean up FIRAuth bits from FIRApp * Fix tvOS sample's Auth APIs. (#2158) * Update CHANGELOG for Firestore v0.16.1 (#2164) * Update the name of certificates bundle (#2171) To accommodate for release 5.14.0. * Fix format string in FSTRemoteStore error logging (#2172) * C++: replace `FSTMaybeDocumentDictionary` with a C++ equivalent (#2139) Also eliminate most usages of `FSTDocumentKey`, remove most methods from the Objective-C class and make it just a wrapper over `DocumentKey`. The only usage that cannot be directly replaced by C++ `DocumentKey` is in `FSTFieldValue`. * Port performance optimizations to speed up reading large collections from Android (#2140) Straightforward port of firebase/firebase-android-sdk#123. * When searching for gRPC certificates, search the main bundle as well (#2183) When the project is manually configured, it's possible that the certificates file gets added to the main bundle, not the Firestore framework bundle; make sure the bundle can be loaded in that case as well. 
* Fix Rome instructions (#2184) * Use registerLibrary for pods in Firebase workspace (#2137) * Add versioning to Functions and convert to FIRLibrary * Convert Firestore to FIRLibrary * Point travis to FirebaseCore pre-release for its deps * Update user agent strings to match spec * Port Memory remote document cache to C++ (#2176) * Port Memory remote document cache to C++ * Minor tweaks to release note (#2182) * Minor tweaks * Update CHANGELOG.md * Update CHANGELOG.md * Port leveldb remote document cache to C++ (#2186) * Port leveldb remote document cache * Remove start from persistence interface (#2173) * Remove start from persistence interface, switch FSTLevelDB to use a factory method that returns Status * Fix small typos in public documentation. (#2192) * fix the unit test #1451 (#2187) * Port FSTRemoteDocumentCacheTest to use C++ interface (#2194) * Release 5.15.0 (#2195) * Update versions for Release 5.15.0 * Create 5.15.0.json * Update CHANGELOG for Firestore v0.16.1 (#2164) * Update the name of certificates bundle (#2171) To accommodate for release 5.14.0. * Fix format string in FSTRemoteStore error logging (#2172) * Update CHANGELOG.md * Update 5.15.0.json * Port FSTRemoteDocumentCache (#2196) * Remove FSTRemoteDocumentCache * Fix leaks in Firestore (#2199) * Clean up retain cycle in FSTLevelDB. * Explicitly CFRelease our SCNetworkReachabilityRef. * Make gRPC stream delegates weak * Port DocumentState and UnknownDocument. (#2160) Part of heldwriteacks. Serialization work for this is largely deferred until after nanopb-master is merged with master. * Port FSTMemoryQueryCache to C++ (#2197) * Port FSTLevelDBQueryCache to C++ (#2202) * Port FSTLevelDBQueryCache to C++ * Fix Storage private imports. 
(#2206) * Add missing Foundation imports to Interop headers (#2207) * Migrate Firestore to the v1 protocol (#2200) * Use python executable directly * python2 is not guaranteed to exist * scripts aren't directly executable on Windows * Add Firestore v1 protos * Point cmake at Firestore v1 protos * Regenerate protobuf-related sources * Make local protos refer to v1 protos * fixup! Regenerate protobuf-related sources * Remove v1beta1 protos * s/v1beta1/v1/g in source. * s/v1beta1/v1/ in the Xcode project * Remove stale bug comments. This was fixed by adding an explicit FieldPath API rather than exposing escaping to the end user. * Add SymbolCollisionTest comment for ARCore (#2210) * Continue work on ReferenceSet (#2213) * Migrate FSTDocumentReference to C++ * Change SortedSet template parameter ordering Makes it easier to specify a comparator without specifying what the empty member of the underlying map is. * Migrate MemoryMutationQueue to C++ references by key * migrate.py * CMake * Finish porting ReferenceSet * Swap reference set implementation * Port MemoryQueryCache to use ported ReferenceSet * Port FSTReferenceSetTest * Port usage for limbo document refs * Port LRU and LocalStore usages * Remove FSTReferenceSet and FSTDocumentReference * Style * Add newline * Implement QueryCache interface and port QueryCache tests (#2209) * Implement QueryCache interface and port tests * Port production usages of QueryCache (#2211) * Remove FSTQueryCache and implementations * Switch size() to size_t * Keep imports consistent (#2217) * Fix private tests by removing unnecessary storyboard entries (#2218) * Fix xcode 9 build of FDLBuilderTestAppObjC (#2219) * Rework FieldMask to use a (ordered) set of FieldPaths (#2136) Rather than a vector. 
Port of firebase/firebase-android-sdk#137 * Travis to Xcode 10.1 and clang-format to 8.0.0 (tags/google/stable/2018-08-24) (#2222) * Travis to Xcode 10.1 * Update to clang-format 8 * Update clang-format homebrew link * Work around space in filename style.sh issue * Create testing infrastructure that simplifies catching exceptions in dispatch_queues (#2226) * Apply updated clang-format * Add a custom assert and use it instead of NSAssert * Define a shared unit test test class and change unit tests to use it * Add the GDLAssertHelper to be used by GDLAsserts * Change copyright year, style, and only define the assert macro body if !defined(NS_BLOCK_ASSERTIONS) * Remove rvm specification from travis (#2227) * Implement additional tests and enhance GDLLogEvent (#2231) * Move qosTier to the public API and add a custom prioritization dict * Set the default qosTier in each logging API * Change a missing transform: impl to an error, rearrange error enums We can only rearrange the enums because we've not shipped anything yet. * Implement additional tests * Remove extra space Damned flat macbook keyboards. * Refactor to allow injection of fakes, and take warnings seriously (#2232) * Create fakes that can be used during unit tests * Create a private header for the logger * All log storage to be injected into the log writer, and now give logs to log storage. Also changes the tests to use the fakes * Treat all warnings as errors, and warn pedantic * Ok nevermind, don't warn_pedantic. * remove trailing comma * Remove obsolete TODOs Not needed, because a fake is being used. * Add fakes and injection to log storage for the uploader, implement a fast qos test * Move all unit tests to the Tests/Unit folder (#2234) * s/GDLUploader->GDLUploadCoordinator/g and s/GDLLogBackend/GDLLogUploader (#2256) * Implement a clock. (#2273) * Implement a clock. 
Files to pay attention to: GDLClock.h/m * style * Enhance the log storage class (#2275) * Rename fields related to the previous notion of 'backends' to 'uploaders'. Also changes the declaration of the uploader callback block. * Add the ability to delete a set of logs by filename, and a conversion method for hashes to files. * Change the log storage removeLogs method to be log hashes instead of URLS * Style, and change the completion block to include an error * Change to sync, since async to adding more async created race condition opportunity. * Test new storage methods * Fix coordinator method declarations * Add test-related functionality, make GDLRegistrar thread-safe and tested (#2285) * Add some functionality to the test prioritizer * Change the registrar's API and make it thread safe. * Add an error message enum for a failed upload * Add more functionality to the log storage fake * Make a property readonly. * Implement the upload coordinator (#2290) * Implement the upload coordinator This is a thread safe class to manage the different GDLUploader implementations. * Remove a bad comment * Spelling * Code cleanup (#2297) Add some nullability specifiers, remove a test that won't compile, change the pod gen script. * Update podspec and factor out common test sources (#2336) * Update podspec and factor out common test sources * Add a wifi-only QoS specifier * Change the prioritizer protocol to include upload conditions * Remove an unused log target. * Call unprioritizeLog and remove an assert that wasn't helpful * Put the upload completionBlock on the uploader queue * Fix the CI and podspec. * [DO NOT MERGE TO MASTER] Raise the cocoapods version to 1.6.0.rc.2 * [DO NOT MERGE TO MASTER] Update Gemfile correctly * [DO NOT MERGE TO MASTER] Use the tag, not the version number. 
  * Correct an incorrect commit
  * Remove the name for standard unit tests
* Implement an integration/E2E test of the logging pipeline (#2356)
  * Move the -removeLog API to be file-private; it's unused publicly.
  * Remove altering of the in-flight log set; that's done in the onComplete block
  * Copy the set of logs given to upload so it's not altered while the pipeline is operating on it.
  * Implement an integration/E2E test of the library's pipeline. Includes a dependency on GCDWebServer in the test_spec to run an HTTP server endpoint that the uploader can upload to.
  * Rename -protoBytes to -transportBytes
  * Change the integration test timing
  * Spelling.
  * Fix the scheme names in build.sh
* Change cocoapods version from 1.6.0.rc.2 to 1.6.0. (#2358)
  * Change cocoapods version from 1.6.0.rc.2 to 1.6.0.
  * Update Gemfile.lock
* Rename googledatalogger to GoogleDataTransport (#2379)
  * Rename GoogleDataLogger to GoogleDataTransport. All files should be renamed; GDL prefixes should now be GDT.
  * Remove references to logging and replace with the notion of 'transporting'
  * Style, and cleaning up a few more references
  * Change travis config to googledatatransport instead of googledatalogger
* Add 'upload packages' to allow prioritizers to pass arbitrary data to uploaders (#2470)
  * Update some comments and move the event clock snapshot to the public header
  * Create the notion of an 'upload package'. This will allow prioritizers to pass arbitrary data to the uploader that might be needed at upload time.
  * Make the -transportBytes protocol method required.
  * Make the rest of the framework use the upload package
  * Style
  * Remove cct.nanopb.c; it was accidentally added.
* Implement a stored event object to simplify in-memory storage (#2497)
  * Implement a stored event object to simplify in-memory storage. This will make passing data to the prioritizers and uploaders easier, and it significantly simplifies logic in the storage system while reducing memory footprint.
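The "copy the set of logs given to upload" change above is a defensive-copy pattern: the pipeline iterates over its own snapshot, so a caller mutating the original set mid-upload cannot corrupt the in-flight batch. A sketch of the idea, with a plain C array of hypothetical event IDs standing in for the Objective-C set:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Snapshot the batch of event IDs before the pipeline starts working on
 * it, so later mutation of the caller's array cannot affect the upload. */
static int *copy_batch(const int *ids, size_t count) {
  int *snapshot = malloc(count * sizeof *snapshot);
  if (snapshot != NULL) memcpy(snapshot, ids, count * sizeof *snapshot);
  return snapshot; /* caller frees */
}
```

In Objective-C the equivalent one-liner is copying the incoming NSSet before handing it to the upload machinery; the point is the same either way.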
  * Remove two files that always seem to sneak in somehow.
  * Style and a needed cast
* Lay the groundwork for CCT support, and include the prioritizer imple… (#2602)
  * Lay the groundwork for CCT support, and include the prioritizer implementation.
  * Style
  * Adjust a flaky test.
  * Add GoogleDataTransportCCTSupport configs to travis
  * Fix small issues with travis configs
  * Fix syntax error in build.sh
  * Add SpecsStaging as a source for generation of GDTCCT
  * Address shortening warning
  * Remove headers that were unintentionally committed.
  * Make the podspec description != to the summary
  * Spelling
* Expose all properties of a GDTClock, better -hash and -isEqual impls (#2632)
  * Expose all properties of a GDTClock; add -hash and -isEqual implementations for GDTStoredEvent
  * Style, remove unused define
  * Change the GDTStoredEvent -isEqual implementation
  * Remove unintentionally committed files
* Implement the CCT proto using nanopb (#2652)
  * Write scripts to help generate the CCT proto
  * Expand the event generator to generate consistent events
  * Add the CCT proto and generator options
  * Add the nanopb-generated sources and nanopb helpers
  * Ignore the proto-related directories in GDTCCTSupport
  * Fix whitespace
  * Clean up generate_cct_protos.sh: use git instead of curl, rename zip to tar.gz, use readonly variables and proper variable expansion
  * Address review comments: check the return of pb_decode correctly, use more arbitrary numbers for test event data, reimplement GDTCCTEncodeString more intelligently, and use better initial values for messages.
  * Fix memory leak in tests
  * Use FOUNDATION_EXTERN consistently
  * Fix test name and always initialize nanopb vars.
  * Change FOUNDATION_EXTERN to FOUNDATION_EXPORT; fix missing assert code
  * Reinit a corrupted response to the default
  * Populate an error if decoding the response fails.
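Several of the nanopb commits above revolve around turning string data into the length-prefixed byte buffers nanopb expects. The sketch below mirrors the shape of nanopb's pb_bytes_array_t locally so it compiles without nanopb itself; treat both the struct layout and the GDTCCTEncodeString-style helper as illustrative, not the library's actual definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Length-prefixed byte buffer in the spirit of nanopb's pb_bytes_array_t,
 * declared here only so the sketch is self-contained. */
typedef struct {
  size_t size;
  uint8_t bytes[1]; /* allocation extends past the end of the struct */
} bytes_array;

/* Copy a NUL-terminated string into a freshly allocated byte buffer,
 * roughly what a GDTCCTEncodeString-style helper does. */
static bytes_array *encode_string(const char *string) {
  size_t len = strlen(string);
  bytes_array *array = malloc(offsetof(bytes_array, bytes) + len);
  if (array == NULL) return NULL;
  array->size = len;
  memcpy(array->bytes, string, len);
  return array; /* caller frees */
}
```

Allocating the header and payload in one block is also why "check the return of pb_decode correctly" matters: a failed decode can leave such buffers half-initialized, which is exactly what the "reinit a corrupted response to the default" commit guards against.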
* Make high priority events force an upload via upload conditions (#2699)
  * Make high priority events force an upload via upload conditions. Forced uploads should be defined as an additional upload condition, as opposed to constructing an ad-hoc upload package; this allows the prioritizer and uploader to do housekeeping more easily. Also addresses a race condition during high-priority event sending in which a package can be asked for while the storage is still asking the prioritizer to deprioritize events that have already been removed from disk.
  * Make _timeMillis in GDTClock be the actual unix time
  * Style
* Implement the CCT prioritizer and an integration test (#2701)
  * Implement the CCT prioritizer
  * Add support for high-priority uploads
  * Add an integration test demonstrating usage of CCT
  * Update the podspec
* Change library podspecs to require 1.5.3, and change old tag string (#2702)
* Remove the GULLogger dependency (#2703)
  * Remove the GULLogger dependency. Wrote a quick console logging function that will log to the console in debug mode.
  * Style
* Remove the .proto and .options files from the cocoapod (#2708)
  * Remove the .proto and .options files from the cocoapod
  * Update check_whitespace
* Implement app-lifecycle groundwork (#2800)
  * Create the lifecycle class
  * Add running-in-background properties and implement archive paths for the stateful classes.
  * Move -stopTimer into the upload coordinator
  * Add a test for encoding and decoding the storage singleton.
  * Add a test transport target.
  * Style
  * Temporarily comment out sending lifecycle events.
  * More style.
  * Demonstrate that I know what year I made this file
* Implement app lifecycle reactivity. (#2801)
  * Implement app lifecycle reactivity.
  * Re-enable the lifecycle code
* Implement reachability and upload conditions (#2826)
  * Add a reachability implementation
  * Add a reachability private header
  * Define more useful upload conditions, starting at 1 instead of 0
  * Implement determining upload conditions
  * Add hooks for lifecycle events to the CCT implementation.
  * Style
* Lower the required cocoapods version to 1.4.0 and remove the googledatatransport branch from the travis config
* Add travis_retry for GoogleDataTransport unit tests; the E2E integration tests are a bit flaky.
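The "start at 1 instead of 0" detail above matters for OR-able condition flags: a flag whose value is 0 can never be detected with a bitwise AND, so 0 is best reserved for "conditions unknown". A sketch of that pattern, folding in the earlier #2699 idea that high-priority events force an upload; the names are hypothetical, not GDT's actual enum:

```c
#include <assert.h>

/* OR-able upload conditions; the real flags start at 1 so each one sets
 * a distinct bit, leaving 0 free to mean "conditions unknown". */
typedef enum {
  condition_unknown = 0,
  condition_wifi = 1 << 0,
  condition_mobile_data = 1 << 1,
  condition_high_priority = 1 << 2, /* forced upload, per the #2699 change */
} upload_conditions;

/* Decide whether an uploader may run given current and required flags. */
static int can_upload(upload_conditions current, upload_conditions required) {
  if (current & condition_high_priority) return 1; /* always force */
  return (current & required) != 0;
}
```

Expressing the forced upload as just another condition bit is what lets the prioritizer and uploader share one code path instead of building an ad-hoc package for high-priority events.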