Spelling fixes #27

Merged
merged 1 commit on Jul 30, 2016
2 changes: 1 addition & 1 deletion doc/source/algorithm.rst
@@ -92,6 +92,6 @@ Future work

Another very promising option is that streaming of delta data is indeed possible. Depending on the configuration of the copy-from-base operations, different optimizations could be applied to reduce the amount of memory required for the final processed delta stream. Some configurations may even allow it to stream data from the base buffer, instead of pre-loading it for random access.

-The ability to stream files at reduced memory costs would only be feasible for big files, and would have to be payed with extra pre-processing time.
+The ability to stream files at reduced memory costs would only be feasible for big files, and would have to be paid with extra pre-processing time.

A very first and simple implementation could avoid memory peaks by streaming the TDS in conjunction with a base buffer, instead of writing everything into a fully allocated target buffer.
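The streaming idea described in this passage can be made concrete with a small sketch: instead of resolving the delta into a fully allocated target buffer, a resolver can yield the target chunk by chunk, reading copy-from-base ranges on demand. The function name and opcode shapes below are illustrative assumptions, not gitdb's actual delta API.

def apply_delta_streaming(base, ops):
    """Yield the target data chunk by chunk instead of allocating it fully.

    base -- a random-access bytes-like base buffer
    ops  -- an iterable of ('copy', offset, size) or ('add', data) operations
    """
    # hypothetical sketch only: gitdb's real delta machinery differs
    for op in ops:
        if op[0] == 'copy':
            _, offset, size = op
            # read the copied range from the base buffer on demand,
            # avoiding a memory peak for big target files
            yield base[offset:offset + size]
        else:
            # literal data carried inline in the delta stream
            yield op[1]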
2 changes: 1 addition & 1 deletion doc/source/changes.rst
@@ -8,7 +8,7 @@ Changelog

* Fixed possibly critical error, see https://github.com/gitpython-developers/GitPython/issues/220

-- However, it only seems to occour on high-entropy data and didn't reoccour after the fix
+- However, it only seems to occur on high-entropy data and didn't reoccur after the fix

*****
0.6.0
2 changes: 1 addition & 1 deletion gitdb/db/base.py
@@ -177,7 +177,7 @@ def _db_query(self, sha):
""":return: database containing the given 20 byte sha
:raise BadObject:"""
# most databases use binary representations, prevent converting
-# it everytime a database is being queried
+# it every time a database is being queried
try:
return self._db_cache[sha]
except KeyError:
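The hunk ends just inside the cache-miss branch. For orientation, such a lookup typically falls through to a linear probe over the contained databases; the sketch below is a hedged reconstruction, and the _databases() helper is an assumed name, not gitdb's verified internals.

from gitdb.exc import BadObject  # real gitdb exception type

def _db_query(self, sha):
    """:return: database containing the given 20 byte sha
    :raise BadObject:"""
    try:
        # cache hit: no conversion or probing per query
        return self._db_cache[sha]
    except KeyError:
        pass
    # cache miss: probe each backend once and remember the winner
    for db in self._databases():  # assumed accessor for the child databases
        if db.has_object(sha):
            self._db_cache[sha] = db
            return db
    raise BadObject(sha)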
2 changes: 1 addition & 1 deletion gitdb/db/loose.py
@@ -174,7 +174,7 @@ def has_object(self, sha):
return True
except BadObject:
return False
-# END check existance
+# END check existence

def store(self, istream):
"""note: The sha we produce will be hex by nature"""
2 changes: 1 addition & 1 deletion gitdb/db/mem.py
@@ -98,7 +98,7 @@ def stream_copy(self, sha_iter, odb):
for sha in sha_iter:
if odb.has_object(sha):
continue
-# END check object existance
+# END check object existence

ostream = self.stream(sha)
# compressed data including header
4 changes: 2 additions & 2 deletions gitdb/pack.py
@@ -85,7 +85,7 @@ def pack_object_at(cursor, offset, as_stream):
an object of the correct type according to the type_id of the object.
If as_stream is True, the object will contain a stream, allowing the
data to be read decompressed.
-:param data: random accessable data containing all required information
+:param data: random accessible data containing all required information
:param offset: offset into the data at which the object information is located
:param as_stream: if True, a stream object will be returned that can read
the data, otherwise you receive an info object only"""
@@ -447,7 +447,7 @@ def partial_sha_to_index(self, partial_bin_sha, canonical_length):
:return: index as in `sha_to_index` or None if the sha was not found in this
index file
:param partial_bin_sha: at least two bytes of a partial binary sha, as bytes
-:param canonical_length: lenght of the original hexadecimal representation of the
+:param canonical_length: length of the original hexadecimal representation of the
given partial binary sha
:raise AmbiguousObjectName:"""
if len(partial_bin_sha) < 2:
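A hedged usage sketch for the lookup documented above; index_file and the sample prefix are illustrative, and the sha(index) accessor is assumed from context rather than confirmed by this diff.

from binascii import unhexlify

hex_prefix = "abcd12"                     # abbreviated sha as typed by a user
partial_bin = unhexlify(hex_prefix)       # at least two bytes are required
# canonical_length is the length of the original hex spelling, here 6
idx = index_file.partial_sha_to_index(partial_bin, len(hex_prefix))
if idx is not None:
    full_bin_sha = index_file.sha(idx)    # assumed accessor for the full sha
# a prefix shared by several objects raises AmbiguousObjectName instead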
2 changes: 1 addition & 1 deletion gitdb/stream.py
@@ -660,7 +660,7 @@ def __init__(self, fd):

def write(self, data):
""":raise IOError: If not all bytes could be written
-:return: lenght of incoming data"""
+:return: length of incoming data"""
self.sha1.update(data)
cdata = self.zip.compress(data)
bytes_written = write(self.fd, cdata)
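The hunk stops just before the check promised by the docstring. A plausible completion of the code above, with the exact error message being an assumption:

if bytes_written != len(cdata):
    raise IOError("Failed to write all compressed bytes: %i != %i"
                  % (bytes_written, len(cdata)))
# per the docstring, the uncompressed length goes back to the caller
return len(data)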
2 changes: 1 addition & 1 deletion gitdb/test/db/test_loose.py
@@ -33,4 +33,4 @@ def test_basics(self, path):
# END for each sha

self.failUnlessRaises(BadObject, ldb.partial_to_complete_sha_hex, '0000')
-# raises if no object could be foudn
+# raises if no object could be found
2 changes: 1 addition & 1 deletion gitdb/test/lib.py
@@ -76,7 +76,7 @@ def wrapper(self, *args, **kwargs):

def with_rw_directory(func):
"""Create a temporary directory which can be written to, remove it if the
-test suceeds, but leave it otherwise to aid additional debugging"""
+test succeeds, but leave it otherwise to aid additional debugging"""

def wrapper(self):
path = tempfile.mktemp(prefix=func.__name__)
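The wrapper body is truncated above. A hedged sketch of how such a decorator commonly fulfills the documented contract, removing the directory on success and keeping it on failure; the log message is illustrative.

import os
import shutil
import tempfile

def with_rw_directory(func):
    def wrapper(self):
        path = tempfile.mktemp(prefix=func.__name__)
        os.mkdir(path)
        failed = True
        try:
            func(self, path)
            failed = False
        finally:
            if failed:
                # leave the directory in place to aid additional debugging
                print("Test %s failed: retaining %r" % (func.__name__, path))
            else:
                shutil.rmtree(path)
    wrapper.__name__ = func.__name__
    return wrapper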
2 changes: 1 addition & 1 deletion gitdb/test/test_pack.py
@@ -232,7 +232,7 @@ def rewind_streams():
# END verify files exist
# END for each packpath, indexpath pair

-# verify the packs throughly
+# verify the packs thoroughly
rewind_streams()
entity = PackEntity.create(pack_objs, rw_dir)
count = 0
2 changes: 1 addition & 1 deletion gitdb/test/test_util.py
@@ -60,7 +60,7 @@ def test_lockedfd(self):
self._cmp_contents(my_file, orig_data)
assert not os.path.isfile(lockfilepath)

-# additional call doesnt fail
+# additional call doesn't fail
lfd.commit()
lfd.rollback()

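The assertion that an additional call doesn't fail implies commit() and rollback() are safe to call repeatedly. A hedged usage sketch of that contract, assuming gitdb.util.LockedFD exposes the interface shown:

from gitdb.util import LockedFD

lfd = LockedFD("/path/to/somefile")  # illustrative path
fd = lfd.open(write=True)            # creates and locks the lock file
# ... write through fd ...
lfd.commit()                         # moves the lock file onto the target
lfd.commit()                         # a second call must not fail
lfd.rollback()                       # likewise harmless once committed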