The pack.windowlimit configuration parameter places an upper bound
on the number of bytes used by the DeltaWindow class as it scans
through the object list. If memory usage would exceed the limit
the window is temporarily decreased in size to keep memory used
within that bound.
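
A minimal sketch of the idea, with illustrative names rather than
DeltaWindow's actual fields:

    import java.util.ArrayDeque;

    /** Sliding window bounded by total bytes held, not slot count. */
    class BoundedWindow {
        private final ArrayDeque<byte[]> slots = new ArrayDeque<byte[]>();
        private final long limit; // pack.windowlimit, in bytes
        private long used;

        BoundedWindow(long limit) {
            this.limit = limit;
        }

        void push(byte[] raw) {
            // Temporarily shrink the window until the new candidate fits.
            while (!slots.isEmpty() && used + raw.length > limit)
                used -= slots.removeFirst().length;
            slots.addLast(raw);
            used += raw.length;
        }
    }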
Change-Id: I09521b8f335475d8aee6125826da8ba2e545060d
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If we have multiple CPUs available, packing usually goes faster
when each CPU is assigned a slice of the available search space.
The number of threads to use is guessed from the runtime if it
wasn't set by the caller, or wasn't set in the configuration.
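
The fallback amounts to something like this; availableProcessors()
is the standard JRE call, the surrounding names are illustrative:

    class ThreadGuessSketch {
        static int effectiveThreads(int configured) {
            if (configured > 0)
                return configured; // set by caller or pack configuration
            return Runtime.getRuntime().availableProcessors();
        }
    }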
Change-Id: If554fd8973db77632a52a0f45377dd6ec13fc220
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
PackWriter now caches small deltas, or deltas that are very tiny
compared to their source inputs, so that the writing phase goes
faster by reusing those cached deltas.
The cached data is stored compressed, which usually translates to
a bigger footprint due to deltas being very hard to compress, but
saves time during writing by avoiding the deflate step. They are
held under SoftReferences so that the JVM GC can clear out deltas
if memory gets very tight. We would rather continue working and
spend a bit more CPU time during writing than crash due to OOME.
To avoid OutOfMemoryErrors during the caching phase we also trap
OOME and just abort out of the caching.
Because deflateBound() always produces something larger than what
we need to actually store the deflated data, we copy it over into
a new buffer if the actual length doesn't match the buffer length.
When packing jgit.git this saves over 111 KiB in the cache, and is
thus a worthwhile hit on CPU time.
To further save memory we store the inflated size of the delta
(which we need for the object header) in the same field as the
pathHash, as the pathHash is no longer necessary by this phase
of the packing algorithm.
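
Roughly, the caching step is shaped like this sketch (illustrative
names, not PackWriter's actual code):

    import java.lang.ref.SoftReference;
    import java.util.Arrays;
    import java.util.zip.Deflater;

    class DeltaCacheSketch {
        static SoftReference<byte[]> cacheDelta(byte[] delta) {
            try {
                Deflater d = new Deflater(Deflater.BEST_SPEED);
                d.setInput(delta);
                d.finish();
                // Generous bound, mirroring deflateBound()'s overestimate.
                byte[] buf = new byte[delta.length + (delta.length >> 10) + 64];
                int n = d.deflate(buf);
                d.end();
                // Trim to the actual length so the slack isn't pinned.
                return new SoftReference<byte[]>(
                        n == buf.length ? buf : Arrays.copyOf(buf, n));
            } catch (OutOfMemoryError noMemory) {
                return null; // abort caching, keep packing
            }
        }
    }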
Change-Id: I0da0c600d845e8ec962289751f24e65b5afa56d7
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
PackWriter now produces new deltas if there is not a suitable delta
available for reuse from an existing pack file. This permits JGit to
send less data on the wire by sending a delta relative to an object
the other side already has, instead of sending the whole object.
The delta searching algorithm is similar in style to what C Git
uses, but apparently has some differences (see below for more on this).
Briefly, objects that should be considered for delta compression are
pushed onto a list. This list is then sorted by a rough similarity
score, which is derived from the path name the object was discovered
at in the repository during object counting. The list is then
walked in order.
At each position in the list, up to $WINDOW objects prior to it
are attempted as delta bases. Each object in the window is tried,
and the shortest delta instruction sequence selects the base object.
Some rough rules are used to prevent pathological behavior during
this matching phase, like skipping pairings of objects that are
not similar enough in size.
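
In outline, the scan at each list position looks like this sketch,
with stand-in types:

    import java.util.List;

    class WindowScanSketch {
        interface Delta { int length(); }
        interface Candidate {
            long size();
            Delta deltaAgainst(Candidate base);
        }

        static Delta bestDelta(List<Candidate> sorted, int window, int i) {
            Candidate target = sorted.get(i);
            Delta best = null;
            for (int j = Math.max(0, i - window); j < i; j++) {
                Candidate base = sorted.get(j);
                // Rough rule: skip bases wildly different in size.
                if (base.size() * 2 < target.size()
                        || base.size() > target.size() * 2)
                    continue;
                Delta d = target.deltaAgainst(base);
                if (d != null && (best == null || d.length() < best.length()))
                    best = d; // shortest instruction sequence wins
            }
            return best;
        }
    }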
PackWriter intentionally excludes commits and annotated tags from
this new delta search phase. In the JGit repository only 28 out
of 2600+ commits can be delta compressed by C Git. As the commit
count tends to be a fair percentage of the total number of objects
in the repository, and they generally do not delta compress well,
skipping over them can improve performance with little increase in
the output pack size.
Because this implementation was rebuilt from scratch based on my own
memory of how the packing algorithm has evolved over the years in
C Git, PackWriter, DeltaWindow, and DeltaEncoder don't use exactly
the same rules everywhere, and that leads JGit to produce different
(but logically equivalent) pack files.
Repository | Pack Size (bytes)                | Packing Time
           |     JGit -     CGit = Difference |    JGit /    CGit
-----------+----------------------------------+-------------------
git        | 25094348 - 24322890 =    +771458 | 59.434s / 59.133s
jgit       |  5669515 -  5709046 =    - 39531 |  6.654s /  6.806s
linux-2.6  |     389M -     386M =        +3M | 20m02s  / 18m01s
For the above tests pack.threads was set to 1, window size=10,
delta depth=50, and delta and object reuse was disabled for both
implementations. Both implementations were reading from an already
fully packed repository on local disk. The running time reported
is after 1 warm-up run of the tested implementation.
PackWriter is writing 771 KiB more data on git.git, 3M more on
linux-2.6, but is actually 39.5 KiB smaller on jgit.git. Being
larger by less than 0.7% on linux-2.6 isn't bad, nor is taking an
extra 2 minutes to pack. On the running time side, JGit is at a
major disadvantage because linux-2.6 doesn't fit into the default
WindowCache of 20M, while C Git is able to mmap the entire pack and
have it available instantly in physical memory (assuming hot cache).
CGit also has a feature where it caches deltas that were created
during the compression phase, and uses those cached deltas during
the writing phase. PackWriter does not implement this (yet),
and therefore must create every delta twice. This could easily
account for the increased running time we are seeing.
Change-Id: I6292edc66c2e95fbe45b519b65fdb3918068889c
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This is a horribly crude application; it doesn't even verify that
the object it is dumping is delta encoded. Its method of getting the
delta is pretty abusive to the public PackWriter API, because right
now we don't want to expose the real internal low-level methods
actually required to do this.
Change-Id: I437a17ceb98708b5603a2061126eb251e82f4ed4
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
DeltaIndex is a simple pack-style delta generator. It works
by creating a compact index of a source buffer's blocks, and then
walking a sliding window along a desired result buffer, searching for
the window in the index. When a match is found, the window is
stretched to the longest possible length that is common with the
source buffer, and a copy instruction is created.
Rabin's polynomial hash function is used to compute the hash for a
block, permitting efficient sliding of the window in single byte
increments. The update function to slide one byte originated from
David Mazieres' work in LBFS, and our implementation of the update
step was certainly inspired by the initial work Geert Bosch proposed
for C Git in http://marc.info/?l=git&m=114565424620771&w=2.
To ensure the encoder runs in linear time with respect to the size of
the two input buffers (source and result), the maximum number of
blocks that can share the same position in the index's hashtable is
capped at a constant number. This prevents bad inputs from causing
the encoder to run in quadratic time, but comes with a penalty of
creating a longer delta due to fewer considered copy positions.
Strange hackery is used to cap the amount of memory used by the index
to be no more than 12 bytes for every 16 bytes of source buffer, no
matter what the JVM per-object overhead is. This permits an index to
always be no larger than 1.75x the source buffer length, which is an
important feature to support large windows of candidates to match
against while packing. Here the strange hackery is nothing more than
a manually managed chained hashtable, where pointers are array indexes
into storage arrays rather than object references.
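
A sketch of that hackery, with illustrative field names rather than
DeltaIndex's actual layout:

    /** Chained hashtable whose "pointers" are plain array indexes. */
    class IntChainedTable {
        private final int[] head;   // bucket -> first entry index; 0 = empty
        private final int[] next;   // entry -> next entry in same chain
        private final long[] entry; // packed (hash, block offset) records
        private int top = 1;        // entry 0 is reserved as "null"

        IntChainedTable(int buckets, int capacity) {
            head = new int[buckets];
            next = new int[capacity + 1];
            entry = new long[capacity + 1];
        }

        void add(int hash, int offset) {
            int b = (hash >>> 1) % head.length;
            entry[top] = ((long) hash << 32) | (offset & 0xffffffffL);
            next[top] = head[b];
            head[b] = top++;
        }
    }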
Computation of the hash function for a single fixed sized block is
done through an unrolled loop, where the first 4 iterations have been
manually reduced down to eliminate unnecessary instructions. The
pattern is derived from ObjectId.equals(byte[], int, byte[], int),
where we have unrolled the loop required to compare two 20 byte
arrays. Hours of testing with the Sun 1.6 JRE concluded that the
non-obvious "foo[idx + 1]" style of reference is faster than
"foo[idx++]", and so that is what we use here during hashing.
Change-Id: If9fb2a1524361bc701405920560d8ae752221768
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
It's useful to know what the flags are, or which base was selected.
Dump these out as part of the object's toString.
Change-Id: I8810067fb8337b08b4fcafd5f9ea3e1e31ca6726
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
A subclass may want to use this method to release handles that are
caching reuse information. Make it protected so they can override
it and update themselves.
Change-Id: I2277a56ad28560d2d2d97961cbc74bc7405a70d4
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Searching for reuse candidates should be fast compared to actually
doing delta compression. So pull the progress monitor out of this
phase, and rename the task back to identify the compressing-objects
state.
Change-Id: I5eb80919f21c1251e0e3420ff7774126f1f79b27
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Long ago, when PackWriter was first written, we thought that the delta
depth could be updated automatically. But it's never used. Instead
make this a simple standard setter so the caller can more directly
set the delta depth of this object. This permits us to configure a
depth that takes into account more than just the depth of another
object in this same pack.
Change-Id: I1d71b74f2edd7029b8743a2c13b591098ce8cc8f
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
C Git's fast-import uses this to determine the maximum file size
that it tries to delta compress; anything equal to or above this
setting is stored as a whole object with simple deflate.
Define the configuration so we can use it later.
Change-Id: Iea46e787d019a1b6c51135cc73d7688a02e207f5
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This flag will later control whether or not PackWriter searches for a
delta base for this object. Edge objects will never get searched,
as the writer won't be outputting them, so they should always have
this flag set on. Sometime in the future this flag should also be
set for file blobs on file paths that have the "-delta" gitattribute
set in the repository's attributes file.
Change-Id: I6e518e1a6996c8ce00b523727f1b605e400e82c6
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We now at least import other pack settings like pack.window, which
means we can later use these to control how we search for deltas.
The compression level was fixed to use pack.compression rather than
the loose object core.compression setting.
Change-Id: I72ff6d481c936153ceb6a9e485fa731faf075a9a
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We need to remember these so we can later cluster objects that
have similar file paths near each other as we search for deltas
between them.
Change-Id: I52cb1e4ca15c9c267a2dbf51dd0d795f885f4cf8
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
PackWriter wants to categorize objects that are similar in path name,
so blobs that are probably from the same file (or same sort of file)
can be delta compressed against each other. Avoid converting into
a string by performing the hashing directly against the path buffer
in the tree iterator.
We only hash the last 16 bytes of the path, and we try to avoid any
spaces, as we want the suffix of a file such as ".java" to be more
important than the directory it is in, like "src".
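
A sketch of the idea; the exact mixing function in PackWriter may
differ:

    class PathHashSketch {
        static int hash(byte[] path, int end) {
            int h = 0;
            for (int i = Math.max(0, end - 16); i < end; i++) {
                int c = path[i] & 0xff;
                if (c == ' ')
                    continue; // keep spaces from influencing the hash
                // New bytes land in the high bits, so the trailing
                // ".java" outweighs leading directories like "src".
                h = (h >>> 2) + (c << 24);
            }
            return h;
        }
    }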
Change-Id: I31770ee711526306769a6f534afb19f937e0ba85
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This is an informational function used by PackWriter to help it
better organize objects for delta compression. Storage systems
can implement it to provide more detailed size information,
or they can simply rely on the default behavior that uses the
ObjectLoader obtained from open.
For local file storage, we can obtain this information faster
through specialized routines that parse a pack object header.
Change-Id: I13a09b4effb71ea5151b51547f7d091564531e58
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If the heap limit was set to something smaller than 8 KiB, we were
still allocating the full 8 KiB block size, and accepting data up
to the amount we had allocated. Instead, actually put a hard cap
on the limit.
Change-Id: Id1da26fde2102e76510b1da4ede8493928a981cc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The special value 127 here is the maximum number of bytes we can
put into a single insert command. Rather than use the magical value
127, let's name it to better document the code.
Change-Id: I5a326f4380f6ac87987fa833e9477700e984a88e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Although all modern delta decoders can process copy instructions
with a count as large as 0xffffff (~16 MiB), pack version 2 streams
are only supposed to use delta copy instructions up to 64 KiB.
Rewrite our copy instruction encode loop to use the lower 64 KiB
limit, even though modern decoders would support longer copies.
To improve encoding performance we now try to encode up to four full
copy commands in our buffer before we flush it to the stream, but
we don't try to implement full buffering here. We are just trying
to amortize the virtual method call to the destination stream when
we have to do a large copy.
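
A copy command under the 64 KiB cap encodes roughly as below; this
sketches the standard delta format, not DeltaEncoder's exact code:

    class CopyEncodeSketch {
        static int encodeCopy(byte[] out, int p, int offset, int cnt) {
            // cnt is 1..0x10000; exactly 0x10000 is implied by omitting
            // all size bytes, so the 64 KiB cap costs nothing extra.
            int opPtr = p++;
            int op = 0x80;
            if ((offset & 0xff) != 0) { out[p++] = (byte) offset; op |= 0x01; }
            if ((offset & 0xff00) != 0) { out[p++] = (byte) (offset >>> 8); op |= 0x02; }
            if ((offset & 0xff0000) != 0) { out[p++] = (byte) (offset >>> 16); op |= 0x04; }
            if ((offset & 0xff000000) != 0) { out[p++] = (byte) (offset >>> 24); op |= 0x08; }
            if (cnt != 0x10000) {
                if ((cnt & 0xff) != 0) { out[p++] = (byte) cnt; op |= 0x10; }
                if ((cnt & 0xff00) != 0) { out[p++] = (byte) (cnt >>> 8); op |= 0x20; }
            }
            out[opPtr] = (byte) op;
            return p;
        }
    }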
Change-Id: I9410a16e6912faa83180a9788dc05f11e33fabae
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The encode loop had the wrong condition: objects that are 128 bytes
in size need to have their length encoded as two bytes, not one.
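
The size header is a 7-bits-per-byte varint, so a corrected loop must
treat 128 (0x80) as needing a continuation byte:

    class VarintSketch {
        // 127 -> 0x7f (one byte); 128 -> 0x80 0x01 (two bytes).
        static int writeVarint(byte[] out, int p, long sz) {
            while (sz >= 0x80) {
                out[p++] = (byte) (0x80 | (sz & 0x7f));
                sz >>>= 7;
            }
            out[p++] = (byte) sz;
            return p;
        }
    }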
Change-Id: I3bef85f2b774871ba6104042b341749eb8e7595c
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Rename the ByteWindow's inflate() method to setInput. We have
completely refactored the purpose of this method to be feeding part
(or all) of the window as input to the Inflater, and the actual
inflate activity happens in the caller.
Change-Id: Ie93a5bae0e9e637b5e822d56993ce6b562c6ad15
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We need to validate the stream state after the InflaterInputStream
thinks the stream is done. Git expects a higher level of service from
the Inflater than the InflaterInputStream usually gives; we need to
ensure the embedded CRC is valid, and that there isn't trailing
garbage at the end of the file.
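
The extra validation amounts to something like this sketch
(illustrative, not the exact code):

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.zip.Inflater;

    class EndCheckSketch {
        static void assertEOF(Inflater inf, InputStream raw)
                throws IOException {
            if (!inf.finished())
                throw new IOException("incomplete zlib stream");
            if (inf.getRemaining() != 0 || raw.read() != -1)
                throw new IOException("trailing garbage after object data");
        }
    }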
Change-Id: I1c9642a82dbd76b69e607dceccf8b85dc869a3c1
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Alex pointed out that my description of a bare repository might be
confusing for some readers. Reword the description of the error,
and make it consistent throughout the Repository class's API.
Change-Id: I87929ddd3005f578a7022f363270952d1f7f8664
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
During code review, Alex raised a few comments about commit
532421d989 ("Refactor repository construction to builder class").
Due to the size of the related series we aren't going to go back
and rebase in something this minor, so resolve them as a follow-up
commit instead.
Change-Id: Ied52f7a8f7252743353c58d20bfc3ec498933e00
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Now that any large object is forced through a streaming loader
when it's bigger than getStreamFileThreshold(), and that threshold
is pegged at Integer.MAX_VALUE as its largest size, we will never
be able to reach this code path where we threw OutOfMemoryError.
Robin pointed out that we probably should include a message here,
but the code is effectively unreachable, so there isn't any value
in adding a message at this point.
So remove it.
Change-Id: Ie611d005622e38a75537f1350246df0ab89dd500
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Since we don't know the type of object we are parsing, we don't
know if it's a massive blob, or some small commit or annotated tag.
Avoid pulling the cached bytes until we have checked the type and
decided if we actually need them to continue parsing right now.
This way large blobs which won't fit in memory and would throw
a LargeObjectException don't abort parsing.
Change-Id: Ifb70df5d1c59f616aa20ee88898cb69524541636
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Callers don't necessarily need the getSize() result from a large
delta. Instead they should always use openStream() or copyTo()
for blobs going to local files, or they should check the
result of the constant-time isLarge() method to determine the type
of access they can use on the ObjectLoader. Avoid inflating the
delta instruction stream twice by delaying the decoding of the size
until after we have created the DeltaStream and decoded the header.
Likewise with the type, callers don't necessarily always need it
to be present in an ObjectLoader. Delay looking at it as late as
we can, thereby avoiding an ugly O(N^2) loop looking up the type
for every single object in the entire delta chain.
Change-Id: I6487b75b52a5d201d811a8baed2fb4fcd6431320
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We default this to 1 MiB for now, but we allow users to modify
it through the Repository's configuration file to be a different
value. A new repository listener is used to identify when the
setting has been updated and trigger a reconfiguration of any
active ObjectReaders.
To prevent a horrible explosion we cap core.streamFileThreshold
at no more than 1/4 of the maximum JVM heap size. We do this
because we need at least 2 byte arrays equal in size to the
stream threshold for the worst case delta inflation scenario,
and our host application probably also needs some amount of the
heap for their working set size.
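
The cap works out to a check along these lines; maxMemory() is the
standard JRE call, the method name is illustrative:

    class ThresholdCapSketch {
        static int cap(long configured) {
            // Worst-case delta inflation needs ~2 arrays of threshold
            // size, plus headroom for the host application's heap.
            long maxHeap = Runtime.getRuntime().maxMemory();
            long limit = maxHeap == Long.MAX_VALUE
                    ? Integer.MAX_VALUE : maxHeap / 4;
            return (int) Math.min(configured,
                    Math.min(limit, Integer.MAX_VALUE));
        }
    }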
Change-Id: I103b3a541dc970bbf1a6d92917a12c5a1ee34d6c
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Very large delta instruction streams, or deltas which use very large
base objects, are now streamed through as large objects rather than
being inflated into a byte array.
This isn't the most efficient way to access delta encoded content, as
we may need to rewind and reprocess the base object when there was a
block moved within the file, but it will at least prevent the JVM from
having its heap explode.
When streaming a delta we have an inflater open for each level in the
delta chain, to inflate the instruction set of the delta, as well as
an inflater for the base level object. The base object is buffered,
as is the top level delta requested by the application, but we do not
buffer the intermediate delta streams. This keeps memory usage lower,
so it's closer to 1024 bytes per level in the chain, without having an
adverse impact on raw throughput as the top-level buffer gets pushed
down to the lowest stream that has the next region.
Delta instructions transparently collapse here: if the top level does
not copy a region from its base, the base won't materialize that part
from its own base, etc. This allows us to avoid copying around a lot
of segments which have been deleted from the final version.
Change-Id: I724d45245cebb4bad2deeae7b896fc55b2dd49b3
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Similar to the loose object support, whole packed objects can
now be streamed back to the caller. The streaming is less
efficient as we copy the data from the cached window array
into the InflaterInputStream's internal buffer, then inflate
it there before returning to the application.
Like with unpacked objects, there is plenty of room for some
optimization, especially for the copyTo method, where we don't
necessarily need so much buffering to exist.
Change-Id: Ie23be81289e37e24b91d17b0891e47b9da988008
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The class is identical, but ObjectLoader.SmallObject is part of our
public API for storage implementations to build on top of.
Change-Id: I381a3953b14870b6d3d74a9c295769ace78869dc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Big loose objects can now be streamed if they are over the large
object size threshold. This prevents the JVM heap from exploding
with a very large byte array to hold the slurped file, and then
again with its uncompressed copy.
We may have slightly slowed down the simple case for small
loose objects, as the loader no longer slurps the entire thing
and decompresses in memory. To try and keep good performance
for the very common small objects that are below 8 KiB in size,
buffers are set to 8 KiB, causing the reader to slurp most of the
file anyway. However the data has to be copied at least once,
from the BufferedInputStream into the InflaterInputStream.
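
The read path is shaped like this simplified sketch (loose object
header parsing omitted):

    import java.io.BufferedInputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.zip.InflaterInputStream;

    class LooseStreamSketch {
        static InputStream open(File f) throws IOException {
            // An 8 KiB buffer slurps most small loose objects in one
            // read before inflation.
            return new InflaterInputStream(
                    new BufferedInputStream(new FileInputStream(f), 8192));
        }
    }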
New unit tests are supplied to get nearly 100% code coverage on the
unpacked code paths, for both standard and pack style loose objects.
We tested a fair chunk of the code elsewhere, but these new tests
are better isolated to the specific branches in the code path.
Change-Id: I87b764ab1b84225e9b5619a2a55fd8eaa640e1fe
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Assume that the argument of compareTo won't be mutated while we
are doing the compare, and support the wider AnyObjectId type so
MutableObjectId is suitable on either side of the compareTo call.
Change-Id: I2a63a496c0a7b04f0e5f27d588689c6d5e149d98
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This way we can stream a large file through memory, rather than
loading the entire thing into a single contiguous byte array.
Change-Id: I3ada2856af2bf518f072edec242667a486fb0df1
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Instead of loading the entire object as a byte array and passing
that into the deflater, let the ObjectLoader copy the object onto
the DeflaterOutputStream. This has the nice side effect of using
some sort of stride hack in the Sun implementation that may improve
compression performance.
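
At the call site this is essentially the following, assuming the
copyTo(OutputStream) method described above:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.util.zip.DeflaterOutputStream;
    import org.eclipse.jgit.lib.ObjectLoader;

    class WriteSketch {
        static void write(ObjectLoader ldr, OutputStream out)
                throws IOException {
            DeflaterOutputStream dos = new DeflaterOutputStream(out);
            ldr.copyTo(dos); // no whole-object byte[] in between
            dos.finish();
        }
    }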
Change-Id: I3f3d681b06af0da93ab96c75468e00e183ff32fe
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Only allocate the Deflater if we can't reuse everything, but also
make sure we release it when we release the PackWriter's resources.
Change-Id: I16a32b94647af0778658eda87acbafc9a25b314a
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Blobs that are too large to read as a single byte array should be
accessed through an InputStream based interface instead, allowing
the application to walk through the data stream incrementally.
Define the basic interface to support streaming contents, but don't
implement it yet for the file based backend.
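
In shape, the new surface is roughly this sketch; the real interface
may differ in detail:

    import java.io.IOException;
    import java.io.InputStream;

    abstract class LoaderSketch {
        abstract long getSize();
        abstract boolean isLarge();       // too big for getCachedBytes()
        abstract byte[] getCachedBytes(); // small objects only
        abstract InputStream openStream() throws IOException;
    }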
Change-Id: If9e4442e9ef4ed52c3e0f1af9398199a73145516
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Instead of creating the DirCache from a static factory method, use
an instance method on Repository, permitting the implementation to
override the method with a completely different type of DirCache
reading and writing. This would better support a repository-in-the-cloud
strategy, or even just an in-memory unit test environment.
Change-Id: I6399894b12d6480c4b3ac84d10775dfd1b8d13e7
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Using a custom exception type makes it easier for an application
developer to understand why an exception was thrown out of a method
we declare. To remain compatible with existing callers, we still
extend IllegalStateException.
Change-Id: Ideeef2399b11ca460a2dbb3cd80eb76aa0a025ba
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Update a number of calling sites of RevWalk to ensure the walker's
internal ObjectReader is released after the walk is no longer used.
Because the ObjectReader is likely to hold onto a native resource
like an Inflater, we don't want to leak them outside of their
useful scope.
Where possible we also try to share ObjectReaders across several
walk pools, or between a walker and a PackWriter. This permits
the ObjectReader to actually do some caching if it felt inclined
to do so.
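
A typical call-site pattern, assuming the release() methods described
in this series:

    import org.eclipse.jgit.lib.ObjectReader;
    import org.eclipse.jgit.lib.Repository;
    import org.eclipse.jgit.revwalk.RevWalk;

    class WalkSketch {
        static void walk(Repository repo) {
            ObjectReader reader = repo.newObjectReader();
            RevWalk rw = new RevWalk(reader);
            try {
                // ... configure and run the walk ...
            } finally {
                rw.release();     // drop cached rev objects
                reader.release(); // return the Inflater to its pool
            }
        }
    }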
Not everything was updated, we'll probably need to come back and
update even more call sites, but these are some of the biggest
offenders. Test cases in particular aren't updated. My plan is to
move most storage-agnostic tests onto some purely in-memory storage
solution that doesn't do compression.
Change-Id: I04087ec79faeea208b19848939898ad7172b6672
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Rather than building a custom reader, have the caller supply us one.
Change-Id: Ief2b5a6b1b75f05c8a6bc732a60d4d1041dd8254
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Instead of creating new ObjectReader for each walker, use one for
the entire connection and delegate reads through it.
Change-Id: I7f0a2ec8c9fe60b095a7be77dc423a2ff8b443a3
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We don't actually need a Repository object here, just an ObjectReader
that can load content for us. So change the API to depend on that.
However, this breaks the asCommit and asTag legacy translation methods
on RevCommit and RevTag, so we still have to keep the Repository
inside of RevWalk for those two types. Hopefully we can drop those in
the future, and then drop the Repository off the RevWalk.
Change-Id: Iba983e48b663790061c43ae9ffbb77dfe6f4818e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We don't need this field to be volatile. Events are delivered by
the same thread that created the RepositoryEvent object, and thus
any cross-thread operations would need to be handled by some other
type of synchronization in the listener, and that would protect
both the repository field and any other per-event data.
Change-Id: Iefe345959e1a2d4669709dbf82962bcc1b8913e3
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Similar to what we did on Repository, the openObject method
already implied we wanted to open an object, given its main
argument was of type AnyObjectId. Simplify the method name
to just the action, has or open.
Change-Id: If055e5e0d8de0e2424c18a773f6d2bc2f66054f4
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We drop the "Object" suffix, because its pretty clear here that
we want to open an object, given that we pass in AnyObjectId as
the main parameter. We also fix the calling convention to throw
a MissingObjectException or IncorrectObjectTypeException, so that
callers don't have to do this error checking themselves.
Change-Id: I72c43353cea8372278b032f5086d52082c1eee39
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Rather than taking the ProgressMonitor objects in our constructor and
carrying them around as instance fields, take them as arguments to the
actual time consuming operations we need to run.
Change-Id: I2b230d07e277de029b1061c807e67de5428cc1c4
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Rather than storing this in an instance member, pass it down the
calling stack. It's cleaner: we don't have to poke the stream into
a temporary field and then unset it.
Change-Id: I0fd323371bc12edb10f0493bf11885d7057aeb13
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Going through ObjectReader.openObject(AnyObjectId) is faster, but
also produces cleaner application level code. The error checking
is done inside of the openObject method, which means it can be
removed from the application code.
Change-Id: Ia927b448d128005e1640362281585023582b1a3a
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>