We need to remember these so we can later cluster objects that
have similar file paths near each other as we search for deltas
between them.
Change-Id: I52cb1e4ca15c9c267a2dbf51dd0d795f885f4cf8
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
PackWriter wants to categorize objects that are similar in path name,
so blobs that are probably from the same file (or same sort of file)
can be delta compressed against each other. Avoid converting into
a string by performing the hashing directly against the path buffer
in the tree iterator.
We only hash the last 16 bytes of the path, and we try to avoid any
spaces, as we want the suffix of a file such as ".java" to be more
important than the directory it is in, like "src".
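A rough sketch of the idea (the method name and mixing function here
are only illustrative, not the actual code):

  // Hash only the trailing 16 bytes of the path, skipping spaces, so
  // a suffix like ".java" outweighs leading directories like "src".
  static int hashPath(byte[] path, int end) {
    int hash = 0;
    for (int i = Math.max(0, end - 16); i < end; i++) {
      byte c = path[i];
      if (c != ' ')
        hash = (hash >>> 2) + (c << 24);
    }
    return hash;
  }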
Change-Id: I31770ee711526306769a6f534afb19f937e0ba85
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This is an informational function used by PackWriter to help it
better organize objects for delta compression. Storage systems
can implement it to provide more detailed size information,
or they can simply rely on the default behavior that uses the
ObjectLoader obtained from open.
For local file storage, we can obtain this information faster
through specialized routines that parse a pack object header.
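A minimal sketch of the default behavior, assuming a reader method
shaped roughly like this (the exact signature may differ):

  public long getObjectSize(AnyObjectId objectId, int typeHint)
      throws MissingObjectException, IncorrectObjectTypeException,
             IOException {
    // Fall back on opening the object and asking its loader.
    return open(objectId, typeHint).getSize();
  }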
Change-Id: I13a09b4effb71ea5151b51547f7d091564531e58
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If the heap limit was set to something smaller than 8 KiB, we were
still allocating the full 8 KiB block size, and accepting data up
to the amount we had allocated rather than the limit. Instead,
actually enforce the limit as a hard cap.
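A hedged sketch of the fix (the field name is illustrative):

  // Never allocate more than the configured in-core limit, even
  // though the default block size is 8 KiB.
  final int blockSize = Math.min(inCoreLimit, 8 * 1024);
  byte[] block = new byte[blockSize];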
Change-Id: Id1da26fde2102e76510b1da4ede8493928a981cc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The special value 127 here is the maximum number of bytes we can
put into a single insert command. Rather than use the magical value
127, let's name it to better document the code.
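Roughly the shape of the change (the constant name below is a guess
at intent, not necessarily the name used):

  /** Maximum number of data bytes a single insert command can hold. */
  private static final int MAX_INSERT_DATA_SIZE = 127;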
Change-Id: I5a326f4380f6ac87987fa833e9477700e984a88e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Although all modern delta decoders can process copy instructions
with a count as large as 0xffffff (~15.9 MiB), pack version 2 streams
are only supposed to use delta copy instructions up to 64 KiB.
Rewrite our copy instruction encode loop to use the lower 64 KiB
limit, even though modern decoders would support longer copies.
To improve encoding performance we now try to encode up to four full
copy commands in our buffer before we flush it to the stream, but
we don't try to implement full buffering here. We are just trying
to amortize the virtual method call to the destination stream when
we have to do a large copy.
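A rough sketch of the approach; the helper below follows the pack
delta copy-command format, but names and buffer sizing are only
illustrative:

  static void copy(OutputStream out, long offset, long cnt)
      throws IOException {
    final byte[] buf = new byte[8 * 4]; // room for roughly 4 commands
    while (cnt > 0) {
      int p = 0;
      // Stage several commands before paying for the stream write.
      while (cnt > 0 && p + 8 <= buf.length) {
        int n = (int) Math.min(0x10000, cnt); // 64 KiB per copy command
        p = formatCopy(buf, p, offset, n);
        offset += n;
        cnt -= n;
      }
      out.write(buf, 0, p); // one virtual call amortized over the batch
    }
  }

  static int formatCopy(byte[] buf, int p, long off, int cnt) {
    final int cmdPtr = p++;
    int cmd = 0x80; // high bit marks "copy from base"
    if ((off & 0xff) != 0)        { cmd |= 0x01; buf[p++] = (byte) off; }
    if ((off & 0xff00) != 0)      { cmd |= 0x02; buf[p++] = (byte) (off >>> 8); }
    if ((off & 0xff0000) != 0)    { cmd |= 0x04; buf[p++] = (byte) (off >>> 16); }
    if ((off & 0xff000000L) != 0) { cmd |= 0x08; buf[p++] = (byte) (off >>> 24); }
    if (cnt != 0x10000) { // a count of zero is decoded as 64 KiB
      if ((cnt & 0xff) != 0)   { cmd |= 0x10; buf[p++] = (byte) cnt; }
      if ((cnt & 0xff00) != 0) { cmd |= 0x20; buf[p++] = (byte) (cnt >>> 8); }
    }
    buf[cmdPtr] = (byte) cmd;
    return p;
  }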
Change-Id: I9410a16e6912faa83180a9788dc05f11e33fabae
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The encode loop had the wrong condition; objects that are 128 bytes
in size need to have their length encoded as two bytes, not one.
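For reference, a minimal sketch of the size varint written into the
delta header (the helper name is illustrative); a value of 128 needs
the continuation byte:

  static int writeVarInt(byte[] buf, int p, long sz) {
    while (sz >= 0x80) { // 128 (0x80) no longer fits a single byte
      buf[p++] = (byte) (0x80 | (sz & 0x7f));
      sz >>>= 7;
    }
    buf[p++] = (byte) sz;
    return p;
  }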
Change-Id: I3bef85f2b774871ba6104042b341749eb8e7595c
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Rename the ByteWindow's inflate() method to setInput. The purpose
of this method has been completely refactored: it now feeds part
(or all) of the window as input to the Inflater, while the actual
inflate activity happens in the caller.
Change-Id: Ie93a5bae0e9e637b5e822d56993ce6b562c6ad15
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We need to validate the stream state after the InflaterInputStream
thinks the stream is done. Git expects a higher level of service from
the Inflater than the InflaterInputStream usually gives: we need to
ensure the embedded CRC is valid, and that there isn't trailing
garbage at the end of the file.
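A hedged sketch of the sort of check added (not the exact code):

  // After InflaterInputStream reports EOF, verify the inflater really
  // reached the end of the zlib stream and no bytes remain in the file.
  static void assertValidEndOfStream(InputStream raw, Inflater inf)
      throws IOException {
    if (!inf.finished() || inf.getRemaining() != 0 || raw.read() != -1)
      throw new IOException("object stream has trailing garbage");
  }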
Change-Id: I1c9642a82dbd76b69e607dceccf8b85dc869a3c1
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Alex pointed out that my description of a bare repository might be
confusing for some readers. Reword the description of the error,
and make it consistent throughout the Repository class's API.
Change-Id: I87929ddd3005f578a7022f363270952d1f7f8664
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
During code review, Alex raised a few comments about commit
532421d989 ("Refactor repository construction to builder class").
Due to the size of the related series we aren't going to go back
and rebase in something this minor, so resolve them as a follow-up
commit instead.
Change-Id: Ied52f7a8f7252743353c58d20bfc3ec498933e00
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Now that any large object is forced through a streaming loader
when it is bigger than getStreamFileThreshold(), and that threshold
is pegged at Integer.MAX_VALUE as its largest size, we will never
be able to reach this code path where we threw OutOfMemoryError.
Robin pointed out that we probably should include a message here,
but the code is effectively unreachable, so there isn't any value
in adding a message at this point.
So remove it.
Change-Id: Ie611d005622e38a75537f1350246df0ab89dd500
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Since we don't know the type of object we are parsing, we don't
know if it is a massive blob, or some small commit or annotated tag.
Avoid pulling the cached bytes until we have checked the type and
decided if we actually need them to continue parsing right now.
This way large blobs which won't fit in memory and would throw
a LargeObjectException don't abort parsing.
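Roughly the pattern involved (a sketch; parseBody() is a stand-in,
not the actual parser code):

  ObjectLoader ldr = reader.open(id);
  switch (ldr.getType()) {
  case Constants.OBJ_COMMIT:
  case Constants.OBJ_TAG:
    // Small objects we must parse now; safe to pull the cached bytes.
    parseBody(ldr.getCachedBytes());
    break;
  default:
    // A massive blob would throw LargeObjectException from
    // getCachedBytes(), so we never ask for its bytes here.
    break;
  }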
Change-Id: Ifb70df5d1c59f616aa20ee88898cb69524541636
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Callers don't necessarily need the getSize() result from a large
delta. They should instead always be using openStream() or copyTo()
for blobs going to local files, or they should be checking the
result of the constant-time isLarge() method to determine the type
of access they can use on the ObjectLoader. Avoid inflating the
delta instruction stream twice by delaying the decoding of the size
until after we have created the DeltaStream and decoded the header.
Likewise with the type, callers don't necessarily always need it
to be present in an ObjectLoader. Delay looking at it as late as
we can, thereby avoiding an ugly O(N^2) loop looking up the type
for every single object in the entire delta chain.
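The intended caller pattern looks roughly like this sketch:

  ObjectLoader ldr = reader.open(id);
  if (ldr.isLarge()) {
    InputStream in = ldr.openStream(); // stream it; never ask for byte[]
    try {
      // ... consume the content incrementally ...
    } finally {
      in.close();
    }
  } else {
    byte[] raw = ldr.getCachedBytes(); // small enough to hold in memory
    // ... parse in place ...
  }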
Change-Id: I6487b75b52a5d201d811a8baed2fb4fcd6431320
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We default this to 1 MiB for now, but we allow users to modify
it through the Repository's configuration file to be a different
value. A new repository listener is used to identify when the
setting has been updated and trigger a reconfiguration of any
active ObjectReaders.
To prevent a horrible explosion we cap core.streamFileThreshold
at no more than 1/4 of the maximum JVM heap size. We do this
because we need at least 2 byte arrays equal in size to the
stream threshold for the worst case delta inflation scenario,
and our host application probably also needs some amount of the
heap for their working set size.
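A hedged sketch of the clamping logic (variable names illustrative):

  long threshold = config.getLong(
      "core", null, "streamFileThreshold", 1024 * 1024);
  long maxMem = Runtime.getRuntime().maxMemory();
  if (maxMem != Long.MAX_VALUE)
    // Worst case delta inflation needs ~2 buffers of this size, and
    // the host application still needs heap for its own working set.
    threshold = Math.min(threshold, maxMem / 4);
  int streamFileThreshold = (int) Math.min(threshold, Integer.MAX_VALUE);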
Change-Id: I103b3a541dc970bbf1a6d92917a12c5a1ee34d6c
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Very large delta instruction streams, or deltas which use very large
base objects, are now streamed through as large objects rather than
being inflated into a byte array.
This isn't the most efficient way to access delta encoded content,
as we may need to rewind and reprocess the base object when a block
was moved within the file, but it will at least prevent the JVM from
having its heap explode.
When streaming a delta we have an inflater open for each level in the
delta chain, to inflate the instruction set of the delta, as well as
an inflater for the base level object. The base object is buffered,
as is the top level delta requested by the application, but we do not
buffer the intermediate delta streams. This keeps memory usage lower,
so it is closer to 1024 bytes per level in the chain, without having an
adverse impact on raw throughput as the top-level buffer gets pushed
down to the lowest stream that has the next region.
Delta instructions transparently collapse here: if the top level does
not copy a region from its base, the base won't materialize that part
from its own base, and so on. This allows us to avoid copying around a lot
of segments which have been deleted from the final version.
Change-Id: I724d45245cebb4bad2deeae7b896fc55b2dd49b3
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Similar to the loose object support, whole packed objects can
now be streamed back to the caller. The streaming is less
efficient as we copy the data from the cached window array
into the InflaterInputStream's internal buffer, then inflate
it there before returning to the application.
Like with unpacked objects, there is plenty of room for some
optimization, especially for the copyTo method, where we don't
necessarily need so much buffering to exist.
Change-Id: Ie23be81289e37e24b91d17b0891e47b9da988008
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The class is identical, but ObjectLoader.SmallObject is part of our
public API for storage implementations to build on top of.
Change-Id: I381a3953b14870b6d3d74a9c295769ace78869dc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Big loose objects can now be streamed if they are over the large
object size threshold. This prevents the JVM heap from exploding
with a very large byte array to hold the slurped file, and then
again with its uncompressed copy.
We may have slightly slowed down the simple case for small
loose objects, as the loader no longer slurps the entire thing
and decompresses in memory. To try and keep good performance
for the very common small objects that are below 8 KiB in size,
buffers are set to 8 KiB, causing the reader to slurp most of the
file anyway. However, the data has to be copied at least once,
from the BufferedInputStream into the InflaterInputStream.
New unit tests are supplied to get nearly 100% code coverage on the
unpacked code paths, for both standard and pack style loose objects.
We tested a fair chunk of the code elsewhere, but these new tests
are better isolated to the specific branches in the code path.
Change-Id: I87b764ab1b84225e9b5619a2a55fd8eaa640e1fe
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Assume that the argument of compareTo won't be mutated while we
are doing the compare, and support the wider AnyObjectId type so
MutableObjectId is suitable on either side of the compareTo call.
Change-Id: I2a63a496c0a7b04f0e5f27d588689c6d5e149d98
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This way we can stream a large file through memory, rather than
loading the entire thing into a single contiguous byte array.
Change-Id: I3ada2856af2bf518f072edec242667a486fb0df1
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Instead of loading the entire object as a byte array and passing
that into the deflater, let the ObjectLoader copy the object onto
the DeflaterOutputStream. This has the nice side effect of using
some sort of stride hack in the Sun implementation that may improve
compression performance.
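Roughly what the write path becomes (a sketch):

  DeflaterOutputStream cOut = new DeflaterOutputStream(out, deflater);
  ldr.copyTo(cOut); // ObjectLoader streams the raw content itself
  cOut.finish();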
Change-Id: I3f3d681b06af0da93ab96c75468e00e183ff32fe
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Only allocate the Deflater if we can't reuse everything, but also
make sure we release it when we release the PackWriter's resources.
Change-Id: I16a32b94647af0778658eda87acbafc9a25b314a
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Blobs that are too large to read as a single byte array should be
accessed through an InputStream based interface instead, allowing
the application to walk through the data stream incrementally.
Define the basic interface to support streaming contents, but don't
implement it yet for the file based backend.
Change-Id: If9e4442e9ef4ed52c3e0f1af9398199a73145516
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Instead of creating the DirCache from a static factory method, use
an instance method on Repository, permitting the implementation to
override the method with a completely different type of DirCache
reading and writing. This would better support a repository in the
cloud strategy, or even just an in-memory unit test environment.
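Call sites move from the static factory to the instance method,
roughly (assuming the instance method is named readDirCache()):

  DirCache dc = repository.readDirCache();
  // rather than a static factory such as DirCache.read(repository)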
Change-Id: I6399894b12d6480c4b3ac84d10775dfd1b8d13e7
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Using a custom exception type makes it easier for an application
developer to understand why an exception was thrown out of a method
we declare. To remain compatible with existing callers, we still
extend off IllegalStateException.
Change-Id: Ideeef2399b11ca460a2dbb3cd80eb76aa0a025ba
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Update a number of calling sites of RevWalk to ensure the walker's
internal ObjectReader is released after the walk is no longer used.
Because the ObjectReader is likely to hold onto a native resource
like an Inflater, we don't want to leak them outside of their
useful scope.
Where possible we also try to share ObjectReaders across several
walk pools, or between a walker and a PackWriter. This permits
the ObjectReader to actually do some caching if it felt inclined
to do so.
Not everything was updated, we'll probably need to come back and
update even more call sites, but these are some of the biggest
offenders. Test cases in particular aren't updated. My plan is to
move most storage-agnostic tests onto some purely in-memory storage
solution that doesn't do compression.
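Updated call sites follow a pattern roughly like this (in this era
of the API the release method is assumed to be release()):

  RevWalk rw = new RevWalk(repository);
  try {
    // ... configure and run the walk ...
  } finally {
    rw.release(); // frees the ObjectReader and its Inflater
  }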
Change-Id: I04087ec79faeea208b19848939898ad7172b6672
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This way we can ensure it is released when the generator
is done running.
Change-Id: I6be48d26b9bd5ac176c1316a9aabdf3a897e1696
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Rather than building a custom reader, have the caller supply us one.
Change-Id: Ief2b5a6b1b75f05c8a6bc732a60d4d1041dd8254
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Instead of creating new ObjectReader for each walker, use one for
the entire connection and delegate reads through it.
Change-Id: I7f0a2ec8c9fe60b095a7be77dc423a2ff8b443a3
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We don't actually need a Repository object here, just an ObjectReader
that can load content for us. So change the API to depend on that.
However, this breaks the asCommit and asTag legacy translation methods
on RevCommit and RevTag, so we still have to keep the Repository
inside of RevWalk for those two types. Hopefully we can drop those in
the future, and then drop the Repository off the RevWalk.
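After this change a walker can be built directly over a reader,
roughly:

  ObjectReader or = repository.newObjectReader();
  RevWalk rw = new RevWalk(or);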
Change-Id: Iba983e48b663790061c43ae9ffbb77dfe6f4818e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We don't need this field to be volatile. Events are delivered by
the same thread that created the RepositoryEvent object, and thus
any cross-thread operations would need to be handled by some other
type of synchronization in the listener, and that would protect
both the repository field and any other per-event data.
Change-Id: Iefe345959e1a2d4669709dbf82962bcc1b8913e3
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Similar to what we did on Repository, the openObject method
already implied we wanted to open an object, given its main
argument was of type AnyObjectId. Simplify the method name
to just the action, has or open.
Change-Id: If055e5e0d8de0e2424c18a773f6d2bc2f66054f4
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We drop the "Object" suffix, because its pretty clear here that
we want to open an object, given that we pass in AnyObjectId as
the main parameter. We also fix the calling convention to throw
a MissingObjectException or IncorrectObjectTypeException, so that
callers don't have to do this error checking themselves.
Change-Id: I72c43353cea8372278b032f5086d52082c1eee39
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Rather than taking the ProgressMonitor objects in our constructor and
carrying them around as instance fields, take them as arguments to the
actual time consuming operations we need to run.
Change-Id: I2b230d07e277de029b1061c807e67de5428cc1c4
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Rather than storing this in an instance member, pass it down the
calling stack. It is cleaner: we don't have to poke the stream into
a temporary field and then unset it.
Change-Id: I0fd323371bc12edb10f0493bf11885d7057aeb13
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Going through ObjectReader.openObject(AnyObjectId) is faster, but
also produces cleaner application level code. The error checking
is done inside of the openObject method, which means it can be
removed from the application code.
Change-Id: Ia927b448d128005e1640362281585023582b1a3a
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If the type hint isn't OBJ_ANY and it doesn't match the actual type
observed from the object store, define the reader to throw back an
IncorrectObjectTypeException. This way the caller doesn't have to
perform this check itself before it evaluates the object data, and
we can simplify quite a few call sites.
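From the caller's side the effect is roughly:

  // Throws MissingObjectException if the object is absent, and
  // IncorrectObjectTypeException if it exists but is not a blob.
  ObjectLoader ldr = reader.open(id, Constants.OBJ_BLOB);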
Change-Id: I9f0dfa033857f439c94245361fcae515bc0a6533
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The ObjectReader API demands that we release the reader when we are
done with it. PackWriter contains a reader, which it uses for the
entire packing session. Expose the release of the reader through
a release method on the writer.
This still doesn't address the RevWalk and TreeWalk users, who
don't correctly release their reader. But it is a small step in the
right direction.
Change-Id: I5cb0b5c1b432434a799fceb21b86479e09b84a0a
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
I don't want to play games with the order of release here; it is
probably safer to release the reader before the database, just in
case the one depends on the other.
Change-Id: I2394c7d2477eaf7a7e1556fc3393c59d3b31e764
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
By doing the release at the higher level class, we can ensure
the release occurs if the inserter was allocated, even if the
implementation forgets to do this. Since the higher level class
is what allocated it, it makes sense to have it also do the release.
Change-Id: Id617b2db864c3208ed68cba4eda80e51612359ad
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Everyone else does. This must have been a spot I missed during
some sort of squash while developing the series.
Change-Id: I62eae50b618f47ee33ad7cf71fc05b724f603201
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Similar to what we did with the file code, move the pack writer
into its own package so the related classes and their package
private methods are hidden from the rest of the library.
Change-Id: Ic1b5c7c8c8d266e90c910d8d68dfc8e93586854f
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We no longer need an ObjectLoader to be lazy and try to delay
the materialization of the object content. That was done only
to support PackWriter searching for a good reuse candidate.
Instead, simplify the code base by doing the materialization
immediately when the loader asks for it, because any caller
asking for the loader is going to need the content.
Change-Id: Id867b1004529744f234ab8f9cfab3d2c52ca3bd0
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
These were only used by PackWriter to help it filter object
representations. Their only user disappeared when we rewrote the
object selection code path to use the new representation type.
Change-Id: I9ed676bfe4f87fcf94aa21e53bda43115912e145
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Rather than making a loader, and then using that to fill the object
representation, parse the header and set up our data directly.
This saves some time, as we don't waste cycles on information we
won't use right now.
The weight computed for a representation is now its actual stored
size in the pack file, rather than its inflated size. This accounts
for changes made when the compression level is modified on the
repository. It is however more costly to determine the weight of
the object, since we have to find its length in the pack. To try and
recover that cost we now cache the length as part of our ObjectToPack
record, so it doesn't have to be found during the output phase.
A LocalObjectToPack now costs us (assuming 32 bit pointers):

                        (32 bit)   (64 bit)
  vm header:             8 bytes    8 bytes
  ObjectId:             20 bytes   20 bytes
  PackedObjectInfo:     12 bytes   12 bytes
  ObjectToPack:          8 bytes   12 bytes
  LocalOTP:             20 bytes   24 bytes
                       ---------  ---------
                        68 bytes   74 bytes
Change-Id: I923d2736186eb2ac8ab498d3eb137e17930fcb50
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This move isolates all of the local file specific implementation code
into a single package, where their package-private methods and support
classes are properly hidden away from the rest of the core library.
Because of the sheer number of files impacted, I have limited this
change to only the renames and the updated imports.
Change-Id: Icca4884e1a418f83f8b617d0c4c78b73d8a4bd17
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Objects that fall completely within a single window can be worked
with in a zero-copy fashion, provided that the window is backed by
a normal byte[] and not by a ByteBuffer.
This works for a surprising number of objects. The default window
size is 8 KiB, but most deltas are quite a bit smaller than that.
Objects smaller than 1/2 of the window size have a very good chance
of falling completely within a window's array, which means we can
work with them without copying their data around.
Larger objects, or objects which are unlucky enough to span over a
window boundary, get copied through the temporary buffer. We pay
a tiny penalty to realize we can't use the zero-copy code path,
but it is easier than trying to keep track of two adjacent windows.
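Conceptually the fast path looks like this (all names here are
illustrative, not the real window code):

  if (window.hasArray() && window.contains(pos, length)) {
    // Zero-copy: hand the window's backing array straight to the user.
    use(window.array(), window.offsetOf(pos), length);
  } else {
    // Spans a window boundary (or is ByteBuffer backed): copy it out.
    byte[] tmp = new byte[length];
    copyIntoBuffer(tmp, pos, length);
    use(tmp, 0, length);
  }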
With this change (as well as everything preceding it), packing
is actually a bit faster. Some crude benchmarks based on cloning
linux-2.6.git (~324 MiB, 1,624,785 objects) over localhost using a
C Git client and the JGit daemon show we get better throughput, and
slightly better times:
  Total Time   |  Throughput
  (old)  (now) |  (old)         (now)
  -------------+---------------------------
  2m45s  2m37s |  12.49 MiB/s   21.17 MiB/s
  2m42s  2m36s |  16.29 MiB/s   22.63 MiB/s
  2m37s  2m31s |  16.07 MiB/s   21.92 MiB/s
Change-Id: I48b2c8d37f08d7bf5e76c5a8020cde4a16ae3396
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>