This string is part of the network protocol, and isn't meant to
be translated into another language. Clients actually scan for
the string "unpack error " off the wire and react magically to
this information. If it were translated, they would instead have
a protocol exception, which isn't very useful when there is already
an error occurring.
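A hedged sketch of the client side this protects; readStatusLine()
and the surrounding transport plumbing are illustrative stand-ins,
not JGit's actual internals:

    // The status line must arrive untranslated for this prefix
    // match to keep working on every client.
    String line = readStatusLine(); // e.g. "unpack error <reason>"
    if (line.startsWith("unpack error ")) {
        String reason = line.substring("unpack error ".length());
        throw new TransportException("remote unpack failed: " + reason);
    }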
Change-Id: Ia5dc8d36ba65ad2552f683bb637e80b77a7d92f0
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This reverts commit db4c516f67 since
it breaks compatibility with Eclipse 3.5, which can no longer import
the projects.
Bug: 323390
Change-Id: I3cc91364a6747cfcb4c611a9be5258f81562f726
Large delta streams are unpacked incrementally, but because a delta
can seek to a random position in the base to perform a copy we may
need to inflate the base repeatedly just to complete one delta.
We work around this by copying the base to a temporary file, and
then reading from that temporary file using random seeks instead.
It's far more efficient because we now only need to inflate the
base once.
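A minimal sketch of the approach, using plain java.io types; the
names here (objectsDir, baseLoader, copyOffset, copyLength, buf)
are illustrative rather than the actual implementation:

    // Inflate the delta base exactly once into a temporary file.
    File tmp = File.createTempFile("base", null, objectsDir);
    try (OutputStream out = new FileOutputStream(tmp)) {
        baseLoader.copyTo(out);
    }
    // Serve each delta copy instruction with a cheap random seek.
    try (RandomAccessFile base = new RandomAccessFile(tmp, "r")) {
        base.seek(copyOffset);
        base.readFully(buf, 0, copyLength);
    }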
This is still really ugly because we have to dump to a temporary
file, but at least the code can successfully process a large
file without throwing OutOfMemoryError. If speed is an
issue, the user will need to increase the JVM heap and ensure
core.streamFileThreshold is set to a higher value, so we don't use
this code path as often.
Unfortunately we lose the "optimization" of skipping over portions
of a delta base that we don't actually need in the final result.
This is going to cause us to inflate and write to disk useless
regions that were deleted and do not appear in the final result.
We could later improve on our code by trying to flatten delta
instruction streams before we touch the bottom base object, and
then only store the portions of the base we really need for the
final result and that appear out-of-order. Since that is some
pretty complex code I'm punting on it for now and just doing this
simple whole-object buffering.
Because the process umask might be permitting other users to read
files we create, we put the temporary buffers into $GIT_DIR/objects.
We can reasonably assume that if a reader can read our temporary
buffer file in that directory, they can also read the base pack
file we are pulling it from, and therefore it's not a security breach
to expose the inflated content in a file. This requires a reader
to have write access to the repository, but only if the file is
really big. I'd rather err on the side of caution here and refuse
to read a very big file into /tmp than to possibly expose secured
content because the Java 5 JVM won't let us create a protected
temporary file that only the current user can access.
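A sketch of placing the buffer under the object database instead of
/tmp; the prefix and suffix strings are illustrative:

    File objects = new File(repository.getDirectory(), "objects");
    File tmp = File.createTempFile("delta_", ".tmp", objects);
    tmp.deleteOnExit(); // best-effort cleanup if we crash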
Change-Id: I66fb80b08cbcaf0f65f2db0462c546a495a160dd
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
A tag command is added to the Git porcelain API. Tests were
also added to stress test the tag command.
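A possible use of the new command, assuming the fluent setters
shown here:

    Git git = new Git(repository);
    git.tag()
        .setName("v1.0")
        .setMessage("first release")
        .call();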
Change-Id: Iab282a918eb51b0e9c55f628a3396ff01c9eb9eb
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Implementation of a checkout (or 'git read-tree') operation which
works together with DirCache. This implementation does similar things
to WorkDirCheckout, whose main problem is that it works with the
deprecated GitIndex. Since GitIndex doesn't support the multiple
stages of a file required in merge situations, this new
implementation is needed to enable merge support.
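A sketch of driving the new checkout, assuming this constructor
shape and that headTree and mergeTree are already resolved tree ids:

    DirCache dc = repository.lockDirCache();
    DirCacheCheckout co =
        new DirCacheCheckout(repository, headTree, dc, mergeTree);
    co.checkout(); // updates both the index and the working tree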
Change-Id: I13f0f23ad60d98e5168118a7e7e7308e066ecf9c
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
The Merger was only exposing the merge base as an
AbstractTreeIterator. Since we need the merge base as a RevCommit
to generate the merge result, it is exposed here.
Change-Id: Ibe846370a35ac9bdb0c97ce2e36b2287577fbcad
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Otherwise PackFileTest.testDelta_LargeObjectChain() reproducibly
fails with OutOfMemoryError on Mac OS X 10.6.4.
Change-Id: I6a55ff9ba181102606a0d99ffd52392a1615a422
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Updates the project level settings to run the formatter
on save, and only on the edited lines.
Change-Id: I26dd69d0c95e6d73f9fdf7031f3c1dbf3becbb79
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
We should use JUnit4 for tests. This patch updates
the MANIFEST.MF and respective launch configurations.
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
PersonIdent should be parsable for an invalid commit which
contains multiple authors, like "A <a@a.org>, B <b@b.org>".
The PersonIdent(String) constructor now delegates to
RawParseUtils.parsePersonIdent().
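For example (the exact fields recovered from an invalid line are an
assumption):

    // An invalid multi-author line yields the first ident instead
    // of throwing a parse failure.
    PersonIdent p = new PersonIdent("A <a@a.org>, B <b@b.org>");
    // p.getName() -> "A", p.getEmailAddress() -> "a@a.org"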
Change-Id: Ie9798d36d9ecfcc0094ca795f5a44b003136eaf7
Because we are using the large stream size, we have to be
above the STREAM_THRESHOLD constant, which I just increased.
Change-Id: I6f10ec8558d9f751d4b547fcae05af94f1c8866b
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Applying deltas in the large streaming mode is horrifically slow.
Trying to pack icu4c is impossible because a single 11 MiB file
sits on top of a 15 MiB file through a 10-deep delta chain, which
results in this very slow inflate process.
Upping the default limit to 15 MiB lets us process this large file
in a reasonable time, but it's still low enough to prevent
exploding the heap of a very large process like Eclipse or Gerrit
Code Review.
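The limit corresponds to the core.streamFileThreshold setting
mentioned above; a hedged sketch of raising it through the JGit
config API, assuming a StoredConfig handle:

    StoredConfig cfg = repository.getConfig();
    cfg.setLong("core", null, "streamFileThreshold", 50 * 1024 * 1024);
    cfg.save();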
We have to revisit the streaming delta application process and do
something much smarter, like flatten the delta chain before we apply
it to the base. But even that is ugly, I've seen a 155 MiB delta
sitting on top of a 450 MiB file to produce a 300 MiB result object.
If the chain is deep, we may have trouble flattening it down.
Change-Id: If5a0dcbf9d14ea683d75546f104b09bb8cd8fdbb
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We miscomputed the CRC32 checksum for a REF_DELTA type of object, by
not including the full 20 byte ObjectId of the delta base in the CRC
code we use when the delta is too large to go through our two faster
small reuse code paths. This resulted in a corruption error during
packing, where the PackFile erroneously suspected the data was wrong
on the local filesystem and aborted writing, because the CRC didn't
match what we had read from the index.
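The fix amounts to feeding the raw base ObjectId through the CRC
before the delta data; a sketch (field names assumed):

    CRC32 crc = new CRC32(); // java.util.zip.CRC32
    byte[] rawBase = new byte[20];
    baseObjectId.copyRawTo(rawBase, 0);
    crc.update(rawBase); // the fix: cover the REF_DELTA base id
    crc.update(deltaData, 0, deltaData.length);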
Change-Id: I7d12cdaeaf2c83ddc11223ce0108d9bd6886e025
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We didn't fully cover what we support and what we don't. It was
also a bit hard to follow which syntaxes are supported. Clean that
up by documenting it.
Change-Id: I7b96fa6cbefcc2364a51f336712ad361ae42df2d
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We can now resolve expressions that reference a path within a
commit, designating a specific revision of a specific tree or
file in the project.
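For example (branch and path are illustrative):

    // "<rev>:<path>" resolves to the blob or tree at that path.
    ObjectId id = repository.resolve("master:docs/README");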
Change-Id: Ie6a8be629d264d72209db894bd680c5900035cc0
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We now match on the -gABBREV style output created by git describe
when it's describing a non-tagged commit, and resolve that back to
the full ObjectId using the abbreviation resolution feature that
we already support.
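For example (tag name and hash are illustrative):

    // The trailing "-g1b2c3d4" is resolved via the abbreviation.
    ObjectId id = repository.resolve("v0.8.1-25-g1b2c3d4");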
Change-Id: Ib3033f9483d9e1c66c8bb721ff48d4485bcdaef1
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Calling it by the old numerical naming scheme makes it really
hard to find the test that tests Repository.resolve(String).
Change-Id: I92d0ecbc8d66ce21bfed08888eeedf1300ffa594
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
It's wrong to return null if we are resolving an abbreviation and
we have proven it matches more than one object. We would know how
to resolve it if we had more nybbles, as there are two or more
objects with the same prefix. Declare that to the caller quite
clearly by giving them
an AmbiguousObjectException.
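A sketch of what callers can now do with it (getCandidates() as
assumed here):

    try {
        ObjectId id = repository.resolve("abc123");
    } catch (AmbiguousObjectException e) {
        // Two or more objects share this prefix; show the choices.
        for (ObjectId candidate : e.getCandidates())
            System.err.println("  " + candidate.name());
    }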
Change-Id: I01bb48e587e6d001b93da8575c2c81af3eda5a32
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If we are given a DiffEntry header that already has abbreviated
ObjectIds on it, we may still be able to resolve those locally and
output the difference. Try to do that through the new resolve API
on ObjectReader.
Change-Id: I0766aa5444b7b8fff73620290f8c9f54adc0be96
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Parsing is rewritten to use the size limited form of getCachedBytes,
thus freeing the revwalk infrastructure from needing to care about
a large object vs. a small object when it gets an ObjectLoader.
Right now we hardcode our upper bound for a commit or annotated
tag to be 15 MiB. I don't know of any that is more than 1 MiB in
the wild, so going 15x that should give us some reasonable headroom.
Change-Id: If296c211d8b257d76e44908504e71dd9ba70ffa8
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Rather than duplicating this block everywhere, reuse the limited size
form of getCachedBytes to acquire the content of an object.
Change-Id: I2e26a823e6fd0964d8f8dbfaa0fc2e8834c179c1
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
Some algorithms are coded in a way that requires us to provide them
the entire object contents as a contiguous byte array. The parsers
in RevCommit and RevTag, or our RawText objects are really good
examples of these.
Instead of duplicating this logic everywhere, let's put it into the
base ObjectLoader type. That way the caller only needs to give us
their upper size bound, and we'll do the rest of the heavy work to
figure out if the object still fits within that bound, and get them
an array that has the complete contents.
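For example, assuming an ObjectReader is on hand:

    // Returns the complete content as one contiguous array, or
    // throws LargeObjectException if the object exceeds the bound.
    int limit = 15 * 1024 * 1024;
    byte[] raw = reader.open(id).getCachedBytes(limit);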
Change-Id: Id95a7f79d2b97e39f6949370ccca2f2c9cfb1a0f
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
A chunk of code that throws LargeObjectException may or may not have
the specific ObjectId on hand when it's thrown. If it does, we want
to cache it in the exception, and put that in the message. If it is
missing we want to be able to set it later from a higher level stack
frame that does have the object handy.
Change-Id: Ife25546158868bdfa886037e4493ef8235ebe4b9
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
If the loader's stream is broken and returns to us more content
than it originally declared as the size of the object, don't
copy that onto the output stream. Instead throw EOFException
and abort fast. This way we don't follow an infinite stream, but
will at least stop when the declared size is reached.
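A sketch of the bounded copy loop this implies (in, out and loader
are assumed context):

    long remaining = loader.getSize();
    byte[] buf = new byte[8192];
    for (int n; (n = in.read(buf)) > 0;) {
        if (n > remaining)
            throw new EOFException("stream exceeds declared size");
        out.write(buf, 0, n);
        remaining -= n;
    }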
Change-Id: I7ec0c470c875f03b1f12a74a9b4d2f6e73b659bb
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If the stream is a delta decompression stream, getting the size
can be expensive. It's cheaper to get it from the stream itself
rather than from the object loader.
Change-Id: Ia7f0af98681f6d56ea419a48c6fa8eea09274b28
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If we can't resolve an abbreviation, it might be because there is
a new pack file we haven't picked up yet. Try scanning the packs
again and recheck each pack if there were differences from the last
scan we did.
Because of this, we don't have to open a pack during the test where
we generate a pack on the fly. We'll miss on the first loop during
which the PackList is the NO_PACKS magic initialization constant,
and pick up the newly created index during this retry logic.
Change-Id: I7b97efb29a695ee60c90818be380f7ea23ad13a3
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
ObjectReader implementations are now responsible for creating the
unique abbreviation of an ObjectId, or for resolving an abbreviation
back to its full form. In this latter case the reader can offer up
multiple candidates to the caller, who may be able to disambiguate
them based on context.
Repository.resolve() doesn't take multiple candidates into account
right now, but it could in the future by looking for a remaining
^0 or ^{commit} suffix and taking an expansion if there is only one
commit that matches the input abbreviation. It could also use
the distance from an annotated tag to resolve "tag-NNN-gcommit"
style strings that are often output by `git describe`.
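A sketch of the two directions, assuming the method shapes shown:

    AbbreviatedObjectId abbr = reader.abbreviate(id, 7);
    Collection<ObjectId> matches = reader.resolve(abbr);
    if (matches.size() == 1)
        id = matches.iterator().next(); // unique, safe to expand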
Change-Id: Icd3250adc8177ae05278b858933afdca0cbbdb56
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
ObjectWriter is a deprecated API that people shouldn't be using.
So get rid of it in favor of the ObjectInserter API.
Change-Id: I6218bcb26b6b9ffb64e3e470dba5dca2e0a62fd4
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This makes it easier for abstract tools like AddCommand to open the
file from the working tree, without knowing internal details about
how the tree is managed.
Change-Id: Ie64a552f07895d67506fbffb3ecf1c1be8a7b407
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Applications should favor the long style interface, especially when
their source input is a long type, e.g. coming from java.io.File.
This way when the index format is later changed to support a
larger file size than 2 GiB we can handle it by just changing the
entry code, and not need to fix a lot of applications.
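For example (entry construction is illustrative, and the long
overload is per this change):

    DirCacheEntry entry = new DirCacheEntry("path/to/file");
    entry.setLength(file.length()); // long: no truncation at 2 GiB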
Change-Id: I332563caeb110014e2d544dc33050ce67ae9e897
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
These objects should be responsible for their own formatting,
rather than delegating it to some obtuse type called ObjectInserter.
While we are at it, simplify the way we insert these into a database.
Passing in the type and calling format in application code turned
out to be a huge mistake in terms of ease-of-use of the insert API.
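A sketch of the simplified insertion (setter names as assumed here):

    CommitBuilder commit = new CommitBuilder();
    commit.setTreeId(treeId);
    commit.setAuthor(author);
    commit.setCommitter(author);
    commit.setMessage("Initial commit\n");
    ObjectId id = inserter.insert(commit); // no type constant, no format()
    inserter.flush();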
Change-Id: Id5bb95ee56aa2a002243e9b7853b84ec8df1d7bf
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Since these types no longer support reading, calling them a Builder
is a better description of what they do. They help the caller to
build a commit or a tag object.
Change-Id: I53cae5a800a66ea1721b0fe5e702599df31da05d
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We stopped supporting these types for reading, but their names are
natural candidates for someone to try to use in code, so explain
where they should be looking instead.
Change-Id: I091a1b0ef71b842016020f938ba3161431aab9c9
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
There were three cases where a JGitInternalException was created
without specifying the root cause. This has been fixed.
Change-Id: I2ee08d04732371cd9e30874b1437b61217770b6a
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
WorkingTreeIterator now optionally performs CRLF to LF conversion for
text files. A basic framework is left in place to support enabling
(or disabling) this feature based on gitattributes, and also to
support the more generic smudge/clean filter system. As there is
no gitattribute support yet in JGit this is left unimplemented,
but the mightNeedCleaning(), isBinary() and filterClean() methods
provide reasonable places to plug that in later.
[sp: All bugs inside of WorkingTreeIterator are my fault, I wrote
most of it while cherry-picking this patch and building it on
top of Marc's original work.]
CQ: 4419
Bug: 301775
Change-Id: I0ca35cfbfe3f503729cbfc1d5034ad4abcd1097e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
These classes need to be visible if an application wants to define
its own native pack based protocol embedded within another layer,
much like we already support for smart HTTP.
Change-Id: I7e2ac3ad01d15b94d340128a395fe0b2f560ff35
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The reuse system used by an object database may be able to benefit
from knowing what objects are coming next, and even improve data
throughput by delaying (or moving up) objects that are stored near
each other in the source database.
Pushing the iteration down into the reuse code makes it possible
for a smarter implementation to aggregate reuse. But for the
standard pack file format on disk we don't bother; it's already
quite efficient.
Change-Id: I64f0048ca7071a8b44950d6c2a5dfbca3be6bba6
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Some instances may benefit from having access to memory efficient
storage for some small values, like single flag bits. Give up a
portion of our delta depth field to make 4 bits available to any
subclass that wants it.
This still gives us room for delta chains of 1,048,576 objects,
and that is just insane. Unpacking 1 million objects to get to
something takes longer than most users are willing to wait for
data from Git.
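A sketch of the split (widths per the text: 4 flag bits above a
20-bit depth; the flags field is assumed context):

    private static final int DEPTH_BITS = 20;
    private static final int DEPTH_MASK = (1 << DEPTH_BITS) - 1;

    int getDeltaDepth() { return flags & DEPTH_MASK; }
    int getExtFlags()   { return (flags >>> DEPTH_BITS) & 0xf; }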
Change-Id: If17ea598dc0ddbde63d69a6fcec0668106569125
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
An ObjectReader implementation may be very slow for a single object,
but yet support bulk queries efficiently by batching multiple small
requests into a single larger request. This easily happens when the
reader is built on top of a database that is stored on another host,
as the network round-trip time starts to dominate the operation cost.
RevWalk, ObjectWalk, UploadPack and PackWriter are the first major
users of this new bulk interface, with the goal being to support an
efficient way to pack a repository for a fetch/clone client when the
source repository is stored in a high-latency storage system.
Processing the want/have lists is now done in bulk, to remove
the high costs associated with common ancestor negotiation.
PackWriter already performs object reuse selection in bulk, but it
now can also do the object size lookup and object counting phases
with higher efficiency. Actual object reuse, deltification, and
final output are still doing sequential lookups, making them a bit
more expensive to perform.
Change-Id: I4c966f84917482598012074c370b9831451404ee
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
By giving the reader information about the roots of a revision
traversal, some readers may be able to prefetch information from
their backing store using background threads in order to reduce
data access latency. However this isn't typically necessary so
the default reader implementation doesn't react to the advice.
Change-Id: I72c6cbd05cff7d8506826015f50d9f57d5cda77e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
ObjectReader implementations may wish to use multiple threads in
order to evaluate object reuse faster. Let the reader make that
decision by passing the iteration down into the reader.
Because the work is pushed into the reader, it may need to locate a
given ObjectToPack given its ObjectId. This can easily occur if the
reader has sent a list of ObjectIds to the object database and gets
back information keyed only by ObjectId, without the ObjectToPack
handle. Expose lookup using the PackWriter's own internal map,
so the reader doesn't need to build a redundant copy to track the
association of ObjectId back to ObjectToPack.
Change-Id: I0c536405a55034881fb5db92a2d2a99534faed34
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When the output stream is deeply buffered (e.g. 1 MiB or more in
an HTTP servlet on some containers), kicking the header out early
prevents the client from stalling hard until the first 1 MiB has
been received and it can finally process the pack header. Forcing a
flush here lets the client see the header and start its progress
monitor for "Receiving objects: (1/N)" so the user knows there
is still activity occurring, even though the buffering may cause
there to be some lag as the buffer fills up on the sending side.
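A sketch of the idea (packHeader and out are assumed context):

    out.write(packHeader, 0, packHeader.length);
    out.flush(); // client can start "Receiving objects: (1/N)" now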
Change-Id: I3edf39e8f703fe87a738dc236d426b194db85e3a
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>