Pass through the addAll request to our underlying ArrayList.
This way the underlying ArrayList grows no more than once during the
call, which may be important if the list was originally allocated
at the default size of 16, but 64 Edits are being added.
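Roughly, the pass-through looks like this (a hedged sketch; the
container field name and initial capacity are assumptions based on
the description above, and JGit's Edit type is assumed on the
classpath):

    import java.util.AbstractList;
    import java.util.ArrayList;
    import java.util.Collection;

    class EditList extends AbstractList<Edit> {
        private final ArrayList<Edit> container = new ArrayList<Edit>(16);

        @Override
        public boolean addAll(Collection<? extends Edit> c) {
            // One capacity check inside ArrayList.addAll, instead of
            // a possible grow on every individual add() call.
            return container.addAll(c);
        }

        @Override
        public Edit get(int index) {
            return container.get(index);
        }

        @Override
        public int size() {
            return container.size();
        }
    }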
Change-Id: I31c3261e895766f82c3c832b251a09f6e37e8860
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
FetchCommand was missing the ability to set dry run and thin
preferences on the transport operation.
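With the new setters in place, usage could look like this (an
illustrative sketch of the porcelain API):

    Git git = new Git(repository);
    FetchResult result = git.fetch()
            .setRemote("origin")
            .setDryRun(true) // report what would change, update nothing
            .setThin(true)   // permit a thin pack on the wire
            .call();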
Change-Id: I0bef388a9b8f2e3a01ecc9e7782aaed7f9ac82ce
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
An optional inCore parameter to the Resolver/Strategy will
instruct it to perform all operations in memory and avoid
modifying the working directory even if there is one.
Change-Id: I5b873dead3682f79110f58d7806e43f50bcc5045
The hash codes returned by RawTextComparator (or used by the
SimilarityIndex) play an important role in the speed of any
algorithm that is based upon them. The lower the number of
collisions produced by the hash function, the shorter the hash
chains within hash tables will be, and the less likely we are to
fall into O(N^2) runtime behaviors for algorithms like PatienceDiff.
Our prior hash function was absolutely horrid, so replace it with
the proper definition of the DJB hash that was originally published
by Professor Daniel J. Bernstein.
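For reference, a sketch of the DJB hash over a region of a raw
buffer (the exact signature JGit uses is assumed):

    static int hashRegion(byte[] raw, int ptr, int end) {
        int hash = 5381;
        for (; ptr < end; ptr++)
            hash = ((hash << 5) + hash) + (raw[ptr] & 0xff); // hash * 33 + c
        return hash;
    }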
To support this assertion, below are tables listing the maximum
number of collisions that result when hashing the unique lines in
each source code file of 3 randomly chosen projects:
test_jgit: 931 files; 122 avg. unique lines/file
Algorithm     | Collisions
--------------+-----------
prior_hash    |        418
djb           |          5
sha1          |          6
string_hash31 |         11
test_linux26: 30198 files; 258 avg. unique lines/file
Algorithm     | Collisions
--------------+-----------
prior_hash    |       8675
djb           |         32
sha1          |          8
string_hash31 |         32
test_frameworks_base: 8381 files; 184 avg. unique lines/file
Algorithm     | Collisions
--------------+-----------
prior_hash    |       4615
djb           |         10
sha1          |          6
string_hash31 |         13
We can clearly see that prior_hash performed very poorly, resulting
in 8,675 collisions (elements in the same hash bucket) for at least
one file in the Linux kernel repository. This leads to some very
bad O(N) style insertion and lookup performance, even though the
hash table was sized to be the next power-of-2 larger than the
total number of unique lines in the file.
The djb hash we are replacing prior_hash with performs closer to
SHA-1 in terms of having very few collisions. This indicates it
provides a reasonably distributed output for this type of input,
despite being a much simpler algorithm (and therefore will be much
faster to execute).
The string_hash31 function is provided just for comparison; it
is the algorithm commonly used by java.lang.String.hashCode().
However, life isn't quite this simple.
djb produces a 32 bit hash code, but our hash tables are always
smaller than 2^32 buckets. Mashing the 32 bit code into an array
index used to be done by simply taking the lower bits of the hash
code with a bitwise AND operator. This unfortunately still produces
many collisions, e.g. 32 on the linux-2.6 repository files.
From [1] we can apply a final "cleanup" step to the hash code to
mix the bits together a little better, and give priority to the
higher order bits as they include data from more bytes of input
(a sketch of this step follows the tables below):
test_jgit: 931 files; 122 avg. unique lines/file
Algorithm     | Collisions
--------------+-----------
prior_hash    |        418
djb           |          5
djb + cleanup |          6
test_linux26: 30198 files; 258 avg. unique lines/file
Algorithm     | Collisions
--------------+-----------
prior_hash    |       8675
djb           |         32
djb + cleanup |          7
test_frameworks_base: 8381 files; 184 avg. unique lines/file
Algorithm     | Collisions
--------------+-----------
prior_hash    |       4615
djb           |         10
djb + cleanup |          7
This is a massive improvement, as the number of collisions for
common inputs drops to acceptable levels, and we haven't really
made the hash functions any more complex than they were before.
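A hedged sketch of that cleanup step, multiplying by a constant
derived from the golden ratio and keeping the high-order bits (the
exact constant and shift here are illustrative):

    static int cleanupHash(int hash, int tableBits) {
        hash *= 0x9E375979; // mix low-order bits up into the high bits
        return hash >>> (32 - tableBits); // index favors high-order bits
    }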
[1] http://lkml.org/lkml/2009/10/27/404
Change-Id: Ia753b695de9526a157ddba265824240bd05dead1
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
As it turns out, every single diff algorithm we might try to
implement can benefit from using the SequenceComparator's native
concept of the simple reduceCommonStartEnd() step. For most inputs,
there can be a significant number of elements that can be removed
from the space the DiffAlgorithm needs to consider, which will
reduce the overall running time for the final solution.
Pool this logic inside of DiffAlgorithm itself as a default, but
permit a specific algorithm to override it when necessary.
Convert MyersDiff to use this reduction to reduce the space it
needs to search, making it perform slightly better on common inputs.
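The default could be shaped roughly like this (a sketch;
diffNonCommon() is an assumed name for the algorithm-specific hook):

    public <S extends Sequence> EditList diff(
            SequenceComparator<? super S> cmp, S a, S b) {
        // Trim elements common to the start and end of both inputs,
        // then run the concrete algorithm on the interior region only.
        Edit region = cmp.reduceCommonStartEnd(a, b,
                new Edit(0, a.size(), 0, b.size()));
        return diffNonCommon(cmp, a, b, region);
    }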
Change-Id: I14004d771117e4a4ab2a02cace8deaeda9814bc1
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
PatienceDiff always uses a HashedSequence, which promises to provide
constant time access for hash codes during the equals method and
aborts fast if the hash codes don't match. Therefore we don't need
to cache the hash codes inside of the index, saving us memory.
Change-Id: I80bf1e95094b7670e6c0acc26546364a1012d60e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Most diff implementations really want to use cached hash codes for
elements, rather than element equality, as they need to perform many
compares, and unique hash codes for elements can really speed that
process up.
To make it easier to define element hash functions, move the caching
of hash codes into a wrapper sequence type, so that individual
sequence types like RawText don't need to do this themselves. This
has a nice property of also allowing the sequence to no longer care
about the specific SequenceComparator that is going to be used, and
permits the caching to only examine the middle region that isn't
common to the two inputs.
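A minimal sketch of the wrapper's shape (the field layout is
assumed):

    final class HashedSequence<S extends Sequence> extends Sequence {
        final S base;       // the wrapped sequence
        final int[] hashes; // cached hash code for each element

        HashedSequence(S base, int[] hashes) {
            this.base = base;
            this.hashes = hashes;
        }

        @Override
        public int size() {
            return base.size();
        }
    }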
Change-Id: If8623556da9419117b07c5073e8bce39de02570e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This is a faster, exact-match based form that tries to improve
performance for the common case of the header and trailer of
a text file not changing at all. After this fast path we use
the slower path in the superclass, based on equals(), so the
whitespace-ignoring modes still work.
Some simple performance testing showed a major improvement over the
older implementation for a common edit we see in JGit. The test
compared blob 29a89bc and 372a978, which is the ObjectDirectory.java
file difference in commit 41dd9ed1c0.
The two text files are approximately 22 KiB in size.
DEFAULT old 203900 ns
DEFAULT new 100400 ns
This new version is 2x faster for the DEFAULT comparator, which does
not treat space specially. This is because we can now examine a
larger swath of text with fewer instructions per byte compared. The
older algorithm had to stop at each line break and recompute how to
examine the next line, while the new algorithm only stops when the
first difference is found.
WS_IGNORE_ALL old 298500 ns
WS_IGNORE_ALL new 63300 ns
It's 4.7x faster for the whitespace ignore comparator, as the common
header and footer do not have a whitespace difference. Avoiding the
special case handling for whitespace on each byte considered saves a
lot of time.
Since most edits to source code (and other text-like files) appear in
the interior of the file, fast elimination of common header/footer
means faster diff throughput. In the less common case of an actual
header or footer edit, the common header/footer elimination is stopped
rather quickly either way, so there is very little downside to the
optimization applied here.
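The fast path amounts to a raw byte scan that is kept line-aligned
(a sketch with illustrative names):

    static int commonPrefixBytes(byte[] a, byte[] b) {
        int limit = Math.min(a.length, b.length);
        int p = 0;
        while (p < limit && a[p] == b[p])
            p++;
        while (p > 0 && a[p - 1] != '\n')
            p--; // retreat to the end of the last fully matched line
        return p; // count of common leading bytes, line-aligned
    }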
Change-Id: I1d501b4c3ff80ed086b20bf12faf51ae62167db7
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
DiffAlgorithm implementations may find it useful to construct an Edit
and use that to later subsequence the two base sequences, so define
two new utility methods a() and b() to construct the A and B ranges.
Once a subsequence has had Edits created for it, the indexes are
within the space of the subsequence. These must be shifted back to
the original base sequence's indexes. Define toBase() as a utility
method to perform that shifting work in-place, so DiffAlgorithm
implementations have an efficient way to convert back to the caller's
original space.
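Under the semantics described above, the utilities might look like
this (a sketch; the Subsequence constructor is assumed):

    static <S extends Sequence> Subsequence<S> a(S base, Edit region) {
        return new Subsequence<S>(base, region.getBeginA(), region.getEndA());
    }

    static <S extends Sequence> Subsequence<S> b(S base, Edit region) {
        return new Subsequence<S>(base, region.getBeginB(), region.getEndB());
    }

toBase() then adds the subsequence start offsets back onto each
edit's endpoints in-place, restoring the caller's coordinates.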
Change-Id: I8d788e4d158b9f466fa9cb4a40865fb806376aee
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Adds API for performing git fetch operations.
Change-Id: Idd95664fd4e3bca03211e4ffda3e354849f92a35
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
If a thin pack has a large delta we need to be able to open
its cached copy from the loose object directory through the
CachedObjectDatabase handle. Unfortunately that did not support the
openObject2 method, which the LargePackedDeltaObject used directly
to bypass looking at the pack files.
Bug: 324868
Change-Id: I1d5886a6c3254c6dea2852d50b8614c31a93e615
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When running IndexPack we use a CachedObjectDirectory, which
knows what objects are loose and tries to avoid stat(2) calls for
objects that do not exist in the repository, as stat(2) on Win32
is very slow.
However large delta objects found in a pack file are expanded into
a loose object, in order to avoid costly delta chain processing
when that object is used as a base for another delta.
If this expand occurs while working with the CachedObjectDirectory,
we need to update the cached directory data to include this new
object, otherwise it won't be available when we try to open it
during the object verify phase.
Bug: 324868
Change-Id: Idf0c76d4849d69aa415ead32e46a435622395d68
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When creating a new FileRepository, probe the capability of the
local filesystem and set core.filemode based on how it reacts.
We can't just rely on FS.supportsExecute() because a POSIX system
(which usually does support execute) might be storing the repository
on a partition that doesn't have execute support (e.g. plain FAT-32).
Creating a temporary file, setting both states, checking we get
the desired results will let us set the variable correctly on
all systems.
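The probe could be sketched as follows (using the Java 6 File API
for brevity; JGit's FS abstraction does the real work):

    static boolean probeFileMode(File gitDir) throws IOException {
        File probe = File.createTempFile("filemode", null, gitDir);
        try {
            // Both transitions must be honored for filemode = true.
            return probe.setExecutable(true) && probe.canExecute()
                    && probe.setExecutable(false) && !probe.canExecute();
        } finally {
            probe.delete();
        }
    }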
Change-Id: I551488ea8d352d2179c7b244f474d2e3d02567a2
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
In PlotCommitList.enter() commits are positioned on lanes for visual
presentation. This implementation was buggy: commits without
children (often the starting points for the RevWalk) were not
positioned on separate lanes.
The problem was that when handling commits with multiple children
(that's where branches fork out), the case where some children had
not yet been positioned on a lane was not handled. I fixed that and
added a number of tests which specifically test the layout of commits
on lanes.
Bug: 300282
Bug: 320263
Change-Id: I267b97ecccb5251cec54cec90207e075ab50503e
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
A diff algorithm may find this type useful if it wants to delegate a
particular range of elements to another algorithm, without changing
the underlying sequence types.
Change-Id: I4544467781233e21ac8b35081304b2bad7db00f6
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This makes it easier to parametrize DiffFormatter with a different
implementation, as we later plan to add PatienceDiff to JGit.
Change-Id: Id35ef478d5fa20fe10a1ba297f9436fd7adde9ce
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
git allows remotes to be relative paths, but the regex
validating URLs wouldn't accept anything starting with "..".
Other functionality works fine with these paths.
Bug: 311300
Change-Id: Ib74de0450a1c602b22884e19d994ce2f52634c77
Instead of spooling large delta bases into temporary files and then
immediately deleting them afterwards, spool the large delta out to
a normal loose object. Later any requests for that large delta can
be answered by reading from the loose object, which is much easier
to stream efficiently for readers.
Since the object is now duplicated, once in the pack as a delta and
again as a loose object, any future prune-packed will automatically
delete the loose object variant, releasing the wasted disk space.
As prune-packed is run automatically during either repack or gc, and
gc --auto triggers automatically based on the number of loose objects,
we get automatic cache management for free. Large objects that were
unpacked will be periodically cleared out, and will simply be restored
later if they are needed again.
After a short offline discussion with Junio Hamano today, we may want
to propose a change to prune-packed to hold onto larger loose objects
which also exist in pack files as deltas, if the loose object was
recently accessed or modified in the last 2 days.
Change-Id: I3668a3967c807010f48cd69f994dcbaaf582337c
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Recently created objects are usually what branches point to, and
are usually written out as loose objects. But due to the high cost
of asking the operating system if a file exists, these are the last
thing that ObjectDirectory examines when looking for an object by
its ObjectId.
Caching recently seen loose objects permits the opening code to
jump directly to the loose object, accelerating lookup for branch
heads that are accessed often.
To avoid exploding the cache, it's limited to approximately 2048
entries. When more ids are added, the table is simply cleared
and reset in size.
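A minimal sketch of that policy, using a plain set for illustration
(JGit keeps its own ObjectId map type; names here are assumed):

    private volatile Set<ObjectId> looseSeen = new HashSet<ObjectId>();

    void rememberLoose(ObjectId id) {
        Set<ObjectId> s = looseSeen;
        if (s.size() >= 2048)
            looseSeen = s = new HashSet<ObjectId>(); // clear, don't evict
        s.add(id);
    }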
Change-Id: I18f483217412b102f754ffd496c87061d592e535
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This class is used only to cache the unpacked form of an object that
was used as a base for another object. The theory goes that if an
object is used as a delta base for A, it will probably also be a
delta base for B, C, D, E, etc. and therefore having an unpacked copy
of it on hand will make delta resolution for the others very fast.
However since objects are usually only accessed once, we don't want
to cache everything we unpack, just things that we are likely to
need again. The only things we need again are the delta bases.
Hence, it's a delta base cache.
This gets us the class name UnpackedObjectCache back, so we can
use it to actually create a cache of unpacked object information.
Change-Id: I121f356cf4eca7b80126497264eac22bd5825a1d
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The core.autocrlf variable can take on three values: false, true,
and input. Parsing it as a boolean is wrong; we instead need to
parse a tri-state enumeration.
Add support for parsing and setting enum values, converting between
Java constants and the text-based configuration file, and use that
to handle the autocrlf variable.
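Usage might then look like this (illustrative; assumes a getEnum
accessor of this shape):

    enum AutoCRLF { FALSE, TRUE, INPUT }

    AutoCRLF mode = config.getEnum("core", null, "autocrlf", AutoCRLF.FALSE);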
Bug: 301775
Change-Id: I81b9e33087a33d2ef2eac89ba93b9e83b7ecc223
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Instead of making the sequence itself responsible for the equivalence
function, use an external function that is supplied by the caller.
This cleans up the code because we now say cmp.equals(a, ai, b, bi)
instead of a.equals(ai, b, bi).
This refactoring also removes the odd concept of creating different
types of sequences to have different behaviors for whitespace
ignoring. Instead DiffComparator now supports singleton functions
that apply a particular equivalence algorithm to a type of sequence.
Change-Id: I559f494d81cdc6f06bfb4208f60780c0ae251df9
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When checkReferencedIsReachable is set in ReceivePack we are trying
to prove that the push client is permitted to access an object that
it did not send to us, but that the received objects link to either
via a link inside of an object (e.g. commit parent pointer or tree
member) or by a delta base reference.
To do this check we are making a list of every potential delta base,
and then ensuring that every delta base used appears on this list.
If a delta base does not appear on this list, we abort with an error,
letting the client know we are missing a particular object.
Preventing spurious errors about missing delta base objects requires
us to use the exact same list of potential delta bases as the remote
push client used. This means we must use TOPO ordering, and we
need to enable BOUNDARY sorting so that ObjectWalk will correctly
include any trees found during the enumeration back to the common
merge base between the interesting and uninteresting heads.
To ensure JGit's own push client matches this same potential delta
base list, we need to undo 60aae90d4d ("Disable topological
sorting in PackWriter") and switch back to using the conventional
TOPO ordering for commits in a pack file. This ensures that our
own push client will use the same potential base object list as
checkReferencedIsReachable uses on the receiving side.
Change-Id: I14d0a326deb62a43f987b375cfe519711031e172
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Since we are only checking the links between objects we don't need
to hold onto commit messages after their headers have been parsed
by the walker. Dropping them saves a bit of memory, which is always
good when accepting huge pack files.
Change-Id: I378920409b6acf04a35cdf24f81567b1ce030e36
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If the copy instruction was larger than the input buffer given to us,
we copied the wrong part of the base stream during the next read().
This occurred on really big binary files where a copy instruction
of 64k wasn't unreasonable, but the caller's buffer was only 8192
bytes long. We copied the first 8192 bytes correctly, but then
seeked the base stream back to the start of the copy region on
the second read of 8192 bytes. Instead of a sequence like ABCD
being read into the caller, we read AAAA.
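The shape of the fix, with illustrative names: track how much of
the current copy region has already been delivered, and resume
from there instead of restarting at the region's beginning.

    // Inside read(buf, off, len), while servicing a copy instruction:
    int n = (int) Math.min(copySize - copied, len);
    base.seek(copyOffset + copied); // resume mid-region, don't restart
    base.readFully(buf, off, n);
    copied += n;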
Change-Id: I240a3f722a3eda1ce8ef5db93b380e3bceb1e201
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
As ObjectStreams are supposed to be buffered, most implementors will
be wrapping their underlying stream inside of a BufferedInputStream
in order to satisfy this requirement. Because developers are by
nature lazy, they will use the default buffer size rather than
specify their own.
The OpenJDK JRE implementations use 8192 as the default buffer
size, and when the higher level reader uses the same buffer size
the buffers "stack" nicely by avoiding a copy to the internal
buffer array. As OpenJDK is a popular virtual machine, we should
try to benefit from this nice stacking property during copyTo().
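So copyTo() can simply pump with a matching 8192 byte buffer (a
sketch, imports omitted):

    static void copyTo(InputStream in, OutputStream out) throws IOException {
        final byte[] buf = new byte[8192]; // BufferedInputStream's default
        int n;
        while ((n = in.read(buf, 0, buf.length)) > 0)
            out.write(buf, 0, n);
    }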
Change-Id: I69d53f273b870b841ced2be2e9debdfd987d98f4
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We can slightly optimize this method by removing some compares
based on knowledge of how the orderings have to work. This way
a getType() invocation requires at most 2 int compares for any
result, vs. the 6 required to find REPLACE before.
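One way to arrange the ladder so no result needs more than two
compares (a sketch consistent with the description):

    public final Type getType() {
        if (beginA < endA) {
            if (beginB < endB)
                return Type.REPLACE;
            return Type.DELETE;
        }
        if (beginB < endB)
            return Type.INSERT;
        return Type.EMPTY;
    }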
Change-Id: I62a04cc513a6d28c300d1c1496a8608d5df4efa6
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Exposing isEmpty, getLengthA, and getLengthB makes it easier to examine
the state of an edit and work with it from higher level code.
The before and after cut routines make it easy to split an edit
that contains another edit, such as to decompose a REPLACE that
contains a common sequence within it.
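Assumed semantics of the cut routines (a sketch):

    public final Edit before(Edit cut) {
        // The part of this edit that lies ahead of cut's region.
        return new Edit(beginA, cut.beginA, beginB, cut.beginB);
    }

    public final Edit after(Edit cut) {
        // The part of this edit that follows cut's region.
        return new Edit(cut.endA, endA, cut.endB, endB);
    }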
Change-Id: Id63d6476a7a6b23acb7ab237d414a0a1a7200290
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We shouldn't escape non-special ASCII characters such as '@' or '~'.
These are valid in a path name on POSIX systems, and may appear as
part of a path in a GNU or Git style patch script. Escaping them
into octal just obfuscates the user's intent, with no gain.
When parsing an escaped octal sequence, we must parse no more
than 3 digits. That is, "\1002" is actually "@2", not the Unicode
character \u0202.
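The bounded parse can be sketched like so (isOctalDigit is an
assumed helper):

    int cv = 0, digits = 0;
    while (digits < 3 && ptr < end && isOctalDigit(buf[ptr])) {
        cv = (cv << 3) | (buf[ptr++] - '0');
        digits++;
    }
    // "\1002" thus decodes to 0100 ('@') followed by a literal '2'.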
Change-Id: I3a849a0d318e69b654f03fd559f5d7f99dd63e5c
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
QuotedString.GIT_PATH returns the input reference exactly if
the string does not require quoting, otherwise it returns a
copy that contains the quotes on either end, plus escapes in
the middle where necessary to meet conventions.
Testing the return against '"' + name + '"' is always false,
because GIT_PATH will never return it that way. The only way
we have quotes on either end is if there is an escape in the
middle, in which case the string isn't equal anyway.
Change-Id: I4d21d8e5c7da0d7df9792c01ce719548fa2df16b
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If we get an exception while indexing the incoming pack, it's likely
a stream corruption. We already report an error to the client, but
we eat the stack trace, which makes debugging issues related to a
bug inside of JGit nearly impossible. Rethrow it under a new type
UnpackException, so embedding servers or applications can catch the
error and provide it to a human who might be able to forward such
traces onto a JGit developer for evaluation.
Change-Id: Icad41148bbc0c76f284c7033a195a6b51911beab
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Instead of getting the limit from CoreConfig, use the larger of the
reader's limit or 5 MiB, under the assumption that any annotated tag
or commit of interest should be under 5 MiB. But if a repository
was really insane and had bigger objects, the reader implementation
can set its streaming limit higher in order to allow RevWalk to
still process it.
Change-Id: If2c15235daa3e2d1f7167e781aa83fedb5af9a30
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
IDEs like Eclipse offer up the settings in WindowCacheConfig to the
user as a global set of options that are configured for the entire
JVM process, not per-repository, as the cache is shared across the
entire JVM. The limit on how much we are willing to allocate for
an object buffer is similar to the limit on how much we can use for
data caches: allocating that much space impacts the entire JVM and
not just a single repository, so it should be a global limit.
Change-Id: I22eafb3e223bf8dea57ece82cd5df8bfe5badebc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If the iterators passed into a diff formatter are working tree
iterators, we should enable skipping of ignored files, as
well as actually pull up the current content from the working tree
rather than getting it from the repository.
Because we abstract away the working directory access logic,
we can now actually support rename detection between the working
directory and the local repository when using a DiffFormatter.
This means it's possible for an application to show an unstaged
delete-add pair as a rename if the add path is not ignored.
(Because the ignored file wouldn't show up in our difference output.)
Even more interesting is we can now do rename detection between any
two working trees, if both input iterators are WorkingTreeIterators.
Unfortunately we don't (yet) optimize the case of comparing the
working tree against the index, where we could take advantage of
cached stat data to rule out non-dirty paths.
Change-Id: I4c0598afe48d8f99257266bf447a0ecd23ca7f5e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When comparing a DirCache and a WorkingTree using ANY_DIFF we
sometimes didn't recurse into a subtree when both sides gave us
zeroId() back for the identity of a subtree. This happens when the
DirCache doesn't have a valid cache tree for the subtree, as then
it uses zeroId() for the ObjectId of the subtree, which then appears
to be equal to the zeroId() of the WorkingTreeIterator's subtree.
We work around this by adding a hasId() method that returns true
only if this iterator has a valid ObjectId. The idEquals method
on TreeWalk then only performs a compare between two iterators if
both iterators have a valid id.
Change-Id: I695f7fafbeb452e8c0703a05c02921fae0822d3f
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Applications just want a quick way to configure our diff
implementation, and then just want to use it without a lot of fuss.
Move all of the rename detection logic and path following logic
out of our pgm package and into DiffFormatter itself, making it
much easier for a GUI to take advantage of the features without
duplicating a lot of code.
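Possible usage after the move (method names assumed):

    DiffFormatter df = new DiffFormatter(out);
    df.setRepository(repo);
    df.setDetectRenames(true);
    for (DiffEntry entry : df.scan(oldTreeId, newTreeId))
        df.format(entry);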
Change-Id: I4b54e987bb6dc804fb270cbc495fe4cae26c7b0e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
canResetHead now returns true. Resetting mixed / hard now works
in EGit while in the merging state.
Change-Id: I1512145bbd831bb9734528ce8b71b1701e3e6aa9
Signed-off-by: Jens Baumgart <jens.baumgart@sap.com>
Sorting the array can be useful when it's being used as a map of pairs
that are appended into the array and then later merge-joined against
another array of similar semantics.
Change-Id: I2e346ef5c99ed1347ec0345b44cda0bc29d03e90
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We know exactly how many lines we need by the time we compute our
per-line hashes, as we have already built the lines IntList to give
us the starting position of each line in the buffer. Using that
we can properly size the array, and don't need the dynamic growing
feature of IntList. So drop the indirection and just use a fixed
size array.
Change-Id: I5c8c592514692a8abff51e5928aedcf71e100365
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Create a new 'org.eclipse.jgit.api.errors' package to contain
exceptions related to using the Git porcelain API.
Change-Id: Iac1781bd74fbd520dffac9d347616c3334994470
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>