Pass through the addAll request to our underlying ArrayList.
This way the underlying ArrayList grows no more than once during the
call, which may be important if the list was originally allocated
at the default size of 16, but 64 Edits are being added.
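In code, the change is essentially a straight delegation; a minimal
sketch, assuming the list wraps an ArrayList<Edit> in a field named
"container" (the field name is an assumption here):

  @Override
  public boolean addAll(Collection<? extends Edit> c) {
      // ArrayList.addAll() grows the backing array at most once for
      // the whole batch, instead of once per appended element.
      return container.addAll(c);
  }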
Change-Id: I31c3261e895766f82c3c832b251a09f6e37e8860
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This is the test suite I was using to help understand why we had
such a high collision rate with RawTextComparator, and to select
a replacement function.
Since it's not something we will run very often, let's make it a
program in the debug package rather than a JUnit test. This way
we can run it on demand against any corpus of files we choose,
but we aren't bottlenecking our daily builds running tests with
no assertions.
Adding a new hash function to this suite is simple: just define
a new instance member of type "Hash" with the logic applied to
the region passed in.
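For example, a hypothetical entry might look like this (the exact
shape of "Hash" is assumed; only the region-hashing idea matters):

  final Hash rotate_xor = new Hash() {
      @Override
      int hash(byte[] raw, int ptr, int end) {
          int h = 0;
          for (; ptr < end; ptr++)
              h = (h << 5) ^ (raw[ptr] & 0xff); // toy mixing step
          return h;
      }
  };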
Change-Id: Iec0b176adb464cf95b06cda157932b79c0b59886
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The hash codes returned by RawTextComparator (or used by the
SimilarityIndex) play an important role in the speed of any
algorithm that is based upon them. The lower the number of
collisions produced by the hash function, the shorter the hash
chains within hash tables will be, and the less likely we are to
fall into O(N^2) runtime behaviors for algorithms like PatienceDiff.
Our prior hash function was absolutely horrid, so replace it with
the proper definition of the DJB hash that was originally published
by Professor Daniel J. Bernstein.
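For reference, a sketch of the classic function as widely published
(the multiply-by-33 form):

  int hash(byte[] raw, int ptr, int end) {
      int h = 5381;
      for (; ptr < end; ptr++)
          h = ((h << 5) + h) + (raw[ptr] & 0xff); // h * 33 + byte
      return h;
  }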
To support this assertion, below is a table listing the maximum
number of collisions that result when hashing the unique lines in
each source code file of 3 randomly chosen projects:
test_jgit: 931 files; 122 avg. unique lines/file
Algorithm     | Collisions
--------------+-----------
prior_hash    |        418
djb           |          5
sha1          |          6
string_hash31 |         11
test_linux26: 30198 files; 258 avg. unique lines/file
Algorithm     | Collisions
--------------+-----------
prior_hash    |       8675
djb           |         32
sha1          |          8
string_hash31 |         32
test_frameworks_base: 8381 files; 184 avg. unique lines/file
Algorithm     | Collisions
--------------+-----------
prior_hash    |       4615
djb           |         10
sha1          |          6
string_hash31 |         13
We can clearly see that prior_hash performed very poorly, resulting
in 8,675 collisions (elements in the same hash bucket) for at least
one file in the Linux kernel repository. This leads to some very
bad O(N) style insertion and lookup performance, even though the
hash table was sized to be the next power-of-2 larger than the
total number of unique lines in the file.
The djb hash we are replacing prior_hash with performs closer to
SHA-1 in terms of having very few collisions. This indicates it
provides a reasonably distributed output for this type of input,
despite being a much simpler algorithm (and therefore much faster
to execute).
The string_hash31 function is provided just for comparison; it is
the algorithm commonly used by java.lang.String.hashCode().
However, life isn't quite this simple.
djb produces a 32 bit hash code, but our hash tables are always
smaller than 2^32 buckets. Mashing the 32 bit code into an array
index used to be done by simply taking the lower bits of the hash
code with a bitwise AND operator. This unfortunately still produces
many collisions, e.g. 32 on the linux-2.6 repository files.
From [1] we can apply a final "cleanup" step to the hash code to
mix the bits together a little better, and give priority to the
higher order bits as they include data from more bytes of input:
test_jgit: 931 files; 122 avg. unique lines/file
Algorithm     | Collisions
--------------+-----------
prior_hash    |        418
djb           |          5
djb + cleanup |          6
test_linux26: 30198 files; 258 avg. unique lines/file
Algorithm     | Collisions
--------------+-----------
prior_hash    |       8675
djb           |         32
djb + cleanup |          7
test_frameworks_base: 8381 files; 184 avg. unique lines/file
Algorithm     | Collisions
--------------+-----------
prior_hash    |       4615
djb           |         10
djb + cleanup |          7
This is a massive improvement, as the number of collisions for
common inputs drops to acceptable levels, and we haven't really
made the hash functions any more complex than they were before.
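In code, the cleanup step from [1] might look like the following
sketch; the multiplier shown is the 32-bit golden-ratio constant
discussed in that thread, and the exact constant adopted here is
an assumption:

  static int cleanup(int hash, int tableBits) {
      // Multiplication mixes low bits upward; keeping only the high
      // bits favors the positions that saw the most input bytes.
      return (hash * 0x9e370001) >>> (32 - tableBits);
  }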
[1] http://lkml.org/lkml/2009/10/27/404
Change-Id: Ia753b695de9526a157ddba265824240bd05dead1
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Because PatienceDiff works by looking for common unique lines within
the region, the DiffTestDataGenerator needs to be modified to produce
a unique character for each region. If we don't give PatienceDiff
a few unique points, it will just offer back a single REPLACE edit
that covers the entire file, and this doesn't tell us very much.
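A hypothetical sketch of the idea, with every name below invented
for illustration:

  static String generate(int regionCount) {
      StringBuilder text = new StringBuilder();
      char marker = 'A';
      for (int i = 0; i < regionCount; i++) {
          text.append(marker++);    // one unique anchor per region
          text.append("common\n");  // then the repetitive body
      }
      return text.toString();
  }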
Change-Id: I5129faea1e763c74739118ca20d86bd62e0deaef
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
For certain tiny input sequences, every DiffAlgorithm should produce
exactly the same results, as there should be no ambiguity. Package
these up in an abstract TestCase that algorithms can extend from in
order to perform basic validation of their implementation.
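A minimal sketch of the pattern, in the JUnit 3 style used at the
time (the diff() signature is an assumption):

  public abstract class AbstractDiffTestCase extends TestCase {
      protected abstract DiffAlgorithm algorithm();

      public void testEmptyInputs() {
          RawText a = new RawText(new byte[0]);
          RawText b = new RawText(new byte[0]);
          EditList e = algorithm().diff(RawTextComparator.DEFAULT, a, b);
          assertTrue("no edits for empty inputs", e.isEmpty());
      }
  }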
Since these tests are more complete than what we used to have for
the MyersDiff algorithm, throw away Johannes' tests and only use
this new package.
Change-Id: I9a044330887c849ad4c78aa5c7aa04c783c10252
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
As it turns out, every single diff algorithm we might try to
implement can benefit from using the SequenceComparator's native
concept of the simple reduceCommonStartEnd() step. For most inputs,
there can be a significant number of elements that can be removed
from the space the DiffAlgorithm needs to consider, which will
reduce the overall running time for the final solution.
Pool this logic inside of DiffAlgorithm itself as a default, but
permit a specific algorithm to override it when necessary.
Convert MyersDiff to use this reduction to reduce the space it
needs to search, making it perform slightly better on common inputs.
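A hedged sketch of the pooled default; coverEdit() and
diffNonCommon() are assumed helper names:

  public <S extends Sequence> EditList diff(
          SequenceComparator<? super S> cmp, S a, S b) {
      // Trim elements common to the start and end of both inputs,
      // then hand only the interior region to the real algorithm.
      Edit region = cmp.reduceCommonStartEnd(a, b, coverEdit(a, b));
      return diffNonCommon(cmp, a, b, region);
  }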
Change-Id: I14004d771117e4a4ab2a02cace8deaeda9814bc1
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
PatienceDiff always uses a HashedSequence, which promises to provide
constant time access for hash codes during the equals method and
aborts fast if the hash codes don't match. Therefore we don't need
to cache the hash codes inside of the index, saving us memory.
Change-Id: I80bf1e95094b7670e6c0acc26546364a1012d60e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Most diff implementations really want to use cached hash codes for
elements, rather than element equality, as they need to perform many
compares and unique hash codes for elements can really speed that
process up.
To make it easier to define element hash functions, move the caching
of hash codes into a wrapper sequence type, so that individual
sequence types like RawText don't need to do this themselves. This
has a nice property of also allowing the sequence to no longer care
about the specific SequenceComparator that is going to be used, and
permits the caching to only examine the middle region that isn't
common to the two inputs.
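A hedged sketch of the wrapper (the field layout is assumed):

  public final class HashedSequence<S extends Sequence> extends Sequence {
      final S base;       // the wrapped sequence
      final int[] hashes; // cached hash code for each element

      HashedSequence(S base, int[] hashes) {
          this.base = base;
          this.hashes = hashes;
      }

      @Override
      public int size() {
          return base.size();
      }
  }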
Change-Id: If8623556da9419117b07c5073e8bce39de02570e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This is a faster exact-match based form that tries to improve
performance for the common case of the header and trailer of
a text file not changing at all. After this fast path we fall
back to the slower superclass path, which uses equals(), so
that the whitespace-ignoring modes still work.
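The fast path is essentially a raw byte scan; a self-contained
sketch of the prefix half of the idea (the suffix is symmetric):

  static int commonPrefix(byte[] a, byte[] b) {
      final int n = Math.min(a.length, b.length);
      int i = 0;
      while (i < n && a[i] == b[i])
          i++; // exact byte match; no per-line bookkeeping
      return i;
  }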
Some simple performance testing showed a major improvement over the
older implementation for a common edit we see in JGit. The test
compared blob 29a89bc and 372a978, which is the ObjectDirectory.java
file difference in commit 41dd9ed1c0.
The two text files are approximately 22 KiB in size.
DEFAULT old 203900 ns
DEFAULT new 100400 ns
This new version is 2x faster for the DEFAULT comparator, which does
not treat space specially. This is because we can now examine a
larger swath of text with fewer instructions per byte compared. The
older algorithm had to stop at each line break and recompute how to
examine the next line, while the new algorithm only stops when the
first difference is found.
WS_IGNORE_ALL old 298500 ns
WS_IGNORE_ALL new 63300 ns
It's 4.7x faster for the whitespace ignore comparator, as the common
header and footer do not have a whitespace difference. Avoiding the
special case handling for whitespace on each byte considered saves a
lot of time.
Since most edits to source code (and other text-like files) appear in
the interior of the file, fast elimination of the common header/footer
means faster diff throughput. In the less common case of an actual
header or footer edit, the common header/footer elimination stops
rather quickly either way, so there is very little downside to the
optimization applied here.
Change-Id: I1d501b4c3ff80ed086b20bf12faf51ae62167db7
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
DiffAlgorithm implementations may find it useful to construct an Edit
and use that to later subsequence the two base sequences, so define
two new utility methods a() and b() to construct the A and B ranges.
Once a subsequence has had Edits created for it the indexes are
within the space of the subsequence. These must be shifted back to
the original base sequence's indexes. Define toBase() as a utility
method to perform that shifting work in-place, so DiffAlgorithm
implementations have an efficient way to convert back to the caller's
original space.
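A hedged sketch of the three utilities (their placement and the
package-private Edit fields are assumptions):

  static <S extends Sequence> Subsequence<S> a(S base, Edit region) {
      return new Subsequence<S>(base, region.getBeginA(), region.getEndA());
  }

  static <S extends Sequence> Subsequence<S> b(S base, Edit region) {
      return new Subsequence<S>(base, region.getBeginB(), region.getEndB());
  }

  static void toBase(EditList edits, int aOff, int bOff) {
      for (Edit e : edits) {
          // shift back to the base sequences' coordinates, in place
          e.beginA += aOff;  e.endA += aOff;
          e.beginB += bOff;  e.endB += bOff;
      }
  }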
Change-Id: I8d788e4d158b9f466fa9cb4a40865fb806376aee
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Missed updating the version from 0.9.4 to 0.10.0.
Change-Id: I50e4955141ef9dd0e1293f8c8c2c0dc7c3c7fd3f
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Adds API for performing git fetch operations.
Change-Id: Idd95664fd4e3bca03211e4ffda3e354849f92a35
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
If a thin pack has a large delta we need to be able to open
its cached copy from the loose object directory through the
CachedObjectDatabase handle. Unfortunately that did not support the
openObject2 method, which the LargePackedDeltaObject used directly
to bypass looking at the pack files.
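Presumably the fix is a straight delegation to the wrapped database;
a hedged sketch (the exact signature of that era's API is assumed):

  @Override
  ObjectLoader openObject2(WindowCursor curs, String objectName,
          AnyObjectId objectId) throws IOException {
      return wrapped.openObject2(curs, objectName, objectId);
  }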
Bug: 324868
Change-Id: I1d5886a6c3254c6dea2852d50b8614c31a93e615
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Method test006_readCaseInsensitive in TestConfig already does the
same thing, and doesn't require an OS-specific test for the value being
asserted.
This is additionally a fast fix for the failing JUnit test after
change 3fe5276.
Change-Id: I96d2794dbc7db55bdd0fbfcf675aabb15cc8419f
Signed-off-by: Stefan Lay <stefan.lay@sap.com>
* stable-0.9:
Qualify post-0.9.3 builds
JGit 0.9.3
clone: Correct formatting of init message
Fix cloning of repositories with big objects
Qualify post-0.9.1 builds
JGit 0.9.1
Fix PlotCommitList to set lanes on child-less commits
Allow our command line commands like Glog, Log to accept the
--all option to walk all known refs.
Change-Id: I6a0c84fc19e7fa80ddaa2315851c58ba89d43ca5
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
We used the wrong format method, which led to this confusing output:
$ ./jgit clone git://...
Initialized empty Git repository in {0}
remote: Counting objects: 201783
...
remote: {0}
We need to use MessageFormat.format() as the message translations
use {0} syntax and not %s syntax for placeholders.
Change-Id: I8bf0fd3f7dbecf9edf47419c46aed0493d405f9e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When running IndexPack we use a CachedObjectDirectory, which
knows what objects are loose and tries to avoid stat(2) calls for
objects that do not exist in the repository, as stat(2) on Win32
is very slow.
However large delta objects found in a pack file are expanded into
a loose object, in order to avoid costly delta chain processing
when that object is used as a base for another delta.
If this expand occurs while working with the CachedObjectDirectory,
we need to update the cached directory data to include this new
object, otherwise it won't be available when we try to open it
during the object verify phase.
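The update itself can be a one-liner; a hedged sketch with invented
names:

  void expandedLooseObject(AnyObjectId id) {
      unpackedObjects.add(id.copy()); // keep the cached view in sync
  }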
Bug: 324868
Change-Id: Idf0c76d4849d69aa415ead32e46a435622395d68
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When creating a new FileRepository, probe the capability of the
local filesystem and set core.filemode based on how it reacts.
We can't just rely on FS.supportsExecute() because a POSIX system
(which usually does support execute) might be storing the repository
on a partition that doesn't have execute support (e.g. plain FAT-32).
Creating a temporary file, setting both states, checking we get
the desired results will let us set the variable correctly on
all systems.
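A hedged sketch of the probe using the Java 6 File API (the real
implementation may differ in details):

  File probe = File.createTempFile("filemode", null, gitDir);
  try {
      boolean canSet = probe.setExecutable(true) && probe.canExecute();
      boolean canClear = probe.setExecutable(false) && !probe.canExecute();
      config.setBoolean("core", null, "filemode", canSet && canClear);
  } finally {
      probe.delete();
  }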
Change-Id: I551488ea8d352d2179c7b244f474d2e3d02567a2
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
In PlotCommitList.enter() commits are positioned on lanes for visual
presentation. This implementation was buggy: commits without
children (often the starting points for the RevWalk) are not positioned
on separate lanes.
The problem was that when handling commits with multiple children
(that's where branches fork out) it was not handled that some of the
children may not have been positioned on a lane yet. I fixed that and
added a number of tests which specifically test the layout of commits
on lanes.
Bug: 300282
Bug: 320263
Change-Id: I267b97ecccb5251cec54cec90207e075ab50503e
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
A diff algorithm may find this type useful if it wants to delegate a
particular range of elements to another algorithm, without changing
the underlying sequence types.
Change-Id: I4544467781233e21ac8b35081304b2bad7db00f6
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This makes it easier to parametrize DiffFormatter with a different
implementation, as we later plan to add PatienceDiff to JGit.
Change-Id: Id35ef478d5fa20fe10a1ba297f9436fd7adde9ce
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
git allows remotes to be relative paths, but the regex
validating URLs wouldn't accept anything starting with "..".
Other functionality works fine with these paths.
Bug: 311300
Change-Id: Ib74de0450a1c602b22884e19d994ce2f52634c77
Instead of spooling large delta bases into temporary files and then
immediately deleting them afterwards, spool the large delta out to
a normal loose object. Later any requests for that large delta can
be answered by reading from the loose object, which is much easier
to stream efficiently for readers.
Since the object is now duplicated, once in the pack as a delta and
again as a loose object, any future prune-packed will automatically
delete the loose object variant, releasing the wasted disk space.
As prune-packed is run automatically during either repack or gc, and
gc --auto triggers automatically based on the number of loose objects,
we get automatic cache management for free. Large objects that were
unpacked will be periodically cleared out, and will simply be restored
later if they are needed again.
After a short offline discussion with Junio Hamano today, we may want
to propose a change to prune-packed to hold onto larger loose objects
which also exist in pack files as deltas, if the loose object was
recently accessed or modified in the last 2 days.
Change-Id: I3668a3967c807010f48cd69f994dcbaaf582337c
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Recently created objects are usually what branches point to, and
are usually written out as loose objects. But due to the high cost
of asking the operating system if a file exists, these are the last
thing that ObjectDirectory examines when looking for an object by
its ObjectId.
Caching recently seen loose objects permits the opening code to
jump directly to the loose object, accelerating lookup for branch
heads that are accessed often.
To avoid exploding the cache it's limited to approximately 2048
entries. When more ids are added, the table is simply cleared
and reset in size.
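A hedged sketch of the clear-and-reset policy (the data structure is
assumed; the real table is likely more compact):

  private static final int MAX_ENTRIES = 2048;
  private volatile Set<ObjectId> recentLoose = newSet();

  void rememberLoose(AnyObjectId id) {
      Set<ObjectId> s = recentLoose;
      if (s.size() >= MAX_ENTRIES)
          recentLoose = s = newSet(); // drop everything, start over
      s.add(id.copy());
  }

  private static Set<ObjectId> newSet() {
      return Collections.synchronizedSet(new HashSet<ObjectId>());
  }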
Change-Id: I18f483217412b102f754ffd496c87061d592e535
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This class is used only to cache the unpacked form of an object that
was used as a base for another object. The theory goes that if an
object is used as a delta base for A, it will probably also be a
delta base for B, C, D, E, etc. and therefore having an unpacked copy
of it on hand will make delta resolution for the others very fast.
However since objects are usually only accessed once, we don't want
to cache everything we unpack, just things that we are likely to
need again. The only things we need again are the delta bases.
Hence, it's a delta base cache.
This gets us the class name UnpackedObjectCache back, so we can
use it to actually create a cache of unpacked object information.
Change-Id: I121f356cf4eca7b80126497264eac22bd5825a1d
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The core.autocrlf variable can take on three values: false, true,
and input. Parsing it as a boolean is wrong, we instead need to
parse a tri-state enumeration.
Add support for parsing and setting Java enum values from and to
the text-based configuration file, and use that to handle the
autocrlf variable.
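A hedged sketch of the tri-state; the enum name follows what JGit
later exposed as CoreConfig.AutoCRLF, and getEnum's exact shape is
an assumption:

  enum AutoCRLF { FALSE, TRUE, INPUT }

  AutoCRLF mode = config.getEnum("core", null, "autocrlf", AutoCRLF.FALSE);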
Bug: 301775
Change-Id: I81b9e33087a33d2ef2eac89ba93b9e83b7ecc223
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Instead of making the sequence itself responsible for the equivalence
function, use an external function that is supplied by the caller.
This cleans up the code because we now say cmp.equals(a, ai, b, bi)
instead of a.equals(ai, b, bi).
This refactoring also removes the odd concept of creating different
types of sequences to have different behaviors for whitespace
ignoring. Instead DiffComparator now supports singleton functions
that apply a particular equivalence algorithm to a type of sequence.
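The new call shape, sketched (modifiers assumed):

  public abstract class SequenceComparator<S extends Sequence> {
      // before: a.equals(ai, b, bi)   after: cmp.equals(a, ai, b, bi)
      public abstract boolean equals(S a, int ai, S b, int bi);
      public abstract int hash(S seq, int ptr);
  }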
Change-Id: I559f494d81cdc6f06bfb4208f60780c0ae251df9
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When checkReferencedIsReachable is set in ReceivePack we are trying
to prove that the push client is permitted to access an object that
it did not send to us, but that the received objects link to either
via a link inside of an object (e.g. commit parent pointer or tree
member) or by a delta base reference.
To do this check we are making a list of every potential delta base,
and then ensuring that every delta base used appears on this list.
If a delta base does not appear on this list, we abort with an error,
letting the client know we are missing a particular object.
Preventing spurious errors about missing delta base objects requires
us to use the exact same list of potential delta bases as the remote
push client used. This means we must use TOPO ordering, and we
need to enable BOUNDARY sorting so that ObjectWalk will correctly
include any trees found during the enumeration back to the common
merge base between the interesting and uninteresting heads.
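In code, the walk setup described above is roughly this sketch:

  ObjectWalk ow = new ObjectWalk(repo);
  ow.sort(RevSort.TOPO);
  ow.sort(RevSort.BOUNDARY, true); // add to, don't replace, TOPO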
To ensure JGit's own push client matches this same potential delta
base list, we need to undo 60aae90d4d ("Disable topological
sorting in PackWriter") and switch back to using the conventional
TOPO ordering for commits in a pack file. This ensures that our
own push client will use the same potential base object list as
checkReferencedIsReachable uses on the receiving side.
Change-Id: I14d0a326deb62a43f987b375cfe519711031e172
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Since we are only checking the links between objects we don't need
to hold onto commit messages after their headers have been parsed
by the walker. Dropping them saves a bit of memory, which is always
good when accepting huge pack files.
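The walk API has a knob for exactly this; a one-line sketch:

  ow.setRetainBody(false); // parse headers, discard message bodies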
Change-Id: I378920409b6acf04a35cdf24f81567b1ce030e36
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If the copy instruction was larger than the input buffer given to us,
we copied the wrong part of the base stream during the next read().
This occurred on really big binary files where a copy instruction
of 64k wasn't unreasonable, but the caller's buffer was only 8192
bytes long. We copied the first 8192 bytes correctly, but then
reseeked the base stream back to the start of the copy region on
the second read of 8192 bytes. Instead of a sequence like ABCD
being read into the caller, we read AAAA.
Change-Id: I240a3f722a3eda1ce8ef5db93b380e3bceb1e201
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
As ObjectStreams are supposed to be buffered, most implementors will
be wrapping their underlying stream inside of a BufferedInputStream
in order to satisfy this requirement. Because developers are by
nature lazy, they will use the default buffer size rather than
specify their own.
The OpenJDK JRE implementations use 8192 as the default buffer
size, and when the higher level reader uses the same buffer size
the buffers "stack" nicely by avoiding a copy to the internal
buffer array. As OpenJDK is a popular virtual machine, we should
try to benefit from this nice stacking property during copyTo().
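A hedged sketch of a copyTo() built around that size (the
stream-opening method name is an assumption):

  private static final int BUFFER_SIZE = 8192; // BufferedInputStream's default

  public void copyTo(OutputStream out) throws IOException {
      final byte[] buf = new byte[BUFFER_SIZE];
      final InputStream in = openStream();
      try {
          int n;
          while ((n = in.read(buf)) > 0)
              out.write(buf, 0, n);
      } finally {
          in.close();
      }
  }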
Change-Id: I69d53f273b870b841ced2be2e9debdfd987d98f4
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>