Setting the array elements to -1 is more expensive than relying on
the allocator to zero the array for us first. Shifting the code to
always add 1 to the size (so an empty file is actually 1 byte long)
allows us to detect an unloaded size by comparing to 0, thus saving
the array fill calls.
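As an illustrative sketch of the trick (hypothetical names, not the
actual JGit code), the zero-filled array can double as the "not loaded
yet" marker once every stored size is biased by +1:

    import java.io.File;

    // Hypothetical size cache: the allocator zero-fills the array for free,
    // and a stored value of 0 means "size not loaded yet" because every
    // real size is stored biased by +1 (an empty file is recorded as 1).
    class SizeCache {
        private final File[] files;
        private final long[] sizes;          // no Arrays.fill(sizes, -1) needed

        SizeCache(File[] files) {
            this.files = files;
            this.sizes = new long[files.length];
        }

        long sizeOf(int i) {
            if (sizes[i] == 0)               // 0 == not loaded yet
                sizes[i] = files[i].length() + 1;
            return sizes[i] - 1;             // undo the +1 bias on read
        }
    }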
Change-Id: Iad859e910655675b53ba70de8e6fceaef7cfcdd1
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If the only file added is really small, and all of the deleted
files are really big, none of the permutations will match up due
to the sizes being too far apart to fit the current rename score.
Avoid allocating the really big deleted SimilarityIndex by deferring
its construction until at least one add along that row has a
reasonable chance of matching it.
This avoids expending a lot of CPU time looking at big deleted
binary files when a small modified text file was broken due to a
high percentage of changed lines.
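A sketch of the deferral idea, using hypothetical names rather than the
real SimilarityRenameDetector internals: even a perfect content match
cannot score higher than the ratio of the smaller to the larger size,
so the expensive index is not worth building until some added file
clears that bound.

    // Hypothetical pre-filter: only build the deleted file's SimilarityIndex
    // once at least one added file is close enough in size to possibly reach
    // the configured minimum rename score (a percentage, e.g. 60).
    static boolean worthComparing(long addedSize, long deletedSize, int minScore) {
        long max = Math.max(addedSize, deletedSize);
        long min = Math.min(addedSize, deletedSize);
        if (max == 0)
            return true;                     // two empty files trivially match
        return min * 100 / max >= minScore;  // upper bound on any possible score
    }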
Change-Id: I11ae37edb80a7be1eef8cc01d79412017c2fc075
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If a file fails to index the first time the loop encounters it, the
file is likely to fail to index again on the next row. Rather than
wasting a huge amount of CPU to index it again and fail, remember
which destination files failed to index and skip over them on each
subsequent row.
Because this condition is very unlikely, avoid allocating the BitSet
until it's actually needed. This keeps the memory usage unaffected
for the common case.
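A minimal sketch of the lazily allocated marker set (hypothetical
names; the real field lives in the rename detection code):

    import java.util.BitSet;

    class DestinationSkipList {
        private BitSet tooLargeToIndex;      // stays null in the common case

        boolean isSkipped(int dstIndex) {
            return tooLargeToIndex != null && tooLargeToIndex.get(dstIndex);
        }

        void markFailed(int dstIndex) {
            if (tooLargeToIndex == null)     // allocate only on the first failure
                tooLargeToIndex = new BitSet();
            tooLargeToIndex.set(dstIndex);
        }
    }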
Change-Id: I93509b28b61a9bba8f681a7b4df4c6127bca2a09
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The counter portion of each pair is only 32 bits wide, but is part
of a larger 64 bit integer. If the file size was larger than 4 GB
the counter could overflow and impact the key, changing the hash,
and later resulting in an incorrect similarity score.
Guard against this overflow condition by capping the count for each
record at 2^32-1. If any record contains more than that many bytes
the table aborts hashing and throws TableFullException.
This permits the index to scan and work on files that exceed 4 GB
in size, but only if the file contains more than one unique block.
The index throws TableFullException on a 4 GB file containing all
zeros, but should succeed on a 6 GB file containing unique lines.
The index now uses a 64 bit accumulator during the common scoring
algorithm, possibly resulting in slower summations. However, this
index is already heavily dependent upon 64 bit integer operations
being efficient, so increasing from 32 bits to 64 bits allows us
to correctly handle 6 GB files.
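A sketch of the overflow guard on a packed record (illustrative only;
the unchecked exception stands in for the index's TableFullException):

    // Each record packs a 32 bit key in the upper half of a long and a
    // 32 bit byte count in the lower half.
    static final long MAX_COUNT = (1L << 32) - 1;

    static long addBytes(long record, long bytes) {
        long count = (record & MAX_COUNT) + bytes;
        if (count > MAX_COUNT)
            throw new IllegalStateException("table full");  // cap at 2^32-1
        return (record & ~MAX_COUNT) | count;               // key bits untouched
    }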
Change-Id: I14e6dbc88d54ead19336a4c0c25eae18e73e6ec2
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Files bigger than 8 MB (2^23 bytes) tended to overflow the internal
hashtable, as the table was capped in size to 2^17 records. If a
file contained 2^17 unique data blocks/lines, the table insertion
got stuck in an infinite loop as the table couldn't grow, and there
was no open slot for the new item.
Remove the artificial 2^17 table limit and instead allow the table
to grow to be as big as 2^30. With a 64 byte block size, this
permits hashing inputs as large as 64 GB.
If the table reaches 2^30 (or cannot be allocated) hashing is
aborted. RenameDetector no longer tries to break a modify file pair,
and it does not try to match the file for rename or copy detection.
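A sketch of the growth guard (hypothetical helper): the table doubles
until the 2^30 ceiling, and past that the caller abandons hashing
instead of spinning forever looking for a free slot.

    static final int MAX_TABLE_SIZE = 1 << 30;

    // Returns the next (doubled) table size, or throws when the table cannot
    // legally grow any further; the caller then aborts the similarity index.
    static int grow(int currentSize) {
        if (currentSize >= MAX_TABLE_SIZE)
            throw new IllegalStateException("table full");
        return currentSize << 1;
    }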
Change-Id: Ibb4d756844f4667e181e24a34a468dc3655863ac
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This comment was wrong, due to a copy-and-paste error. Here the
code is looking at records of dst that do not exist in src, and
is skipping past them to find another match.
Change-Id: I07c1fba7dee093a1eeffcf7e0c7ec85446777ffb
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
In order to safely edit a notes tree, NoteMap needs to retain any
non-note tree entries it read from the source tree and put them
back out into the modified tree when it commits a new version of
the note branch.
Remember any tree entries that didn't look like a note during
the parsing of the tree, so they can be put into a TreeFormatter
later when the tree is written back to the repository.
Change-Id: Ia284af7e7866da35db35374c6c5869f00c857944
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Instead of reading a note tree recursively up front when the NoteMap
is loaded, read only the root tree and load subtrees on demand when
they are accessed by the application. This lowers the latency of
reading a note for the recent commits on a branch, as only the paths
that are needed get read.
Given a 2/38 style fanout, the tree will fully load when 256 objects
have been accessed by the application. But unlike the prior version
of NoteMap, the NoteMap will load faster and answer lookups sooner,
as the loading time for all 256 subtrees is spread out across the
get() requests.
Given a 2/2/36 style fanout, the tree won't need to fully load until
about 65,536 objects are accessed.
To simplify the implementation we only support the flat layout (all
notes in the top level tree), or a 2/38, 2/2/36, 2/2/2/34, through
2/.../2 style fanout. Unlike C Git we don't support reading the old
experimental 4/36 fanout. This is sufficient because C Git won't
create the 4/36 style fanout when creating or updating a notes tree,
and there really aren't any in the wild today.
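For illustration (not the NoteMap internals), the supported layouts
map an object name onto a path by peeling off 2-hex-digit directory
levels:

    // fanoutPath(hex, 0) -> flat layout, fanoutPath(hex, 1) -> 2/38,
    // fanoutPath(hex, 2) -> 2/2/36, and so on.
    static String fanoutPath(String hexObjectName, int fanoutLevels) {
        StringBuilder path = new StringBuilder();
        int i = 0;
        for (int level = 0; level < fanoutLevels; level++, i += 2)
            path.append(hexObjectName, i, i + 2).append('/');
        return path.append(hexObjectName.substring(i)).toString();
    }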
Change-Id: I6099b35916a8404762f31e9c11f632e43e0c1bfd
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The NoteMap makes it easy to read a small notes tree as created by
the `git notes` command in C Git. To make the initial implementation
simple a notes tree is read recursively into a map in memory.
This is reasonable if the application will need to access all notes,
or if there are fewer than 256 notes in the tree, but doesn't behave
well when the number of notes exceeds 256 and the application
doesn't need to access all of them.
We can later add support for lazily loading different subpaths,
thus fixing the large note tree problem described above.
Currently the implementation only supports reading. Writing notes
is more complex because trees need to be expanded or collapsed at
the exact 256 entry cut-off in order to retain the same tree SHA-1
that C Git would use for the same content. It also needs to retain
non-note tree entries such as ".gitignore" or ".gitattributes" files
that might randomly appear within a notes tree. We can also add
writing support later.
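A usage sketch, assuming the NoteMap API as it stands today (reading
the commit at refs/notes/commits and resolving a note blob by the
annotated object's id):

    import org.eclipse.jgit.lib.ObjectId;
    import org.eclipse.jgit.lib.Repository;
    import org.eclipse.jgit.notes.NoteMap;
    import org.eclipse.jgit.revwalk.RevCommit;
    import org.eclipse.jgit.revwalk.RevWalk;

    // Returns the blob id of the note attached to the given object, or null.
    static ObjectId lookupNote(Repository repo, ObjectId annotated) throws Exception {
        try (RevWalk rw = new RevWalk(repo)) {
            RevCommit notesCommit = rw.parseCommit(repo.resolve("refs/notes/commits"));
            NoteMap notes = NoteMap.read(rw.getObjectReader(), notesCommit);
            return notes.get(annotated);
        }
    }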
Change-Id: I93704bd84ebf650d51de34da3f1577ef0f7a9144
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
When setting up an SSH connection, use the caller supplied
CredentialsProvider, if one has been given to the Transport
or was defined as the default.
The CredentialsProvider is re-wrapped as a JSch UserInfo,
allowing the connection to use this for user interactive
prompts. This gives a unified API for authentication on
any transport type.
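A simplified adapter sketch (not JGit's actual wrapper class): JSch
asks for the password through its UserInfo interface, and the adapter
forwards the prompt to the configured CredentialsProvider.

    import com.jcraft.jsch.UserInfo;
    import org.eclipse.jgit.transport.CredentialItem;
    import org.eclipse.jgit.transport.CredentialsProvider;
    import org.eclipse.jgit.transport.URIish;

    class ProviderUserInfo implements UserInfo {
        private final URIish uri;
        private final CredentialsProvider provider;
        private String password;

        ProviderUserInfo(URIish uri, CredentialsProvider provider) {
            this.uri = uri;
            this.provider = provider;
        }

        public boolean promptPassword(String msg) {
            CredentialItem.Password item = new CredentialItem.Password();
            if (!provider.get(uri, item))
                return false;
            password = new String(item.getValue());
            return true;
        }

        public String getPassword() { return password; }
        public String getPassphrase() { return null; }
        public boolean promptPassphrase(String msg) { return false; }
        public boolean promptYesNo(String msg) { return false; }  // reject unknown host keys
        public void showMessage(String msg) { /* ignored in this sketch */ }
    }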
Change-Id: Id3b4cf5bfd27a23207cdfb188bae3b78e71e02c0
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This permits applications to set their preferred credentials UI
implementation once, rather than needing to define it on every
single Transport instance they open.
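Usage sketch (the UsernamePasswordCredentialsProvider shown here is
just one possible implementation):

    import org.eclipse.jgit.transport.CredentialsProvider;
    import org.eclipse.jgit.transport.UsernamePasswordCredentialsProvider;

    // Register the provider once; Transports opened later fall back to it
    // when no per-instance provider has been set.
    static void installDefaultCredentials(String user, String password) {
        CredentialsProvider.setDefault(
                new UsernamePasswordCredentialsProvider(user, password));
    }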
Change-Id: I010550de1a6becab27f7aa5a9901df5a1c7e74bd
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This change is based on http://egit.eclipse.org/r/#change,1652
by David Green. The change adds the concept of a CredentialsProvider
which can be registered for git transports and which is
responsible for returning credential-related data like passwords and
usernames. Whenever the transport detects that an authentication
with certain credentials has to be done, it will ask the
CredentialsProvider for this data. Foreseen implementations of
such a provider may be an EGitCredentialsProvider (caching
credential data entered e.g. in the Clone wizard) or a NetRcProvider
(gathering data from the ~/.netrc file).
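A toy implementation sketch, assuming the CredentialsProvider API as
it exists in JGit today (a real provider would prompt the user or read
a store such as ~/.netrc):

    import org.eclipse.jgit.errors.UnsupportedCredentialItem;
    import org.eclipse.jgit.transport.CredentialItem;
    import org.eclipse.jgit.transport.CredentialsProvider;
    import org.eclipse.jgit.transport.URIish;

    class FixedCredentialsProvider extends CredentialsProvider {
        public boolean isInteractive() {
            return false;                    // never pops up a dialog
        }

        public boolean supports(CredentialItem... items) {
            for (CredentialItem item : items)
                if (!(item instanceof CredentialItem.Username)
                        && !(item instanceof CredentialItem.Password))
                    return false;
            return true;
        }

        public boolean get(URIish uri, CredentialItem... items)
                throws UnsupportedCredentialItem {
            for (CredentialItem item : items) {
                if (item instanceof CredentialItem.Username)
                    ((CredentialItem.Username) item).setValue("user");
                else if (item instanceof CredentialItem.Password)
                    ((CredentialItem.Password) item).setValue("secret".toCharArray());
                else
                    throw new UnsupportedCredentialItem(uri, item.getPromptText());
            }
            return true;
        }
    }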
Bug: 296201
Change-Id: Ibe13e546b45eed3e193c09ecb414bbec2971d362
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Stefan Lay <stefan.lay@sap.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: David Green <dgreen99@gmail.com>
The auth-scheme token (like "Basic" or "Digest") is not specified in a
case-sensitive way. RFC 2617 (http://tools.ietf.org/html/rfc2617) specifies
in section 1.2 the use of a "case-insensitive token to identify the
authentication scheme". Jetty, for example, uses "basic" as the token.
Change-Id: I635a94eb0a741abcb3e68195da6913753bdbd889
Signed-off-by: Stefan Lay <stefan.lay@sap.com>
The ObjectId (for a ref) can be easily reformatted into a temporary
byte[] and then passed off to write(byte[]), removing the duplicated
code that existed in both write methods.
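A sketch of the shared path (hypothetical helper, assuming the
AnyObjectId.copyTo(byte[], int) hex formatter):

    import org.eclipse.jgit.lib.Constants;
    import org.eclipse.jgit.lib.ObjectId;

    // Render the id as 40 hex digits plus '\n' into a temporary buffer,
    // then hand it to the existing write(byte[]) method.
    static byte[] encodeRef(ObjectId id) {
        byte[] buf = new byte[Constants.OBJECT_ID_STRING_LENGTH + 1];
        id.copyTo(buf, 0);
        buf[Constants.OBJECT_ID_STRING_LENGTH] = '\n';
        return buf;
    }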
Change-Id: I09740658e070d5f22682333a2e0d325fd1c4a6cb
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We stopped handling URIs such as "example.com:/some/p ath", because
this was confused with the Windows absolute path syntax of "c:/path".
Support absolute style scp URIs again, but only when the host name
is more than 2 characters long.
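An illustrative sketch of the disambiguation rule (not the actual
URIish parser):

    // "c:/path" is a Windows drive letter; "example.com:/some/path" is an
    // scp-style URI. Only treat the text before ':' as a host name when it
    // is more than 2 characters long.
    static boolean looksLikeScpWithAbsolutePath(String s) {
        int colon = s.indexOf(':');
        return colon > 2
                && colon + 1 < s.length()
                && s.charAt(colon + 1) == '/';
    }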
Change-Id: I9ab049bc9aad2d8d42a78c7ab34fa317a28efc1a
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This reverts commit 1e510ec20e.
Instead, work around the warning by constructing our constant
through a StringBuilder.
Change-Id: If139509e769d649609c62eff359ebaea5dd286b2
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Matthias Sohn <matthias.sohn@sap.com>
CC: Chris Aniszczyk <caniszczyk@gmail.com>
IndexDiff was extended to detect files which are both removed from the
index and untracked. Before this change these files were only added
to the removed collection.
Change-Id: I971d8261d2e8932039fce462b59c12e143f79f90
Signed-off-by: Jens Baumgart <jens.baumgart@sap.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The code analyzer can't know that passing a value known to be null is
not a problem. Hence it is better to pass null explicitly instead of
passing parameters that are known to be null.
Change-Id: I8db6f8014de6c00dd95974d60f61ecc66191e6d4
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
There was a bug in ResolveMerger which is one reason for
bug 328841. If a merge was failing because of conflicts,
deletions were not handled correctly. Files which have
to be deleted (because there was a non-conflicting deletion
coming in from THEIRS) were not deleted. In the
non-conflicting case we also forgot to delete the file, but
since we explicitly check out at the end in that case, these
files get deleted during that checkout.
This is fixed by handling incoming deletions explicitly.
Bug: 328841
Change-Id: I7f4c94ab54138e1b2f3fcdf34fb803d68e209ad0
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
The automatically generated commit message of a merge should have the
same structure as in C Git for consistency (as per git fmt-merge-msg).
Before this change:
merging refs/heads/a into refs/heads/master
After:
Merge branch 'a'
Plurals, "into" and joining by "," and "and" also work.
Change-Id: I9658ce2817adc90d2df1060e8ac508d7bd0571cb
This mirrors the getByte() API in ObjectId and allows the caller to
modify a single byte, which is useful when updating it as part of a
loop walking through 0x00..0xff inside of a range of objects.
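A usage sketch of the intended loop, assuming
MutableObjectId.setByte(int, int) as the mirror of getByte(int):

    import org.eclipse.jgit.lib.MutableObjectId;

    // Walk every possible value of the first raw byte of an ObjectId,
    // e.g. to probe each bucket within a range of objects.
    static void walkFirstByte(MutableObjectId id) {
        for (int b = 0x00; b <= 0xff; b++) {
            id.setByte(0, b);        // update just one byte in place
            // ... examine objects whose names start with this byte ...
        }
    }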
Change-Id: I57fa8420011fe5ed5fc6bfeb26f87a02b3197dab
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Processing git notes requires random access to part of the raw data
of each ObjectId... which isn't easy because ObjectIds are stored
with an internal representation of 5 ints. Expose random access
to the individual data bytes through new methods, avoiding the
need to convert first to a byte[20].
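For example (a small sketch, assuming AnyObjectId.getByte(int)), the
2-digit fan-out directory of a note can be derived without
materializing a byte[20]:

    import org.eclipse.jgit.lib.ObjectId;

    // First raw data byte of the id, as the "00".."ff" fan-out directory name.
    static String fanoutDirectory(ObjectId id) {
        return String.format("%02x", id.getByte(0));
    }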
Change-Id: I99e64700b27fc0c95aa14ef8ad46a0e8832d4441
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Instead of hiding this logic inside of DirCacheTree and the legacy
Tree type, pull it into a common place where we can reuse it by
creating tree records in a buffer that can be passed directly into
the ObjectInserter. This allows us to avoid some copying, as the
inserter can be given the internal buffer of the formatter.
Because we trust these two callers to feed us records in the proper
order, without '/' in the names, and without duplicate names in the
same tree, we don't do any validation inside of the formatter itself.
To protect themselves from making ordering errors, developers should
continue to use DirCache to process edits to source code trees.
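A usage sketch, assuming the TreeFormatter API as it exists today;
note that the caller is responsible for appending entries in canonical
git order:

    import java.io.IOException;
    import org.eclipse.jgit.lib.FileMode;
    import org.eclipse.jgit.lib.ObjectId;
    import org.eclipse.jgit.lib.ObjectInserter;
    import org.eclipse.jgit.lib.TreeFormatter;

    // Build a two-entry tree and insert it; no validation is performed, so
    // the names must already be ordered and free of '/' and duplicates.
    static ObjectId writeTree(ObjectInserter ins, ObjectId blobA, ObjectId blobB)
            throws IOException {
        TreeFormatter tree = new TreeFormatter();
        tree.append("a.txt", FileMode.REGULAR_FILE, blobA);
        tree.append("b.txt", FileMode.REGULAR_FILE, blobB);
        return ins.insert(tree);
    }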
Change-Id: Idf7f10e736d4a44ccdf8afe060535d7b0554a92f
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The JGit merge algorithm behaved differently from C Git when
we had adjacent modifications. If line 9 was modified by
OURS and line 10 by THEIRS then C Git returns a
conflict while JGit saw these as independent
modifications. This change is not only there to achieve
compatibility; there were also some really wrong
merge results produced by JGit in the area of adjacent
modifications.
Change-Id: I8d77cb59e82638214e45b3cf9ce3a1f1e9b35c70
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
When adding a new method near the end of the sequence we want to
show the full method inserted, and not tear the prior method due
to the common trailing curly brace being consumed as part of the
common end region of the sequences.
Bug: 328895
Change-Id: I233bc40445fb5452863f5fb082bc3097433a8da6
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
HistogramDiff failed on cases where the initial element for the LCS
was actually very common (e.g. has 20 occurrences), and the first
element of the inserted region after the LCS was also common but
had fewer occurrences (e.g. 10), while the LCS also contained a
unique element (1 occurrence).
This happens often in Java source code. The initial element for
the LCS might be the empty line ("\n"), and the inserted but common
element might be "\t/**\n", with the LCS being a large span of
lines that contains unique method declarations. Even though "/**"
occurs less often than the empty line, it's not a better LCS if the
LCS we already have contains a unique element.
The logic in HistogramDiff would normally have worked fine, except I
tried to optimize scanning of B by making tryLongestCommonSequence
return the end of the region when there are matching elements
found in A. This allows us to skip over the current LCS region,
as it has already been examined, but caused us to fail to identify
an element that had a lower occurrence count within the region.
The solution used here is to trade space-for-time by keeping a
table of A positions to their occurrence counts. This allows the
matching logic to always use the smallest count for this region,
even if the smallest count doesn't appear on the initial element.
The new unit test testEdit_LcsContainsUnique() verifies this new
behavior works as expected.
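A much-simplified illustration of the space-for-time idea (not the
HistogramDiff data structures): precompute, for every position of A,
how often that element occurs, so a region's score can use the minimum
count found anywhere inside it.

    import java.util.HashMap;
    import java.util.Map;

    static int[] occurrenceCounts(String[] a) {
        Map<String, Integer> freq = new HashMap<>();
        for (String element : a)
            freq.merge(element, 1, Integer::sum);
        int[] counts = new int[a.length];
        for (int i = 0; i < a.length; i++)
            counts[i] = freq.get(a[i]);      // count for the element at position i
        return counts;
    }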
Bug: 328895
Change-Id: Id170783b891f645b6a8cf6f133c6682b8de40aaf
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Fixes the "Method ignores results of InputStream.read()" warning.
This is the only place where read() was used instead of readFully()
and the return value was not checked. So it was either an oversight
or should be documented. This change assumes it was an oversight.
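For contrast, a small sketch using the readFully helper (assuming
org.eclipse.jgit.util.IO): read(byte[]) may legally return fewer bytes
than asked for, while readFully either fills the buffer or throws.

    import java.io.IOException;
    import java.io.InputStream;
    import org.eclipse.jgit.util.IO;

    static byte[] readExactly(InputStream in, int len) throws IOException {
        byte[] buf = new byte[len];
        IO.readFully(in, buf, 0, len);       // throws EOFException on short read
        return buf;
    }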
Change-Id: I859404a7d80449c538a552427787f3e57d7c92b4
The value was accessed every time in the loop body with get(),
so use the more efficient entrySet().
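The pattern, in a generic sketch:

    import java.util.Map;

    // Iterating entrySet() avoids the extra map.get(key) lookup per
    // iteration that a keySet() loop would need.
    static String dump(Map<String, String> map) {
        StringBuilder out = new StringBuilder();
        for (Map.Entry<String, String> e : map.entrySet())
            out.append(e.getKey()).append('=').append(e.getValue()).append('\n');
        return out.toString();
    }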
Change-Id: I91d90cbd0b0d03ca4a3db986c58b8d80d80f40a4
This was already disabled in the Eclipse preferences for the project.
With this, Hudson should also ignore it.
Change-Id: I7a6b9a20451dc5ba9a61553248b5f4b6c6c7a78b
As described in Bug 328551 there was a bug that the merge algorithm
was not always reporting conflicts when the same line was deleted
and modified. This problem was introduced in commit
0c017188b4 when reported conflicts were
checked for common prefixes and suffixes.
This was fixed here by better determining whether after stripping
off common prefixes and suffixes from a conflicting region there
is still some conflicting part left.
I also added a unit test to test this situation.
Bug: 328551
Change-Id: Iec6c9055d00e5049938484a27ab98dda2577afc4
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
When creating a local branch based on another local branch, the
upstream configuration contains "." as the remote and the source
branch as "merge". The PullCommand should support this by skipping the
fetch step altogether and use the base branch to merge with.
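A sketch of what the configuration looks like and how a caller could
recognize it (helper and branch name are hypothetical; Config is
JGit's config parser):

    import org.eclipse.jgit.lib.Config;

    // For a branch created from another local branch the config contains:
    //
    //   [branch "topic"]
    //       remote = .
    //       merge = refs/heads/master
    //
    static boolean hasLocalUpstream(Config cfg, String branchName) {
        return ".".equals(cfg.getString("branch", branchName, "remote"));
    }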
Change-Id: I260a1771aeeffca5b0161d1494fd63c672ecc2a6
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
AmbiguousObjectException contains an AbbreviatedObjectId and is
supposed to be serializable, so AbbreviatedObjectId should be
serializable as well.
It's probably not possible that these numbers are negative in the
algorithm, but it's cleaner this way and gets rid of three more
FindBugs warnings.
Change-Id: Ifbce4e2c787fb9a7cd309c605e8d86211ef8a352
Don't permit transient worker threads to access the underlying output
stream of a ProgressMonitor, as they might get marked as the stream's
writer thread. Instead proxy update events from the workers back onto
the application's real work thread. This ensures that the stream only
sees a single thread, and it's the thread that will remain alive for
the entire life cycle of the operation.
This fixes IOException("Write end dead") during local repository fetch
when threaded delta search is enabled. One of the transient delta
search threads became the designated writer for the pipe, and when it
terminated the reader end thought the writer was dead, even though the
main writer thread was still executing in PackWriter.
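An illustrative sketch of the proxy pattern (not the JGit
implementation): workers enqueue updates, and only the application's
long-lived thread drains them into the real ProgressMonitor.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import org.eclipse.jgit.lib.ProgressMonitor;

    class ProgressProxy {
        private final BlockingQueue<Integer> updates = new LinkedBlockingQueue<>();

        // Safe to call from transient worker threads.
        void update(int completed) {
            updates.add(completed);
        }

        // Called only from the application's work thread, which therefore
        // remains the sole thread ever touching the monitor's stream.
        void pumpTo(ProgressMonitor pm) {
            Integer n;
            while ((n = updates.poll()) != null)
                pm.update(n);
        }
    }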
Bug: 326557
Change-Id: I01d1b20a3d7be1c0b480c7fb5c9773c161fe5c15
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
RefDirectory fires a RefsChangedEvent when it detects that one
ref changed (new, modified, deleted). But there was a potential
for wrong events being fired, leading to an endless loop in EGit.
The problem is that when calling getRefs(ALL) we don't want to report
additional refs, and because of that we remove the additional refs from
the list of "refs reported upwards last time". We then fire a
RefsChangedEvent because we think that the special refs are no
longer there.
I fixed this by removing eventing for the additional refs. Another
alternative would be to always scan also for additional refs and
put them in the list of refs. But getRefs(ALL) would then remove
the additional refs again. I didn't do that for performance reasons
and also because I am not sure whether we want eventing for
additional refs.
Change-Id: Icb9398b55a8c6bbf03e38f6670feb67754ce91e0
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
When checking out a tree, files that are identical to the file in
the current index and working directory don't need to be updated.
Change-Id: I9e025a53facd42410796eae821baaeff684a25c5
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
IndexDiff now allows setting an additional filter. This can be used
e.g. for restricting the tree walk to a given set of files.
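A usage sketch, assuming the IndexDiff and PathFilterGroup APIs as
they exist today:

    import java.io.IOException;
    import java.util.Arrays;
    import org.eclipse.jgit.lib.Constants;
    import org.eclipse.jgit.lib.IndexDiff;
    import org.eclipse.jgit.lib.Repository;
    import org.eclipse.jgit.treewalk.FileTreeIterator;
    import org.eclipse.jgit.treewalk.filter.PathFilterGroup;

    // Limit the diff computation to the given paths.
    static IndexDiff diffFor(Repository repo, String... paths) throws IOException {
        IndexDiff diff = new IndexDiff(repo, Constants.HEAD, new FileTreeIterator(repo));
        diff.setFilter(PathFilterGroup.createFromStrings(Arrays.asList(paths)));
        diff.diff();
        return diff;
    }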
Change-Id: I642de17e74b997fa0c5878c90631f6640ed70bdd
Signed-off-by: Jens Baumgart <jens.baumgart@sap.com>
The RefDirectory class was not returning FETCH_HEAD and
MERGE_HEAD when trying to get all refs via getRefs(RefDatabase.ALL).
This fix adds constants for FETCH_HEAD and ORIG_HEAD and adds a
new getter getAdditionalRefs() to get these additional refs.
To be compatible with C Git, the getRefs(ALL) method will not return
FETCH_HEAD, MERGE_HEAD and ORIG_HEAD.
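A usage sketch against today's RefDatabase API:

    import java.io.IOException;
    import java.util.List;
    import org.eclipse.jgit.lib.Ref;
    import org.eclipse.jgit.lib.Repository;

    // FETCH_HEAD, ORIG_HEAD and similar refs are reported here, while
    // getRefs(RefDatabase.ALL) deliberately leaves them out.
    static List<Ref> additionalRefs(Repository repo) throws IOException {
        return repo.getRefDatabase().getAdditionalRefs();
    }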
Change-Id: Ie114ca92e9d5e7d61d892f4413ade65acdc08c32
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>