An attempt to re-implement the poorly documented Git CLI behavior for
patterns with backslashes.
It looks like Git silently ignores all \ characters in ignore rules if
they are NOT covered by the 3 cases described in [1]:
{quote}
1) ... Put a backslash ("\") in front of the first hash for patterns
that begin with a hash.
...
2) Trailing spaces are ignored unless they are quoted with backslash
("\").
...
3) Put a backslash ("\") in front of the first "!" for patterns that
begin with a literal "!", for example, "\!important!.txt".
{quote}
Also undocumented is the fact that a backslash itself can be escaped by
a backslash. So the rule \h\e\l\l\o\.t\x\t matches hello.txt and a\\\\b
matches a\b in Git CLI.
Additionally, the glob parser [2] knows special meaning of backslash:
{quote}
One can remove the special meaning of '?', '*' and '[' by preceding
them by a backslash, or, in case this is part of a shell command
line, enclosing them in quotes. Between brackets these characters
stand for themselves. Thus, "[[?*\]" matches the four characters
'[', '?', '*' and '\'.
{quote}
[1] https://www.kernel.org/pub/software/scm/git/docs/gitignore.html
[2] http://man7.org/linux/man-pages/man7/glob.7.html
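A minimal illustration of the expected matching (hypothetical usage of
the FastIgnoreRule API, not part of this change):

  import org.eclipse.jgit.ignore.FastIgnoreRule;

  public class BackslashRules {
      public static void main(String[] args) {
          // Rule \h\e\l\l\o\.t\x\t: lone backslashes are silently dropped.
          FastIgnoreRule hello = new FastIgnoreRule("\\h\\e\\l\\l\\o\\.t\\x\\t");
          System.out.println(hello.isMatch("hello.txt", false)); // expected: true

          // Rule text a\\b (backslash escaped by backslash) matches path a\b.
          FastIgnoreRule escaped = new FastIgnoreRule("a\\\\b"); // Java literal for a\\b
          System.out.println(escaped.isMatch("a\\b", false)); // path a\b; expected: true
      }
  }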
Bug: 478065
Change-Id: I3dc973475d1943c5622103701fa8cb3ea0684e3e
Signed-off-by: Andrey Loskutov <loskutov@gmx.de>
Currently we fail to properly recognize a character group if the
pattern before the character group contains an opening bracket.
See comment from Sebastien Arod on https://git.eclipse.org/r/56678/
Change-Id: I70d3657a2a328818ea2bdc1409d18ecb3a85825b
Signed-off-by: Andrey Loskutov <loskutov@gmx.de>
The current DiffFormatter behavior regarding submodules (aka git links)
is incorrect. The "Subproject commit <sha1>" line appears as part of the
diff header, rather than as its own hunk.
--> From JGit implementation
diff --git a/plugins/cookbook-plugin b/plugins/cookbook-plugin
index b9d3ca8..ec6ed89 160000
--- a/plugins/cookbook-plugin
+++ b/plugins/cookbook-plugin
-Subproject commit b9d3ca8a65030071e28be19296ba867ab424fbbf
+Subproject commit ec6ed89c47ba7223f82d9cb512926a6c5081343e
--> From C Git 2.5.2
diff --git a/plugins/cookbook-plugin b/plugins/cookbook-plugin
index b9d3ca8..ec6ed89 160000
--- a/plugins/cookbook-plugin
+++ b/plugins/cookbook-plugin
@@ -1 +1 @@
-Subproject commit b9d3ca8a65030071e28be19296ba867ab424fbbf
+Subproject commit ec6ed89c47ba7223f82d9cb512926a6c5081343e
The current way of processing submodules results in no hunk header and
includes the contents of the hunk as part of the headers. To fix this, we
can't just have our writeGitLinkDiffText output the hunk header. We have
to change the flow so that the raw text gets parsed as a diff. The easiest
way to do this is to fake the RawText in the FormatResult when we have a
GITLINK.
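A sketch of how the faked RawText could look (the helper name is
illustrative, not the actual patch):

  import org.eclipse.jgit.diff.RawText;
  import org.eclipse.jgit.lib.Constants;
  import org.eclipse.jgit.lib.ObjectId;

  class GitLinkText {
      // One-line text "Subproject commit <sha1>\n", so the normal diff
      // machinery emits a proper "@@ -1 +1 @@" hunk for gitlinks.
      static RawText forGitLink(ObjectId id) {
          return new RawText(
                  Constants.encodeASCII("Subproject commit " + id.name() + "\n"));
      }
  }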
It should be noted that it seems possible for there to be a difference
between a GITLINK and a non-GITLINK, but I don't think this can happen in
practice, so I don't think we need to worry too much about it.
This patch also fixes up the test for GitLink headers, as the test was
for the old behavior. My setup has 3 other failing tests which may or
may not be the result of environmental changes. However, the same tests
fail without this commit, so I do not believe they are related.
Bug: 477759
Change-Id: If13f7b406904fad814416c93ed09ea47ef183337
Signed-off-by: Jacob Keller <jacob.keller@gmail.com>
Ignore rules should escape the $^(){}+| characters when they are
translated to regular expressions, because these characters must be
treated literally if they aren't part of a character group.
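A minimal sketch of the escaping intended (an illustrative helper, not
the actual parser code):

  class RegexMetaEscape {
      // Outside of a character group these regex metacharacters carry no
      // glob meaning, so escape them to match literally.
      static String escape(char c) {
          switch (c) {
          case '$': case '^': case '(': case ')':
          case '{': case '}': case '+': case '|':
              return "\\" + c;
          default:
              return String.valueOf(c);
          }
      }
  }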
Bug: 478055
Change-Id: Ic7276442d7f8f02594b85eae1ef697362e62d3bd
Signed-off-by: Andrey Loskutov <loskutov@gmx.de>
Fix the unit tests to not do boxing by using assertEquals(int, int)
instead of assertThat with a matcher.
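An illustrative example of the pattern being replaced (hypothetical
test, not from the patch):

  import static org.junit.Assert.assertEquals;
  import org.junit.Test;

  public class NoBoxingExampleTest {
      @Test
      public void countIsFour() {
          int actual = 2 + 2;
          // assertThat(actual, is(4)) would autobox both ints into Integers;
          // assertEquals with primitive arguments avoids the boxing.
          assertEquals(4, actual);
      }
  }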
Change-Id: I5412fe2f72c8ea0227b9ff3a3352ccb555e22231
Signed-off-by: Hugo Arès <hugo.ares@ericsson.com>
There was this warning because a private assertEquals(Object, Object)
method was shadowing the JUnit assertEquals methods.
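A sketch of the shadowing situation (illustrative):

  import static org.junit.Assert.assertEquals; // shadowed by the method below

  public class ShadowedAssert {
      // A method with this name declared in the class hides the statically
      // imported JUnit assertEquals, so calls in this class bind here.
      private static void assertEquals(Object expected, Object actual) {
          if (expected == null ? actual != null : !expected.equals(actual))
              throw new AssertionError(expected + " != " + actual);
      }
  }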
Change-Id: I889bfe1d8c48210d9a42147a523c4829c5b5d1e3
Signed-off-by: Hugo Arès <hugo.ares@ericsson.com>
If a client mistakenly tries to send a tag object as a shallow line,
JGit blindly assumes this is a commit and tries to parse the tag
buffer using the commit parser. This can cause an obtuse error like:
InvalidObjectIdException: Invalid id: t c0ff331234...
The "t" comes from the "object c0ff331234..." line of the tag tring
to be parsed as though it where the "tree" line of a commit.
Run any client-supplied shallow lines through the RevWalk to look up
the object types. Fail fast with a protocol exception if any of them
is not a commit.
Skip objects not known to this repository. This matches behavior
with git-core's upload-pack, which silently skips over any shallow
line object named by the client but not known by the server.
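A sketch of the kind of check described (illustrative, not the actual
patch):

  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.List;
  import java.util.Set;
  import org.eclipse.jgit.errors.MissingObjectException;
  import org.eclipse.jgit.errors.PackProtocolException;
  import org.eclipse.jgit.lib.Constants;
  import org.eclipse.jgit.lib.ObjectId;
  import org.eclipse.jgit.revwalk.RevObject;
  import org.eclipse.jgit.revwalk.RevWalk;

  class ShallowLineCheck {
      // Keep only shallow ids naming commits known to this repository;
      // fail fast if the client names a non-commit object.
      static List<ObjectId> verify(RevWalk rw, Set<ObjectId> shallow)
              throws IOException {
          List<ObjectId> ok = new ArrayList<>();
          for (ObjectId id : shallow) {
              try {
                  RevObject o = rw.parseAny(id);
                  if (o.getType() != Constants.OBJ_COMMIT)
                      throw new PackProtocolException(
                              "invalid shallow object " + id.name()
                                      + ", expected commit");
                  ok.add(id);
              } catch (MissingObjectException notOurs) {
                  // like git-core: silently skip ids we do not have
              }
          }
          return ok;
      }
  }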
Change-Id: Ic6c57a90a42813164ce65c2244705fc42e84d700
Properties.containsKey() is the correct call here; contains() was testing
if a value is present but the key is what was meant.
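A small illustration of the difference:

  import java.util.Properties;

  public class ContainsVsContainsKey {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.setProperty("user", "alice");
          System.out.println(props.contains("user"));    // false: checks values
          System.out.println(props.containsKey("user")); // true: checks keys
      }
  }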
Change-Id: Ice72c9f4388583e18cf8aca6e837cc4299fd07fd
When we have a URI that contains an empty path component (that is,
it only contains a "/") we want to fall back to the host as the
humanish name.
This change is according to the behavior of upstream git, which
falls back on the hostname when guessing directory names for
newly cloned repositories (see [1] for the discussion).
[1] http://article.gmane.org/gmane.comp.version-control.git/274669
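A small illustration of the fallback, assuming the guessing happens in
URIish.getHumanishName():

  import java.net.URISyntaxException;
  import org.eclipse.jgit.transport.URIish;

  public class HumanishFallback {
      public static void main(String[] args) throws URISyntaxException {
          // The path component is just "/", so with this change the host
          // is used as the humanish name.
          URIish u = new URIish("ssh://example.com/");
          System.out.println(u.getHumanishName()); // expected: "example.com"
      }
  }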
Change-Id: I44400c6ab72a2722d2155d53d63671bd867d6c44
Signed-off-by: Patrick Steinhardt <ps@pks.im>
- update org.apache.httpcomponents.httpcore to 4.3.3
- update org.apache.httpcomponents.httpclient to 4.3.6; 4.3.5 and later
are reported to fix vulnerability CVE-2014-3577
CQ: 9220
CQ: 9221
Bug: 470523
Change-Id: I024448c941e81f7c1dc1cc2394329df90e9b3048
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
If no refs match the input list and we are writing to a batch,
the returned new commit from write() will match the current commit.
Adding a command to the batch for this case is harmless as it will
succeed, but it's more straightforward to just skip adding a command
in this case.
Add tests for the combination of saving matching refs and saving to a
batch.
Change-Id: I6837389b08e6c80bc2d4c9e9c506d07293ea5fb2
This header was removed unintentionally from some bundles in
3a4a5a4e57. Restore it to ensure lazy
activation of bundles.
Change-Id: I1f841f978fb93278e3ec0533a01f1363510dd976
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
In Bug 476164 it was reported that EGit doesn't start when the platform
comes with jsch 0.1.51 while this version of EGit/JGit brings jsch
0.1.53. This could be caused by outdated uses-clauses. Hence recompute
them using PDE tooling.
Bug: 476164
Change-Id: I185ba097884ead9cd034eba842bd3bf34181a99b
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
It was reported in [1] that 197e3393a5 led
to a performance regression in a BFG benchmark. Analysis showed that
this is caused by the exists() method in FS_POSIX, now overriding the
default implementation in FS. The default implementation of FS.exists()
uses java.io.File.exists(), while the new implementation in FS_POSIX
uses java.nio.file.Files.exists() - by simply removing the override in
FS_POSIX, performance was restored.
Profiling showed that java.nio.file.Files.exists() is substantially
slower than java.io.File.exists(), to the point where the exists() call
doubles the average cost of a call to
ObjectDirectory.insertUnpackedObject() - which the BFG uses a lot,
because it's rewriting history. Average times measured on Ubuntu were:
java.io.File.exists() - 4 microseconds
java.nio.file.Files.exists() - 60 microseconds
The loose object exists test should be using java.io.File and not FS.
ObjectDirectory uses FS.resolve() to traverse symlinks to objects but
then once inside objects all 256 sharded directories should be real
directories, and the object files should be real files, not dangling
symlinks. java.io.File.exists() is sufficient here, and faster.
Change ObjectDirectory to use File.exists() once it has computed the
File handle.
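A minimal sketch of the cheaper check (the paths and helper are
illustrative, not the actual ObjectDirectory code):

  import java.io.File;

  class LooseObjectCheck {
      // Once the File handle for a loose object has been computed, a plain
      // java.io.File.exists() is sufficient and much cheaper than going
      // through java.nio.file.Files.exists().
      static boolean exists(File objectsDir, String sha1) {
          File fanOut = new File(objectsDir, sha1.substring(0, 2));
          return new File(fanOut, sha1.substring(2)).exists();
      }
  }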
This does mean JGit cannot run ObjectDirectory code on an abstract
virtual filesystem plugged into NIO2. If you really want to run JGit on
an esoteric non-standard filesystem like "in memory" you should look at
the DFS storage backend, which has fewer abstraction points to deal
with. Or write your own from scratch.
[1] https://dev.eclipse.org/mhonarc/lists/jgit-dev/msg02954.html
Change-Id: I74684dc3957ae1ca52a7097f83a6c420aa24310f
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
- add @since annotation for new API method
- silence non-externalized String warning
Change-Id: I864176ced64e9569e7f2cdf8f777720655bfc578
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
On a local filesystem the packed-refs file will be orphaned if it is
replaced by another client while the current client is reading the old
one. However, since NFS servers do not keep track of open files, such a
replacement will instead cause the old packed-refs file to be garbage
collected. A stale file handle exception
will be raised on NFS servers if the file is garbage collected (deleted)
on the server while it is being read. Since we no longer have access to
the old file in these cases, the previous code would just fail. However,
in these cases, reopening the file and rereading it will succeed (since
it will reopen the new replacement file). So retrying the read is a
viable strategy to deal with stale file handles on the packed-refs
file; implement such a strategy.
Since it is possible that the packed-refs file could be replaced again
while rereading it (multiple consecutive updates can easily occur with
ref deletions), loop on stale file handle exceptions, up to 5 extra
times, trying to read the packed-refs file again, until we either read
the new file, or find that the file no longer exists. The limit of 5 is
arbitrary, and provides a safe upper bound to prevent infinite loops
consuming resources in a potential unforeseen persistent error
condition.
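A sketch of the retry shape described above (illustrative; not the
actual packed-refs reading code, and the stale-handle check shown here
is simplified):

  import java.io.File;
  import java.io.IOException;
  import java.nio.charset.StandardCharsets;
  import java.nio.file.Files;
  import java.nio.file.NoSuchFileException;
  import java.util.Collections;
  import java.util.List;

  class PackedRefsRead {
      // Reread packed-refs when the NFS server reports a stale file handle
      // because the file was replaced while we were reading it; give up
      // after 5 extra attempts or when the file no longer exists.
      static List<String> readWithRetry(File packedRefs) throws IOException {
          int retries = 0;
          for (;;) {
              try {
                  return Files.readAllLines(packedRefs.toPath(),
                          StandardCharsets.UTF_8);
              } catch (NoSuchFileException gone) {
                  return Collections.emptyList();
              } catch (IOException e) {
                  if (!looksLikeStaleFileHandle(e) || ++retries > 5)
                      throw e;
                  // loop: reopen and read the replacement file
              }
          }
      }

      private static boolean looksLikeStaleFileHandle(IOException e) {
          String m = e.getMessage();
          return m != null
                  && m.toLowerCase().matches(".*stale .*file .*handle.*");
      }
  }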
Change-Id: I085c472bafa6e2f32f610a33ddc8368bb4ab1814
Signed-off-by: Martin Fick<mfick@codeaurora.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Add a public API to the FileUtils to determine if an IOException is a
stale NFS file handle exception. This will make it easier to detect
such errors, and interpret them consistently throughout the codebase.
This new API is a bit more lenient in its detection than the previous
detection, and should be able to detect some errors which previously
were not identified as stale file handle exceptions because they did
not have the word NFS in the error message. Adjust the packfile
handling code to use
this new API for detection.
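A small usage sketch, assuming the new helper is exposed as
FileUtils.isStaleFileHandle(IOException) (the exact name is not stated
above):

  import java.io.IOException;
  import org.eclipse.jgit.util.FileUtils;

  class StaleHandleUse {
      // Interpret the failure consistently: a stale NFS file handle means
      // the underlying file was replaced and a retry may succeed; other
      // I/O errors should be propagated.
      static boolean isRecoverable(IOException readError) {
          return FileUtils.isStaleFileHandle(readError);
      }
  }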
Change-Id: I21f80014546ba1afec7335890e5ae79e7f521412
Signed-off-by: Martin Fick<mfick@codeaurora.org>
Signaling the need to flush() only via the interrupted status of a
copying thread doesn't work reliably with jsch. The write() method of
com.jcraft.jsch.Session catches the InterruptedException in several
places. As a result StreamCopyThread can easily miss the interrupt if it
was interrupted during the dst.write() or dst.flush() call. When it
happens, StreamCopyThread will not send some data to the remote side and
will not get the response back, because remote side will wait for more
data from us.
The flushCount field incremented during flush() method guarantees we
don't miss flush() even if jsch catches InterruptedException in
dst.write() or dst.flush() calls.
Checking the flushCount after dst.write() is needed because dst.write()
can clear the interrupt status; in this case the next blocking src.read()
won't throw an exception and we will not call flush().
Flush is performed only after src.read() has blocked and thrown an
InterruptedIOException; this guarantees that we flush all the data
available in src so far (src.read() doesn't block while more is
available).
flushCount is reset to 0 only when there were no flush() calls since
the last blocked read, which means we have flushed all data available
in src. If there were flush() calls, the interrupt status is restored,
so the next blocked read will throw InterruptedIOException and we will
flush() again.
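A much-simplified sketch of the mechanism (not the actual
StreamCopyThread code; the real field is reset to 0 rather than
compared against a last-seen value, and whether an interrupt surfaces
as InterruptedIOException depends on the stream):

  import java.io.IOException;
  import java.io.InputStream;
  import java.io.InterruptedIOException;
  import java.io.OutputStream;
  import java.util.concurrent.atomic.AtomicInteger;

  class CopyLoopSketch extends Thread {
      private final InputStream src;
      private final OutputStream dst;
      private final AtomicInteger flushCount = new AtomicInteger();
      private int lastSeenFlushCount;

      CopyLoopSketch(InputStream src, OutputStream dst) {
          this.src = src;
          this.dst = dst;
      }

      void requestFlush() {
          // The counter survives even if jsch swallows the interrupt below.
          flushCount.incrementAndGet();
          interrupt();
      }

      @Override
      public void run() {
          byte[] buf = new byte[4096];
          try {
              for (;;) {
                  int n;
                  try {
                      // Blocks; waking up here means everything read so far
                      // has already been written, so it is safe to flush.
                      n = src.read(buf);
                  } catch (InterruptedIOException wokenForFlush) {
                      lastSeenFlushCount = flushCount.get();
                      dst.flush();
                      continue;
                  }
                  if (n < 0)
                      break;
                  dst.write(buf, 0, n);
                  // write() may have cleared the interrupt flag; if a flush
                  // was requested meanwhile, restore the interrupt so the
                  // next blocking read wakes up and performs the flush.
                  if (flushCount.get() != lastSeenFlushCount)
                      interrupt();
              }
          } catch (IOException err) {
              // terminate the copy on real I/O errors
          }
      }
  }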
Change-Id: I692b226edaff502f06235ec05da9052b5fe6478a
Signed-off-by: Dmitry Neverov <dmitry.neverov@gmail.com>
There should be no functional change; the logic was updated only to
simplify the code so that the compiler can understand what is going on.
Removed all @SuppressWarnings("null") annotations since they cannot be
used if the "org.eclipse.jdt.core.compiler.problem.potentialNullReference"
option is set to the "error" level.
Bug: 470647
Change-Id: Ie93c249fa46e792198d362e531d5cbabaf41fdc4
Signed-off-by: Andrey Loskutov <loskutov@gmx.de>
Update target platform to Orbit M20150818205559 for Mars in order to
update com.jcraft.jsch to 0.1.53. Also update pom.xml to use Mars target
platform profile by default.
CQ: 10045
Bug: 463580
Change-Id: I1bf151fbee7b00c7bd38cf1236c9bad50e3c64bd
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
- use NIO2's Files.move() to reimplement rename()
- provide a second method accepting CopyOptions which can be used to
request an atomic move.
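A minimal sketch of the two variants (method and parameter names are
illustrative, not the actual signatures):

  import java.io.IOException;
  import java.nio.file.CopyOption;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.nio.file.StandardCopyOption;

  class RenameSketch {
      // Plain rename, replacing any existing target.
      static void rename(Path src, Path dst) throws IOException {
          Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
      }

      // Variant accepting CopyOptions, e.g.
      // rename(src, dst, StandardCopyOption.ATOMIC_MOVE)
      static void rename(Path src, Path dst, CopyOption... options)
              throws IOException {
          Files.move(src, dst, options);
      }
  }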
Change-Id: Ibcf722978e65745218a1ccda45344ca295911659
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Root commits are commits with zero parents. If a commit has no
parents it is the first commit in the repository. In general the root
commits should be unique for any given project, as the first commit
will be created at a different time, by a different user with its own
message. These root commits can be used as a "fingerprint" to
identify disjoint histories.
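A small sketch of how such a fingerprint could be computed with a
RevWalk (illustrative usage, not part of this change; assumes the refs
ultimately point at commits):

  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.List;
  import org.eclipse.jgit.lib.ObjectId;
  import org.eclipse.jgit.lib.Ref;
  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.revwalk.RevCommit;
  import org.eclipse.jgit.revwalk.RevWalk;

  class RootCommits {
      // Collect the parentless commits reachable from all refs; their ids
      // can serve as a fingerprint of the history.
      static List<ObjectId> find(Repository repo) throws IOException {
          List<ObjectId> roots = new ArrayList<>();
          try (RevWalk rw = new RevWalk(repo)) {
              for (Ref ref : repo.getAllRefs().values()) {
                  if (ref.getObjectId() == null)
                      continue; // e.g. unborn HEAD
                  rw.markStart(rw.parseCommit(ref.getObjectId()));
              }
              for (RevCommit c : rw)
                  if (c.getParentCount() == 0)
                      roots.add(c.copy());
          }
          return roots;
      }
  }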
Change-Id: Id891dbc1f17c816cea404569578bb7635ff85cdb
The FS class and the FS_POSIX subclass assumed in the findHook()
method that every repository has a valid gitDir. But in tests using
in-memory repositories this is not true, and this method was generating
NPEs. Change the method to return null if no repository directory can be
determined.
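A sketch of the guard (illustrative; the real findHook signature and
hook path handling may differ):

  import java.io.File;
  import org.eclipse.jgit.lib.Repository;

  class FindHookGuard {
      // Return null when the repository has no git directory (e.g. an
      // in-memory test repository) instead of dereferencing it.
      static File findHook(Repository repo, String hookName) {
          File gitDir = repo.getDirectory();
          if (gitDir == null)
              return null;
          File hook = new File(new File(gitDir, "hooks"), hookName);
          return hook.isFile() ? hook : null;
      }
  }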
Change-Id: I38a4d36dc6452b5dacae3d0dbf562b569ca3c19b
DfsGarbageCollector asks PackWriter for the set of objects packed
after the bitmap index is written out. This is now null as it was
cleared to release memory. Instead use PackBitmapIndexBuilder to
build the set as it also has the objects.
Reduce memory in PackBitmapIndexBuilder by fully discarding the
ObjectToPack instances. This was the original intent of commit
4bb523475d ("PackWriter: shed memory while creating bitmaps")
but failed as the instances were still held live here.
Switch to BlockList instead of ObjectToPack[]. This allows the
JVM to allocate many smaller arrays instead of one contiguous
array with 5.2M reference pointers. In a tight heap the smaller
allocations are more feasible.
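An illustration of the allocation pattern referred to (BlockList here
is org.eclipse.jgit.util.BlockList; the element type is just an
example):

  import java.util.List;
  import org.eclipse.jgit.util.BlockList;

  class BlockListExample {
      static List<String> collect(int n) {
          // Backed by many small segment arrays instead of one contiguous
          // array of n reference slots, which is easier to satisfy in a
          // tight heap.
          List<String> list = new BlockList<>();
          for (int i = 0; i < n; i++)
              list.add("o" + i);
          return list;
      }
  }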
Reduce the initial EWAHCompressedBitmaps for the 4 type maps. On
average a typical repository is 30% commits, 30% trees and 30% blobs.
These bitmaps are typically very dense. PackWriter orders objects by
commit, tree, blob when writing the file so these should always be a
very dense run of 1s with some 0s before and after. So even the 1/3rd
allocation is likely too large, but the later trim() will reduce the
internal buffer anyway.
Change-Id: If0b80a31cb00894f1485ff8f53ef7ae5a759a046
After jgit moved to Java 7 there is no need for an extra
Java7BasicAttributes class. Also all fields of Attributes can be made
final now.
Change-Id: I0be6daf7758189b0eecc4e26294bd278ed8bf7a0
Signed-off-by: Andrey Loskutov <loskutov@gmx.de>
Once bitmap creation begins the internal maps required for packing are
no longer necessary. On a repository with 5.2M objects this can save
more than 438 MiB of memory by allowing the ObjectToPack instances to
get garbage collected away.
The downside is that the PackWriter cannot be used for any further operations
except to write the bitmap index. This is an acceptable trade-off as
in practice nobody uses the PackWriter after the bitmaps are built.
Change-Id: Ibfaf84b22fa0590896a398ff659a91fcf03d7128
For construction performance each new EWAHBitmap is allocated at
roughly the worst-case size the bitmap would need if all of the words must
be literal and no run length compression is available. In practice
this is far larger than is required, wasting heap memory while the
bitmaps are computed.
Trim down each bitmap to its minimum required size. This copies the
internal array to a new smaller array, allowing the GC to reclaim the
prior larger array for reuse.
A single bitmap of 5.2M bits is only 79 KiB of memory without this
trim call but 15,000 such bitmaps is 1.1 GiB. Trimming can help fit
a larger number of bitmaps during processing.
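A tiny illustration of the trim step (illustrative; EWAHCompressedBitmap
is the JavaEWAH class used by JGit, and trim() is assumed to shrink the
internal buffer as described above):

  import com.googlecode.javaewah.EWAHCompressedBitmap;

  class TrimExample {
      static EWAHCompressedBitmap build(int[] setBits) {
          EWAHCompressedBitmap b = new EWAHCompressedBitmap();
          for (int bit : setBits)
              b.set(bit); // bits must be set in increasing order
          b.trim(); // copy to a minimally sized internal array
          return b;
      }
  }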
Change-Id: I2bd19a786189db5b01c4c96f209b83de50e10c3b
The bitmap preparer only needs commit graph topology; it does not use
the message body. Allow the RevWalk to free the body after the commit
has been parsed to save memory.
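A small illustration of the RevWalk option presumably involved
(setRetainBody; illustrative usage, not the actual patch):

  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.revwalk.RevWalk;

  class TopologyOnlyWalk {
      static RevWalk create(Repository repo) {
          RevWalk rw = new RevWalk(repo);
          // Commit messages are not needed for bitmap preparation; let the
          // walk discard each body after parsing to save memory.
          rw.setRetainBody(false);
          return rw;
      }
  }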
Change-Id: I97d4a440c9fc313873fd224bd05b9d9e3dc575db
The WorkingTreeIterator.isEntryIgnored() should use the originally
requested file mode while descending to the file tree root and checking
ignore rules. The original code, when asking isEntryIgnored() on a
file, was using the directory mode instead if the .gitignore was not
located in the same directory.
Bug: 473506
Change-Id: I9f16ba714c3ea9e6585e9c11623270dbdf4fb1df
Signed-off-by: Andrey Loskutov <loskutov@gmx.de>