On a local non-NFS filesystem the .git/config file will be orphaned if
it is replaced by a new process while the current process is reading the
old file. The current process successfully continues to read the
orphaned file until it closes the file handle.
Since NFS servers do not keep track of open files, such a replacement
on an NFS filesystem does not orphan the old .git/config file; instead,
the old file is garbage collected (deleted). A stale file handle
exception is raised on NFS clients if the file is garbage collected
(deleted) on the server while it is being read. Because we no longer
have access to the old file in that case, the previous code would just
fail. However, reopening and rereading the file will succeed, since it
will open the new replacement file.
Since retrying the read is a viable strategy to deal with stale file
handles on the .git/config file, implement such a strategy.
Since it is possible that the .git/config file could be replaced again
while rereading it, loop on stale file handle exceptions, up to 5 extra
times, trying to read the .git/config file again, until we either read
the new file, or find that the file no longer exists. The limit of 5 is
arbitrary; it provides a safe upper bound that prevents an infinite loop
from consuming resources under some unforeseen, persistent error
condition.
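A minimal sketch of that retry loop (the helpers named here are
hypothetical, not the actual JGit code; exception handling is reduced
to the essentials):

  int retries = 0;
  while (true) {
      try {
          return readConfigFile();          // hypothetical: opens and reads .git/config
      } catch (FileNotFoundException gone) {
          return null;                      // the file no longer exists; nothing to read
      } catch (IOException e) {
          if (isStaleFileHandle(e) && retries < 5) {
              retries++;                    // replaced while reading; reopen and try again
              continue;
          }
          throw e;
      }
  }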
Change-Id: I6901157b9dfdbd3013360ebe3eb40af147a8c626
Signed-off-by: Nasser Grainawi <nasser@codeaurora.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Teach UploadPack to advertise the filter capability and support a
"filter" line in the request, accepting blob sizes only, if the
configuration variable "uploadpack.allowfilter" is true. This feature is
currently in the "master" branch of Git and, as of the time of writing,
is expected to be released in Git 2.17.
This is incomplete in that the filter-by-sparse-specification feature
also supported by Git is not included in this patch.
If a JGit server were to be patched with this commit, and a repository
on that server configured with RequestPolicy.ANY or
RequestPolicy.REACHABLE_COMMIT_TIP, a Git client built from the "master"
branch would be able to perform a partial clone.
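For illustration only (the commands are standard Git usage, not text
from this change), the server-side switch and a matching client
invocation might look like:

  $ git config uploadpack.allowfilter true     # on the served repository
  $ git clone --filter=blob:limit=1m https://example.com/repo.git   # client built from "master"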
Change-Id: If72b4b422c06ab432137e9e5272d353b14b73259
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Respect merge=lfs and diff=lfs attributes where required to replace (in
memory) the content of LFS pointers with the actual blob content from
the LFS storage (and vice versa when staging/merging).
Does not implement general support for merge/diff attributes for any
other use case apart from LFS.
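For illustration, these attributes typically come from a .gitattributes
entry such as the one "git lfs track" writes:

  *.bin filter=lfs diff=lfs merge=lfs -text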
Change-Id: Ibad8875de1e0bee8fe3a1dffb1add93111534cae
Signed-off-by: Markus Duft <markus.duft@ssi-schaefer.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
If JGit's built-in LFS support is enabled for the current repository (or
user/system), any existing pre-push hook will cause an exception for the
time being, as only a single pre-push hook is supported.
Thus either native pre-push hooks OR JGit built-in LFS support may be
enabled currently, but not both.
Change-Id: Ie7d2b90e26e948d9cca3d05a7a19489488c75895
Signed-off-by: Markus Duft <markus.duft@ssi-schaefer.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* Fix "can not" -> "cannot" in two messages
* Re-word "Cannot mkdir" to "Cannot create directory"
Change-Id: Ide0cec55eeeebd23bccc136257c80f47638ba858
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
* Fix "can not" -> "cannot" in two messages
* Re-word "Cannot mkdir" to "Cannot create directory"
Change-Id: Ide0cec55eeeebd23bccc136257c80f47638ba858
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
* Section and key names in git config files are case-insensitive.
* If an include directive is invalid, include the line in the
exception message.
* If inclusion of the included file fails, put the file name into
the exception message so that the user knows in which file the
problem is.
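For reference, a well-formed include directive looks like this (the
path is only an example); an invalid directive or a failing inclusion
now reports the offending line and file name:

  [include]
    path = /path/to/shared.config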
Change-Id: If920943af7ff93f5321b3d315dfec5222091256c
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
The reason for the change is LFS: when using a lot of LFS files,
checkout can take quite some time on larger repositories. To avoid
"hanging" UI, provide progress reporting.
Also implement (partial) progress reporting for cherry-pick, reset, and
revert, which use checkout internally.
The feature is also useful without LFS, so it is independent of it.
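A minimal usage sketch, assuming CheckoutCommand exposes a
setProgressMonitor method like other JGit commands (branch name and
repository directory are placeholders):

  try (Git git = Git.open(repoDir)) {
      git.checkout()
          .setName("feature")
          .setProgressMonitor(new TextProgressMonitor(new PrintWriter(System.err)))
          .call();
  }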
Change-Id: I021e764241f3c107eaf2771f6b5785245b146b42
Signed-off-by: Markus Duft <markus.duft@ssi-schaefer.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: Iaaefc2cbafbf083d6ab158b1c378ec69cc76d282
Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When a submodule is moved, the "name" field remains the same, while
the "path" field changes. Git uses the "name" field in .git/config
when a submodule is initialized, so this patch makes JGit do so too.
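For illustration (paths and URL are hypothetical), after a submodule
is moved from lib/foo to modules/foo:

  # .gitmodules: the path changes, the name stays "lib/foo"
  [submodule "lib/foo"]
    path = modules/foo
    url = https://example.com/foo.git

  # .git/config: the entry is still keyed by the unchanged name
  [submodule "lib/foo"]
    url = https://example.com/foo.git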
Change-Id: I48d8e89f706447b860c0162822a8e68170aae42b
Signed-off-by: David Turner <dturner@twosigma.com>
Previously, Config was using the same method for both escaping and
parsing subsection names and config values. The goal was presumably code
savings, but unfortunately, these two pieces of the git config format
are simply different.
In git v2.15.1, Documentation/config.txt says the following about
subsection names:
"Subsection names are case sensitive and can contain any characters
except newline (doublequote `"` and backslash can be included by
escaping them as `\"` and `\\`, respectively). Section headers cannot
span multiple lines. Variables may belong directly to a section or to
a given subsection."
And, later in the same documentation section, about values:
"A line that defines a value can be continued to the next line by
ending it with a `\`; the backquote and the end-of-line are stripped.
Leading whitespaces after 'name =', the remainder of the line after
the first comment character '#' or ';', and trailing whitespaces of
the line are discarded unless they are enclosed in double quotes.
Internal whitespaces within the value are retained verbatim.
Inside double quotes, double quote `"` and backslash `\` characters
must be escaped: use `\"` for `"` and `\\` for `\`.
The following escape sequences (beside `\"` and `\\`) are recognized:
`\n` for newline character (NL), `\t` for horizontal tabulation (HT,
TAB) and `\b` for backspace (BS). Other char escape sequences
(including octal escape sequences) are invalid."
The main important differences are that subsection names have a limited
set of supported escape sequences, and do not support newlines at all,
either escaped or unescaped. Arguably, it would be easy to support
escaped newlines, but C git simply does not:
$ git config -f foo.config $'foo.bar\nbaz.quux' value
error: invalid key (newline): foo.bar
baz.quux
I468106ac was an attempt to fix one bug in escapeValue, around leading
whitespace, without having to rewrite the whole escaping/parsing code.
Unfortunately, because escapeValue was used for escaping subsection
names as well, this made it possible to write invalid config files any
time Config#toText is called with a subsection name containing trailing
whitespace, like {foo }.
Rather than pile hacks on top of hacks, fix it for real by largely
rewriting the escaping and parsing code.
In addition to fixing escape sequences, fix (and write tests for) a few
more issues in the old implementation:
* Now that we can properly parse it, always emit newlines as "\n" from
escapeValue, rather than the weird (but still supported) syntax with a
non-quoted trailing literal "\n\" before the newline. In addition to
producing more readable output and matching the behavior of C git,
this makes the escaping code much simpler.
* Disallow '\0' entirely within both subsection names and values, since
due to Unix command line argument conventions it is impossible to pass
such values to "git config".
* Properly preserve intra-value whitespace when parsing, rather than
collapsing it all to a single space.
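A sketch of the intended round-trip behavior using the Config API
(the section, subsection, and values are just examples):

  static void roundTrip() throws ConfigInvalidException {
      Config config = new Config();
      config.setString("branch", "topic ", "remote", "origin");  // subsection with trailing space
      config.setString("core", null, "pager", "less -FX\nmore"); // value containing a newline
      String text = config.toText();   // now always emits valid, properly escaped syntax
      Config copy = new Config();
      copy.fromText(text);             // parses what toText wrote
      // both the subsection name and the value survive the round trip unchanged
  }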
Change-Id: I304f626b9d0ad1592c4e4e449a11b136c0f8b3e3
If a process executed by FS#readPipe lived for a while after closing
stderr, FS#GobblerThread#run failed with an IllegalThreadStateException
when accessing p.exitValue() for a process that was still alive.
Add Process#waitFor calls to wait for the process completion.
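The essence of the fix, as a sketch (interrupt handling elided):

  Process p = new ProcessBuilder(command).start();
  // ... gobble stdout/stderr ...
  p.waitFor();              // block until the process has really terminated
  int rc = p.exitValue();   // safe now; no IllegalThreadStateException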
Bug: 528335
Change-Id: I87e0b6f9ad0b995dbce46ddfb877e33eaf3ae5a6
Signed-off-by: Dmitry Pavlenko <pavlenko@tmatesoft.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
JSch unconditionally overrides the user name given in the connection
URI by the one found in ~/.ssh/config (if that does specify one for
the used host). If the SSH config file has a different user name,
we'll end up using the wrong name, which typically results in an
authentication failure or in Eclipse/EGit asking for a password for
the wrong user.
Unfortunately there is no way to prevent or circumvent this JSch
behavior up front; it occurs already in the Session constructor at
com.jcraft.jsch.Session() and in the Session.applyConfig() method. And
while there is a Session.setUserName() that would enable us to correct
this, that latter method has package visibility only.
So resort to reflection to invoke that setUserName() method to ensure
that Jsch uses the user name from the URI, if there is one.
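A sketch of the reflective workaround (variable names are
illustrative; reflection exceptions elided):

  Method setUserName = Session.class.getDeclaredMethod("setUserName", String.class);
  setUserName.setAccessible(true);
  setUserName.invoke(session, uriUser);  // force the user name from the URI onto the session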
Bug: 526778
Change-Id: Ia327099b5210a037380b2750a7fd76ff25c41a5a
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Allow creating symbolic references with link, and deleting them or
switching to ObjectId with unlink. How this happens is up to the
individual RefDatabase.
The default implementation detaches RefUpdate if a symbolic reference
is involved, supporting these command instances on RefDirectory.
Unfortunately the packed-refs file does not support storing symrefs,
so atomic transactions involving more than one symref command are
failed early.
Updating InMemoryRepository is deferred until reftable lands, as I
plan to switch InMemoryRepository to use reftable for its internal
storage representation.
Change-Id: Ibcae068b17a2fc6d958f767f402a570ad88d9151
Signed-off-by: Minh Thai <mthai@google.com>
Signed-off-by: Terry Parker <tparker@google.com>
The ReflogWriter constructor just took a Repository and called
getDirectory() on it to figure out the reflog dirs, but not all
Repository instances use this storage format for reflogs, so it's
incorrect to attempt to use ReflogWriter when there is not a
RefDirectory directly involved. In practice, ReflogWriter was mostly
only used by the implementation of RefDirectory, so enforcing this is
mostly just shuffling around calls in the same internal package.
The one exception is StashDropCommand, which writes to a reflog lock
file directly. This was a reasonable implementation decision, because
there is no general reflog interface in JGit beyond using
(Batch)RefUpdate to write new entries to the reflog. So to implement
"git stash drop <N>", which removes an arbitrary element from the
reflog, it's fair to fall back to the RefDirectory implementation.
Creating and using a more general interface is well beyond the scope of
this change.
That said, the old behavior of writing out the reflog file even if
that's not the reflog format used by the given Repository is clearly
wrong. Fail fast in this case instead.
Change-Id: I9bd4b047bc3e28a5607fd346ec2400dde9151730
When an https connection could not be established because the SSL
handshake was unsuccessful, TransportHttp would unconditionally
throw a TransportException.
Other https clients, like web browsers or some SVN clients, handle
this more gracefully. If there's a problem with the server
certificate, they inform the user and give them the option to
connect to the server all the same.
In git, this would correspond to dynamically setting http.sslVerify
to false for the server.
Implement this using the CredentialsProvider to inform and ask the
user. We offer three choices:
1. skip SSL verification for the current git operation, or
2. skip SSL verification for the server always from now on for
requests originating from the current repository, or
3. always skip SSL verification for the server from now on.
For (1), we just suppress SSL verification for the current instance of
TransportHttp.
For (2), we store a http.<uri>.sslVerify = false setting for the
original URI in the repo config.
For (3), we store the http.<uri>.sslVerify setting in the git user
config.
Adapt the SmartClientSmartServerSslTest such that it uses this
mechanism instead of setting http.sslVerify up front.
Improve SimpleHttpServer so that it can also be set up with HTTPS
support, in anticipation of an EGit SWTBot UI test verifying that
cloning via HTTPS from a server whose certificate doesn't validate
pops up the correct dialog, and that cloning subsequently proceeds
successfully if the user decides to skip SSL verification.
Bug: 374703
Change-Id: Ie1abada9a3d389ad4d8d52c2d5265d2764e3fb0e
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Git has a rather elaborate mechanism to specify HTTP configuration
options per URL, based on pattern matching the URL against "http"
subsection names.[1] The URLs used for this matching are always the
original URLs; redirected URLs do not participate.
* Scheme and host must match exactly case-insensitively.
* An optional user name must match exactly.
* Ports must match exactly after default ports have been filled in.
* The path of a subsection, if any, must match a segment prefix of
the path of the URL.
* Matches with user name take precedence over equal-length path
matches without, but longer path matches are preferred over
shorter matches with user name.
Implement this for JGit. Factor out the HttpConfig from TransportHttp
and implement the matching and override mechanism.
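For illustration (URLs are hypothetical), with a configuration like

  [http]
    sslVerify = true
  [http "https://example.com"]
    postBuffer = 524288
  [http "https://user@example.com/project"]
    sslVerify = false

a request to https://user@example.com/project/repo.git would take
postBuffer from the host-only match and sslVerify = false from the
longer, user-qualified path match.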
The set of supported settings is still the same; JGit currently
supports only followRedirects, postBuffer, and sslVerify, plus the
JGit-specific maxRedirects key.
Add tests for path normalization and prefix matching only on segment
separators, and use the new mechanism in SmartClientSmartServerSslTest
to disable sslVerify selectively for only the test server URLs.
Compare also bug 374703 and bug 465492. With this commit it would be
possible to set sslVerify to false for only the git server using a
self-signed certificate instead of having to switch it off globally
via http.sslVerify.
[1] https://git-scm.com/docs/git-config
Change-Id: I42a3c2399cb937cd7884116a2a32fcaa7a418fcb
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
The addition of "tooManyRedirects" in commit 7ac1bfc ("Do
authentication re-tries on HTTP POST") was an error I didn't
catch after rebasing that change. That message had been renamed
in the earlier commit e17bfc9 ("Add support to follow HTTP
redirects") to "redirectLimitExceeded".
Also make sure we always use the TransportException(URIish, ...)
constructor; it'll prefix the message given with the sanitized URI.
Change messages to remove the explicit mention of that URI inside the
message. Adapt tests that check the expected exception message text.
For the info logging of redirects, remove a potentially present
password component in the URI to avoid leaking it into the log.
Change-Id: I517112404757a9a947e92aaace743c6541dce6aa
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
There is at least one git server out there (GOGS) that does
not require authentication on the initial GET for
info/refs?service=git-receive-pack but that _does_ require
authentication for the subsequent POST to actually do the push.
This occurs on GOGS with public repositories; for private
repositories it wants authentication up front.
Handle this behavior by adding 401 handling to our POST request.
Note that this is suboptimal; we'll re-send the push data at
least twice if an authentication failure on POST occurs. It
would be much better if the server required authentication
up-front in the GET request.
Added authentication unit tests (using BASIC auth) to the
SmartClientSmartServerTest:
- clone with authentication
- clone with authentication but lacking CredentialsProvider
- clone with authentication and wrong password
- clone with authentication after redirect
- clone with authentication only on POST, but not on GET
Also tested manually in the wild using repositories at try.gogs.io.
That server offers only BASIC auth, so the other paths
(DIGEST, NEGOTIATE, fall back from DIGEST to BASIC) are untested
and I have no way to test them.
* public repository: GET unauthenticated, POST authenticated
Also tested after clearing the credentials and then entering a
wrong password: correctly asks three times during the HTTP
POST for user name and password, then gives up.
* private repository: authentication already on GET; then gets
applied correctly initially to the POST request, which succeeds.
Also fix the authentication to use the credentials for the redirected
URI if redirects had occurred. We must not present the credentials
for the original URI in that case. Consider a malicious redirect A->B:
this would allow server B to harvest the user credentials for server
A. The unit test for authentication after a redirect also tests for
this.
Bug: 513043
Change-Id: I97ee5058569efa1545a6c6f6edfd2b357c40592a
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
ReftableReader provides sequential scanning support over all
references, a range of references within a subtree (such as
"refs/heads/"), and lookup of a single reference. Reads can be
accelerated by an index block, if it was created by the writer.
The BlockSource interface provides an abstraction to read from the
reftable's backing storage, supporting a future commit to connect
to JGit DFS and the DfsBlockCache.
Change-Id: Ib0dc5fa937d0c735f2a9ff4439d55c457fea7aa8
This is a simple writer to create reftable formatted files. Follow-up
commits will add support for reading from reftable, debugging
utilities, and tests.
Change-Id: I3d520c3515c580144490b0b45433ea175a3e6e11
git-core follows HTTP redirects so JGit should also provide this.
Implement config setting http.followRedirects with possible values
"false" (= never), "true" (= always), and "initial" (only on GET, but
not on POST).[1]
We must do our own redirect handling and cannot rely on the support
that the underlying real connection may offer. At least the JDK's
HttpURLConnection has two features that get in the way:
* it does not allow cross-protocol redirects and thus fails on
http->https redirects (for instance, on GitHub).
* it translates a redirect after a POST to a GET unless the system
property "http.strictPostRedirect" is set to true. We don't want
to manipulate that system setting nor require it.
Additionally, git has its own rules about what redirects it accepts;[2]
for instance, it does not allow a redirect that adds query arguments.
We handle response codes 301, 302, 303, and 307 as per RFC 2616.[3]
On POST we do not handle 303, and we follow redirects only if
http.followRedirects == true.
Redirects are followed only a certain number of times. There are two
ways to control that limit:
* by default, the limit is given by the http.maxRedirects system
property that is also used by the JDK. If the system property is
not set, the default is 5. (This is much lower than the JDK default
of 20, but I don't see the value of following so many redirects.)
* this can be overwritten by a http.maxRedirects git config setting.
The JGit http.* git config settings are currently all global; JGit has
no support yet for URI-specific settings "http.<pattern>.name". Adding
support for that is well beyond the scope of this change.
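For illustration, the corresponding settings in a git config might
look like:

  [http]
    followRedirects = initial
    maxRedirects = 5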
Like git-core, we log every redirect attempt (LOG.info) so that users
may know about the redirection having occurred.
Extends the test framework to configure an AppServer with HTTPS support
so that we can test cloning via HTTPS and redirections involving HTTPS.
[1] https://git-scm.com/docs/git-config
[2] 6628eb41db
[3] https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
CQ: 13987
Bug: 465167
Change-Id: I86518cb76842f7d326b51f8715e3bbf8ada89859
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
JGit already had some fsck-like classes, such as ObjectChecker, which
can check an individual object.
Add a read-only FsckPackParser which parses all objects within a pack
file and checks them with ObjectChecker. It also checks the pack index
file against the object information from the pack parser.
Change-Id: Ifd8e0d28eb68ff0b8edd2b51b2fa3a50a544c855
Signed-off-by: Zhen Chen <czhen@google.com>
Some downstream code checks whether a ReceiveCommand is a create or a
delete based on the type field. Other downstream code (in particular a
good chunk of Gerrit code I wrote) checks the same thing by comparing
oldId/newId to zeroId. Unfortunately, there were no strict checks in the
constructor ensuring that zeroId is only set for oldId/newId if the
type argument corresponds, so a caller that passed mismatched IDs and
types would observe completely undefined behavior as a result. This is
and always has been a misuse of the API; throw IllegalArgumentException
so the caller knows that it is a misuse.
Similarly, throw from the constructor if oldId/newId are null. The
non-nullness requirement was already documented. Fix RefDirectoryTest to
not do the wrong thing.
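A short sketch of what is now enforced (ids and reference names are
placeholders):

  // OK: a create pairs zeroId() with Type.CREATE
  ReceiveCommand create = new ReceiveCommand(
      ObjectId.zeroId(), newId, "refs/heads/master", ReceiveCommand.Type.CREATE);

  // Now throws IllegalArgumentException: zeroId() as newId contradicts Type.UPDATE
  ReceiveCommand bogus = new ReceiveCommand(
      oldId, ObjectId.zeroId(), "refs/heads/master", ReceiveCommand.Type.UPDATE);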
Change-Id: Ie2d0bfed8a2d89e807a41925d548f0f0ce243ecf
When running an automatic GC on a FileRepository, when the caller
passes a NullProgressMonitor, run the GC in a background thread. Use a
thread pool of size 1 to limit the number of background threads spawned
for background gc in the same application. In the next minor release we
can make the thread pool configurable.
In some cases, the auto GC limit is lower than the true number of
unreachable loose objects, so auto GC will run after every (e.g.) fetch
operation. This leads to the appearance of poor fetch performance.
Since these GCs will never make progress (until either the objects
become referenced, or the two-week timeout expires), blocking on them
simply reduces throughput.
In the event that an auto GC would make progress, it's still OK if it
runs in the background. The progress will still happen.
This matches the behavior of regular git.
Git (and now jgit) uses the lock file for gc.log to prevent simultaneous
runs of background gc. Further, it writes errors to gc.log, and won't
run background gc if that file is present and recent. If gc.log is too
old (according to the config gc.logexpiry), it will be ignored.
Change-Id: I3870cadb4a0a6763feff252e6eaef99f4aa8d0df
Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Android wants them to work, and we're only interested in them for bare
repos, so add them just for that.
Make sure to use symlinks instead of just using the copyfile
implementation. Some scripts look up where they're actually located in
order to find related files, so they need the link back to their
project.
Change-Id: I929b69b2505f03036f69e25a55daf93842871f30
Signed-off-by: Dan Willemsen <dwillemsen@google.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Jeff Gaston <jeffrygaston@google.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
In the repo manifest documentation [1] the fetch attribute is marked
as "#REQUIRED".
If the fetch attribute is not specified, this previously resulted in a
NullPointerException. Throw a SAXException instead.
[1] https://gerrit.googlesource.com/git-repo/+/master/docs/manifest-format.txt
Change-Id: Ib8ed8cee6074fe6bf8f9ac6fc7a1664a547d2d49
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
All that's really required to run a merge operation is a single
ObjectInserter, from which we can construct a RevWalk, plus a Config
that declares a diff algorithm. Provide some factory methods that don't
take Repository.
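A usage sketch, assuming the new factory method takes the
ObjectInserter and Config directly (ids are placeholders; I/O handling
elided):

  Merger merger = MergeStrategy.RECURSIVE.newMerger(inserter, config);
  boolean clean = merger.merge(oursCommitId, theirsCommitId);  // no Repository needed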
Change-Id: Ib884dce2528424b5bcbbbbfc043baec1886b9bbd
This fixes the Bazel build:
in srcs attribute of java_library rule //org.eclipse.jgit:jgit:
file '//org.eclipse.jgit:src/org/eclipse/jgit/util/sha1/SHA1.recompress'
is misplaced here (expected .java, .srcjar or .properties).
Another option that was considered was to exclude the non-source files.
Change-Id: I7083f27a4a49bf6681c85c7cf7b08a83c9a70c77
Signed-off-by: David Ostrovsky <david@ostrovsky.org>
The FileNotFoundException is typically raised in four conditions:
1. file doesn't exist
2. incompatible read vs. read/write open modes
3. filesystem locking
4. temporary lack of resources (e.g. too many open files)
1. is already managed and 2. would never happen as packs are not
overwritten, while with 3. and 4. it is worth logging the exception and
retrying to read the pack again.
Log transient errors using an exponential backoff strategy to avoid
flooding the logs with the same error if consecutive retries to access
the pack fail repeatedly.
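A minimal sketch of the retry-with-backoff idea (names are
illustrative, not the actual implementation; interrupt handling
elided):

  int attempts = 0;
  long delayMs = 100;
  while (true) {
      try {
          return readPackFile();                    // hypothetical read of the pack
      } catch (FileNotFoundException transientError) {
          if (++attempts > MAX_ATTEMPTS)
              throw transientError;                 // give up and propagate
          LOG.warn("Transient error reading pack, retrying", transientError);
          Thread.sleep(delayMs);                    // back off before the next attempt
          delayMs *= 2;                             // exponential backoff between retries
      }
  }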
Bug: 513435
Change-Id: I03c6f6891de3c343d3d517092eaa75dba282c0cd
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Update SHA1 class to include a Java port of sha1dc[1]'s ubc_check,
which can detect the attack pattern used by the SHAttered[2] authors.
Given the shattered example files that have the same SHA-1, this
modified implementation can identify there is risk of collision given
only one file in the pair:
$ jgit ...
[main] WARN org.eclipse.jgit.util.sha1.SHA1 - SHA-1 collision 38762cf7f55934b34d179ae6a4c80cadccbb7f0a
When JGit detects a probable collision, the SHA1 class now warns on
the logger, reporting the object's SHA-1 hash, and then throws a
Sha1CollisionException to the caller.
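A usage sketch of the updated class (the byte array is a placeholder):

  SHA1 md = SHA1.newInstance();
  md.update(data);                    // hash the candidate object's content
  try {
      ObjectId id = md.toObjectId();  // warns and throws if a collision pattern is detected
  } catch (Sha1CollisionException e) {
      // reject the object; it matches a SHAttered-style attack pattern
  }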
According to the paper[3] by Marc Stevens, the probability of a false
positive identification of a collision is about 14 * 2^(-160),
sufficiently low that any detected collision is likely to be a real
collision.
git-core[4] may adopt sha1dc before the system migrates to an entirely
new hash function. This commit enables JGit to remain compatible with
that move to sha1dc, and helps protect users by warning if attacks
similar to SHAttered are identified.
Performance declined about 8% (detection off), now:
MessageDigest 238.41 MiB/s
MessageDigest 244.52 MiB/s
MessageDigest 244.06 MiB/s
MessageDigest 242.58 MiB/s
SHA1 216.77 MiB/s (was ~240.83 MiB/s)
SHA1 220.98 MiB/s
SHA1 221.76 MiB/s
SHA1 221.34 MiB/s
This decline in throughput is attributed to the step loop unrolling in
compress(), which was necessary to easily fit the UbcCheck logic into
the hash function. Using helper functions s1-s4 reduces the code
explosion, providing acceptable throughput.
With detection enabled (default):
SHA1 detectCollision 180.12 MiB/s
SHA1 detectCollision 181.59 MiB/s
SHA1 detectCollision 181.64 MiB/s
SHA1 detectCollision 182.24 MiB/s
sha1dc (native C) ~206.28 MiB/s
sha1dc (native C) ~204.47 MiB/s
sha1dc (native C) ~203.74 MiB/s
Average time across 100,000 calls to hash 4100 bytes (such as a commit
or tree) for the various algorithms available to JGit also shows SHA1
is slower than MessageDigest, but by an acceptable margin:
MessageDigest 17 usec
SHA1 18 usec
SHA1 detectCollision 22 usec
Time to index-pack for git.git (217982 objects, 69 MiB) has increased:
MessageDigest SHA1 w/ detectCollision
------------- -----------------------
20.12s 25.25s
19.87s 25.48s
20.04s 25.26s
avg 20.01s 25.33s +26%
Being implemented in Java with these additional safety checks is
clearly a penalty, but throughput is still acceptable given the
increased security against object name collisions.
[1] https://github.com/cr-marcstevens/sha1collisiondetection
[2] https://shattered.it/
[3] https://marc-stevens.nl/research/papers/C13-S.pdf
[4] https://public-inbox.org/git/20170223230621.43anex65ndoqbgnf@sigill.intra.peff.net/
Change-Id: I9fe4c6d8fc5e5a661af72cd3246c9e67b1b9fee6
Place a configurable upper bound on the amount of command data
received from clients during `git push`. The limit is applied to the
encoded wire protocol format, not the JGit in-memory representation.
This allows clients to flexibly use the limit; shorter reference names
allow for more commands, longer reference names permit fewer commands
per batch.
Based on data gathered from many repositories at $DAY_JOB, the average
reference name is well under 200 bytes when encoded in UTF-8 (the wire
encoding). The new 3 MiB default receive.maxCommandBytes allows about
11,155 references in a single `git push` invocation. A Gerrit Code
Review system with six-digit change numbers could still encode 29,399
references in the 3 MiB maxCommandBytes limit.
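(For reference, each pushed command on the wire is roughly two
40-character object ids plus separators and the reference name, i.e.
about 282 bytes for a 200-byte name, and 3 MiB / 282 bytes is
approximately 11,155 commands, roughly matching the figure above.)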
Change-Id: I84317d396d25ab1b46820e43ae2b73943646032c
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Logging the repository name makes it easier to track down what is
incorrectly closing a repository.
Change-Id: I42a8bdf766c0e67f100adbf76d9616584e367ac2
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
ObjectDirectory.getShallowCommits should throw an IOException
instead of an InvalidArgumentException if invalid SHAs are present
in .git/shallow (as this file is usually edited by a human).
Change-Id: Ia3a39d38f7aec4282109c7698438f0795fbec905
Signed-off-by: Marc Strapetz <marc.strapetz@syntevo.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
MonotonicClock can be implemented to provide more certainty about
time than the standard System.currentTimeMillis() can provide. This
can be used by classes such as PersonIdent and Ketch to rely on
more certainty about time moving in a strictly ascending order.
Gerrit Code Review can also leverage this interface through its
embedding of JGit and use MonotonicClock and ProposedTimestamp to
provide stronger assurance that NoteDb time is moving forward.
Change-Id: I1a3cbd49a39b150a0d49b36d572da113ca83a786
This method pair allows the caller to read and modify the description
file that is traditionally used by gitweb and cgit when rendering a
repository on the web.
Gerrit Code Review has offered this feature for years as part of
its GitRepositoryManager interface, but it's fundamentally a feature
of JGit and its Repository abstraction.
git-core typically initializes a repository with a default value
inside the description file. During getDescription() this string
is converted to null as it is never a useful description.
Change-Id: I0a457026c74e9c73ea27e6f070d5fbaca3439be5
Work around issues with JSch not handling interrupts by
isolating the JSch interactions onto another thread.
Run write and flush on a single-threaded Executor using
simple Callable operations wrapping the method calls,
waiting on the future to determine the outcome before
allowing the caller to continue.
If any operation was interrupted the state of the stream
becomes fuzzy at close time. The implementation tries to
interrupt the pending write or flush, but this is very
likely to corrupt the stream object, so exceptions are
ignored during such a dirty close.
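A minimal sketch of the isolation pattern (not the actual stream
class; out and buf are placeholders):

  ExecutorService executor = Executors.newSingleThreadExecutor();
  Future<Void> pending = executor.submit(() -> {
      out.write(buf);         // the JSch-backed stream is touched only on this thread
      out.flush();
      return null;
  });
  try {
      pending.get();          // the caller waits for the outcome before continuing
  } catch (InterruptedException e) {
      pending.cancel(true);   // try to interrupt the pending write; the stream is
                              // likely corrupted, so the close is treated as dirty
      Thread.currentThread().interrupt();
  } catch (ExecutionException e) {
      throw new IOException(e.getCause());
  }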
Change-Id: I42e3ba3d8c35a2e40aad340580037ebefbb99b53
If a value is used which isn't a power of 2, there is a high chance of
a java.lang.ArrayIndexOutOfBoundsException and
org.eclipse.jgit.errors.CorruptObjectException due to a mismatching
assumption for the DfsBlockCache#blockSizeShift parameter.
Change-Id: Ib348b3704edf10b5f93a3ffab4fa6f09cbbae231
Signed-off-by: Philipp Marx <smigfu@googlemail.com>
This SIOOBE happens reproducibly when trying to access
a repository containing Cygwin symlinks.
Signed-off-by: Marc Strapetz <marc.strapetz@syntevo.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Git barfs on these (and they don't make any sense), so we certainly
shouldn't write them.
Change-Id: I3faf8554a05f0fd147be2e63fbe55987d3f88099
Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
JGit supports smudge filters defined in the repository configuration. The
filters are implemented as external programs that accept the original
content (as seen in git's object database) on stdin and emit the
filtered content on stdout. This content is then written to the file in
the working tree. To run such a filter JGit has to start an external
process and pump data into/from this process.
This commit adds support for built-in smudge filters which are
implemented in Java and which are executed by jgit's main thread. When a
filter is defined in the configuration as
"jgit://builtin/<filterDriverName>/smudge" then JGit will lookup in a
static map whether a builtin filter is registered under this name. If
found such a filter is called to do the filtering.
The functionality in this commit requires that a program using JGit
explicitly calls the JGit API to register built-in implementations for
specific smudge filters. In follow-up commits configuration parameters
will be added which trigger such registrations.
Change-Id: Ia743aa0dbed795e71e5792f35ae55660e0eb3c24
Change-Id: I6691b454404dd4db3c690ecfc7515de765bc2ef7
Signed-off-by: Martin Goellnitz <m.goellnitz@outlook.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
- enhance FS.readPipe to throw an exception if the external command
fails, so that the caller can handle the command failure
- reduce log level to warning if system git config does not exist
- improve log message
Bug: 476639
Change-Id: I94ae3caec22150dde81f1ea8e1e665df55290d42
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
cgit changed the --depth parameter to mean the total depth of history
rather than the depth of ancestors to be returned [1]. JGit still uses
the latter meaning, so update it to match cgit.
depth=0 still means a non-shallow clone. depth=1 now means only the
wants rather than the wants and their direct parents.
This is accomplished by changing the semantic meaning of "depth" in
UploadPack and PackWriter to mean the total depth of history desired,
while keeping "depth" in DepthWalk.{RevWalk,ObjectWalk} to mean
the depth of traversal. Thus UploadPack and PackWriter always
initialize their DepthWalks with "depth-1".
[1] upload-pack: fix off-by-one depth calculation in shallow clone
https://code.googlesource.com/git/+/682c7d2f1a2d1a5443777237450505738af2ff1a
Change-Id: I87ed3c0f56c37e3491e367a41f5e555c4207ff44
Signed-off-by: Terry Parker <tparker@google.com>