Do not add an artificial line break to the message, since it may become
much wider due to the embedded exception messages anyway.
The layout shall be controlled by the EGit-supplied message dialog using
layout constraints.
Bug: 540537
Change-Id: I4257b52e5e59689dfcbab47bd7c075b3fd031837
Signed-off-by: Michael Keppler <Michael.Keppler@gmx.de>
The only reference to this externalized text was deleted in c88d34b0.
Change-Id: Iecc7cc89192d69431dddb6550a02f66f0b09accc
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
In C Git, when a client fetches with "git fetch --shallow-since=<date>
origin <ref>", and all commits reachable from <ref> are older than
<date>, the server dies with a message "no commits selected for shallow
requests". That is, (1) the --shallow-since filter applies to the commit
pointed to by the ref itself, and (2) there is a check that at least one
commit is not filtered out. (The pack-protocol.txt documentation does
not describe this, but the C implementation does this.)
The implementation in commit 1bb430dc21 ("UploadPack: support
deepen-since in protocol v2", 2018-09-27) does neither (1) nor (2), so
do both of these.
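For illustration only (this is not the actual UploadPack code and the
helper name is invented), the combined check could look roughly like
this using JGit's RevWalk and CommitTimeRevFilter:

  import java.io.IOException;
  import org.eclipse.jgit.errors.PackProtocolException;
  import org.eclipse.jgit.revwalk.RevCommit;
  import org.eclipse.jgit.revwalk.RevWalk;
  import org.eclipse.jgit.revwalk.filter.CommitTimeRevFilter;

  // (1) the filter applies to the tip commit itself; (2) at least one
  // commit must survive the filter, otherwise the request is rejected.
  static void checkShallowSince(RevWalk walk, RevCommit tip,
      long cutoffEpochMillis) throws IOException {
    walk.reset();
    walk.setRevFilter(CommitTimeRevFilter.after(cutoffEpochMillis));
    walk.markStart(tip);
    if (walk.next() == null)
      throw new PackProtocolException(
          "no commits selected for shallow requests");
  }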
Change-Id: I9946327a71627626ecce34ca2d017d2add8867fc
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
First-want line parsing accepts lines with optional whitespace, but the
spec strictly requires a single space.
Validate the line, enforcing that there is a space between the oid and
the capabilities list.
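A rough sketch of the stricter parse (method name and messages are
illustrative, not the actual JGit code):

  // "want " + 40-hex oid, then either end of line or exactly one
  // space followed by the capability list.
  static String firstWantCaps(String line) {
    if (!line.startsWith("want ") || line.length() < 45
        || !line.substring(5, 45).matches("[0-9a-f]{40}"))
      throw new IllegalArgumentException("invalid first want line: " + line);
    if (line.length() == 45)
      return "";
    if (line.charAt(45) != ' ')
      throw new IllegalArgumentException(
          "expected single space before capabilities");
    return line.substring(46);
  }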
Change-Id: I45ada67030e0720f9b402c298be18c7518c799b1
Signed-off-by: Ivan Frade <ifrade@google.com>
Previous commits block the addition of dangerous .gitmodules files to
the repo, but some could have been committed before those safeguards
were in place.
Add a check in DfsFsck to validate the .gitmodules files in the repo.
Use the same validation as ReceivePack, translating the results to
FsckErrors.
Note that *all* .gitmodules files in the storage will be checked, not
only the latest version.
Change-Id: I040cf1f31a779419aad0292ba5e6e76eb7f32b66
Signed-off-by: Ivan Frade <ifrade@google.com>
The main concern is submodule URLs starting with '-', which could pass
as options to an unguarded tool.
Pass the ids of blobs identified as .gitmodules files by the
ObjectChecker through the parser. Load the blobs and parse/validate
them in SubmoduleValidator.
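A simplified sketch of this kind of validation (not the actual
SubmoduleValidator code; the method name and message are invented):

  import org.eclipse.jgit.errors.ConfigInvalidException;
  import org.eclipse.jgit.lib.Config;

  static void checkGitmodulesBlob(String blobText)
      throws ConfigInvalidException {
    Config cfg = new Config();
    cfg.fromText(blobText);
    for (String name : cfg.getSubsections("submodule")) {
      String url = cfg.getString("submodule", name, "url");
      String path = cfg.getString("submodule", name, "path");
      // Reject names with '..' components and urls/paths that could be
      // mistaken for command line options.
      if (name.contains("..")
          || (url != null && url.startsWith("-"))
          || (path != null && path.startsWith("-")))
        throw new IllegalArgumentException(
            "invalid .gitmodules entry: " + name);
    }
  }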
Change-Id: Ia0cc32ce020d288f995bf7bc68041fda36be1963
Signed-off-by: Ivan Frade <ifrade@google.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
In C git versions before 2.19.1, the submodule is fetched by running
"git clone <uri> <path>". A URI starting with "-" would be interpreted
as an option, causing security problems. See CVE-2018-17456.
Refuse to add submodules with URIs, names or paths starting with "-",
which could be confused with command line arguments.
[jn: backported to JGit 4.7.y, bringing portions of Masaya Suzuki's
dotdot check code in v5.1.0.201808281540-m3~57 (Add API to specify
the submodule name, 2018-07-12) along for the ride]
Change-Id: I2607c3acc480b75ab2b13386fe2cac435839f017
Signed-off-by: Ivan Frade <ifrade@google.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: Icec16c01853a3f5ea016d454b3d48624498efcce
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
(cherry picked from commit 5e68fe245f)
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
FS_POSIX.createNewFile(File) failed to properly implement atomic file
creation on NFS using the algorithm [1]:
- the name of the hard link must be unique to prevent two processes
using different NFS clients from trying to create the same link. That
would render nlink useless for detecting a race.
- the hard link must be retained for the lifetime of the file since we
don't know when the state of the involved NFS clients will be
synchronized. This depends on NFS configuration options.
To fix these issues we need to change the signature of createNewFile
which would break API. Hence deprecate the old method
FS.createNewFile(File) and add a new method createNewFileAtomic(File).
The new method returns a LockToken which needs to be retained by the
caller (LockFile) until all involved NFS clients synchronized their
state. Since we don't know when the NFS caches are synchronized we need
to retain the token until the corresponding file is no longer needed.
The LockToken must be closed after the LockFile using it has been
committed or unlocked. On Posix, if core.supportsAtomicCreateNewFile =
false this will delete the hard link which guarded the atomic creation
of the file. When acquiring the lock fails ensure that the hard link is
removed.
[1] https://www.time-travellers.org/shane/papers/NFS_considered_harmful.html
also see file creation flag O_EXCL in
http://man7.org/linux/man-pages/man2/open.2.html
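A much simplified sketch of the hard-link scheme from [1]; the real
createNewFileAtomic differs in details and error handling:

  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.util.UUID;

  static boolean createNewFileAtomicSketch(Path lock) throws IOException {
    Files.createFile(lock); // fails if the file already exists
    // Unique link name, so different NFS clients never race on the
    // same link.
    Path link = lock.resolveSibling(
        lock.getFileName() + "." + UUID.randomUUID());
    Files.createLink(link, lock);
    // Both names point at the same inode iff nlink == 2; then we won
    // the race. The link must be kept, and removed together with the
    // lock file later, once the involved NFS clients have synchronized
    // their state.
    int nlink = ((Number) Files.getAttribute(lock, "unix:nlink")).intValue();
    return nlink == 2;
  }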
Change-Id: I84fcb16143a5f877e9b08c6ee0ff8fa4ea68a90d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
If the progress monitor is cancelled, break the loops in rename
detection by throwing a CanceledException.
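A minimal sketch of the check added inside the rename detection loops;
the message text is illustrative and JGit's
org.eclipse.jgit.errors.CanceledException is assumed:

  import org.eclipse.jgit.errors.CanceledException;
  import org.eclipse.jgit.lib.ProgressMonitor;

  static void checkCancelled(ProgressMonitor pm) throws CanceledException {
    if (pm.isCancelled())
      throw new CanceledException("rename detection was canceled");
  }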
Bug: 536324
Change-Id: Ia3511fb749d2a5d45005e72c156b874ab7a0da26
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Completely remove the empty directories under refs/<namespace>,
including the first-level partition of the changes, when they are
entirely empty.
Bug: 536777
Change-Id: I88304d34cc42435919c2d1480258684d993dfdca
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Currently SubmoduleAddCommand always uses the path as submodule name.
This patch lets the caller specify a submodule name.
SubmoduleUpdateCommand still does not make use of the submodule name
(see bug 535027) but Git does. To avoid triggering CVE-2018-11235,
do some validation on the name to avoid '..' path components.
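A possible usage sketch, assuming the new setter is called setName;
path, URI and name values are examples:

  import java.io.File;
  import java.io.IOException;
  import org.eclipse.jgit.api.Git;
  import org.eclipse.jgit.api.errors.GitAPIException;

  static void addNamedSubmodule(File workTree)
      throws IOException, GitAPIException {
    try (Git git = Git.open(workTree)) {
      git.submoduleAdd()
          .setName("lib-a")        // validated against '..' components
          .setPath("modules/lib-a")
          .setURI("https://example.com/lib-a.git")
          .call()                  // returns the submodule repository
          .close();
    }
  }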
[jn: fleshed out commit message, mostly to work around flaky CI]
Change-Id: I6879c043c6d7973556e2080387f23c246e3d76a5
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Signed-off-by: Jonathan Nieder <jrn@google.com>
Try to give as much information as possible. The connection's
response message might contain additional hints as to why the
connection could not be established.
Bug: 536541
Change-Id: I7230e4e0be9417be8cedeb8aaab35186fcbf00a5
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
When SshSupport.runSshCommand fails because the executed external ssh
command failed, throw a CommandFailedException.
If discovery of the LFS server fails due to a failure of the
git-lfs-authenticate command, chain the CommandFailedException to the
LfsConfigInvalidException in order to allow root cause analysis in the
application using it.
Change-Id: I2f9ea2be11274549f6d845937164c248b3d840b2
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Add support for the "shallow" and "deepen" parameters in the "fetch"
command in the fetch-pack/upload-pack protocol v2. Advertise support for
this in the capability advertisement.
TODO: implement deepen-relative, deepen-since, deepen-not
Change-Id: I7ffd80d6c38872f9d713ac7d6e0412106b3766d7
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Jonathan Nieder <jrn@google.com>
On a local non-NFS filesystem the .git/config file will be orphaned if
it is replaced by a new process while the current process is reading the
old file. The current process successfully continues to read the
orphaned file until it closes the file handle.
Since NFS servers do not keep track of open files, instead of orphaning
the old .git/config file, such a replacement on an NFS filesystem will
instead cause the old file to be garbage collected (deleted). A stale
file handle exception will be raised on NFS clients if the file is
garbage collected (deleted) on the server while it is being read. Since
we no longer have access to the old file in these cases, the previous
code would just fail. However, in these cases, reopening the file and
rereading it will succeed (since it will open the new replacement file).
Since retrying the read is a viable strategy to deal with stale file
handles on the .git/config file, implement such a strategy.
Since it is possible that the .git/config file could be replaced again
while rereading it, loop on stale file handle exceptions, up to 5 extra
times, trying to read the .git/config file again, until we either read
the new file, or find that the file no longer exists. The limit of 5 is
arbitrary, and provides a safe upper bounds to prevent infinite loops
consuming resources in a potential unforeseen persistent error
condition.
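A simplified sketch of the retry loop; how a stale handle is detected
here is purely illustrative, and the method name is invented:

  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.NoSuchFileException;
  import java.nio.file.Path;
  import java.util.Locale;

  static byte[] readConfigWithRetries(Path configFile) throws IOException {
    final int maxStaleRetries = 5;
    for (int attempt = 0; ; attempt++) {
      try {
        return Files.readAllBytes(configFile);
      } catch (NoSuchFileException gone) {
        return null; // replaced and then deleted: no longer exists
      } catch (IOException e) {
        String msg = e.getMessage();
        boolean stale = msg != null
            && msg.toLowerCase(Locale.ROOT).contains("stale file handle");
        if (!stale || attempt >= maxStaleRetries)
          throw e;
        // otherwise loop: reopening will read the replacement file
      }
    }
  }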
Change-Id: I6901157b9dfdbd3013360ebe3eb40af147a8c626
Signed-off-by: Nasser Grainawi <nasser@codeaurora.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Teach UploadPack to advertise the filter capability and support a
"filter" line in the request, accepting blob sizes only, if the
configuration variable "uploadpack.allowfilter" is true. This feature is
currently in the "master" branch of Git, and as of the time of writing,
this feature is to be released in Git 2.17.
This is incomplete in that the filter-by-sparse-specification feature
also supported by Git is not included in this patch.
If a JGit server were to be patched with this commit, and a repository
on that server configured with RequestPolicy.ANY or
RequestPolicy.REACHABLE_COMMIT_TIP, a Git client built from the "master"
branch would be able to perform a partial clone.
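For example, the option could be enabled on a server-side repository
like this (the repository path is an example):

  import java.io.File;
  import java.io.IOException;
  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.lib.StoredConfig;
  import org.eclipse.jgit.storage.file.FileRepositoryBuilder;

  static void enableFilter() throws IOException {
    try (Repository repo = new FileRepositoryBuilder()
        .setGitDir(new File("/srv/git/project.git")).build()) {
      StoredConfig cfg = repo.getConfig();
      cfg.setBoolean("uploadpack", null, "allowfilter", true);
      cfg.save();
    }
  }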
Change-Id: If72b4b422c06ab432137e9e5272d353b14b73259
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Respect merge=lfs and diff=lfs attributes where required to replace (in
memory) the content of LFS pointers with the actual blob content from
the LFS storage (and vice versa when staging/merging).
Does not implement general support for merge/diff attributes for any
other use case apart from LFS.
Change-Id: Ibad8875de1e0bee8fe3a1dffb1add93111534cae
Signed-off-by: Markus Duft <markus.duft@ssi-schaefer.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
If JGit built-in LFS support is enabled for the current repository (or
user/system), any existing pre-push hook will cause an exception for the
time being, as only a single pre-push hook is supported.
Thus either native pre-push hooks OR JGit built-in LFS support may be
enabled currently, but not both.
Change-Id: Ie7d2b90e26e948d9cca3d05a7a19489488c75895
Signed-off-by: Markus Duft <markus.duft@ssi-schaefer.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* Fix "can not" -> "cannot" in two messages
* Re-word "Cannot mkdir" to "Cannot create directory"
Change-Id: Ide0cec55eeeebd23bccc136257c80f47638ba858
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
* Fix "can not" -> "cannot" in two messages
* Re-word "Cannot mkdir" to "Cannot create directory"
Change-Id: Ide0cec55eeeebd23bccc136257c80f47638ba858
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
* Section and key names in git config files are case-insensitive.
* If an include directive is invalid, include the line in the
exception message.
* If inclusion of the included file fails, put the file name into
the exception message so that the user knows in which file the
problem is.
Change-Id: If920943af7ff93f5321b3d315dfec5222091256c
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
The reason for the change is LFS: when using a lot of LFS files,
checkout can take quite some time on larger repositories. To avoid
"hanging" UI, provide progress reporting.
Also implement (partial) progress reporting for cherry-pick, reset,
revert which are using checkout internally.
The feature is also useful without LFS, so it is independent of it.
Change-Id: I021e764241f3c107eaf2771f6b5785245b146b42
Signed-off-by: Markus Duft <markus.duft@ssi-schaefer.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: Iaaefc2cbafbf083d6ab158b1c378ec69cc76d282
Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When a submodule is moved, the "name" field remains the same, while
the "path" field changes. Git uses the "name" field in .git/config
when a submodule is initialized, so this patch makes JGit do so too.
Change-Id: I48d8e89f706447b860c0162822a8e68170aae42b
Signed-off-by: David Turner <dturner@twosigma.com>
Previously, Config was using the same method for both escaping and
parsing subsection names and config values. The goal was presumably code
savings, but unfortunately, these two pieces of the git config format
are simply different.
In git v2.15.1, Documentation/config.txt says the following about
subsection names:
"Subsection names are case sensitive and can contain any characters
except newline (doublequote `"` and backslash can be included by
escaping them as `\"` and `\\`, respectively). Section headers cannot
span multiple lines. Variables may belong directly to a section or to
a given subsection."
And, later in the same documentation section, about values:
"A line that defines a value can be continued to the next line by
ending it with a `\`; the backquote and the end-of-line are stripped.
Leading whitespaces after 'name =', the remainder of the line after
the first comment character '#' or ';', and trailing whitespaces of
the line are discarded unless they are enclosed in double quotes.
Internal whitespaces within the value are retained verbatim.
Inside double quotes, double quote `"` and backslash `\` characters
must be escaped: use `\"` for `"` and `\\` for `\`.
The following escape sequences (beside `\"` and `\\`) are recognized:
`\n` for newline character (NL), `\t` for horizontal tabulation (HT,
TAB) and `\b` for backspace (BS). Other char escape sequences
(including octal escape sequences) are invalid."
The most important differences are that subsection names have a limited
set of supported escape sequences, and do not support newlines at all,
either escaped or unescaped. Arguably, it would be easy to support
escaped newlines, but C git simply does not:
$ git config -f foo.config $'foo.bar\nbaz.quux' value
error: invalid key (newline): foo.bar
baz.quux
I468106ac was an attempt to fix one bug in escapeValue, around leading
whitespace, without having to rewrite the whole escaping/parsing code.
Unfortunately, because escapeValue was used for escaping subsection
names as well, this made it possible to write invalid config files, any
time Config#toText is called with a subsection name with trailing
whitespace, like {foo }.
Rather than pile hacks on top of hacks, fix it for real by largely
rewriting the escaping and parsing code.
In addition to fixing escape sequences, fix (and write tests for) a few
more issues in the old implementation:
* Now that we can properly parse it, always emit newlines as "\n" from
escapeValue, rather than the weird (but still supported) syntax with a
non-quoted trailing literal "\n\" before the newline. In addition to
producing more readable output and matching the behavior of C git,
this makes the escaping code much simpler.
* Disallow '\0' entirely within both subsection names and values, since
due to Unix command line argument conventions it is impossible to pass
such values to "git config".
* Properly preserve intra-value whitespace when parsing, rather than
collapsing it all to a single space.
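A small round-trip sketch of the intended behavior (the values are
examples):

  import org.eclipse.jgit.errors.ConfigInvalidException;
  import org.eclipse.jgit.lib.Config;

  static void roundTrip() throws ConfigInvalidException {
    Config c = new Config();
    // Subsection name with trailing whitespace used to produce
    // invalid output.
    c.setString("remote", "foo ", "url", "https://example.com/r.git");
    c.setString("core", null, "greeting", "hello   world\nsecond line");
    Config d = new Config();
    d.fromText(c.toText()); // newline emitted as \n, whitespace kept
    assert "hello   world\nsecond line"
        .equals(d.getString("core", null, "greeting"));
    assert "https://example.com/r.git"
        .equals(d.getString("remote", "foo ", "url"));
  }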
Change-Id: I304f626b9d0ad1592c4e4e449a11b136c0f8b3e3
If some process executed by FS#readPipe lived for a while after
closing stderr, FS#GobblerThread#run failed with an
IllegalThreadStateException when accessing p.exitValue() for a process
that was still alive.
Add Process#waitFor calls to wait for process completion.
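The essence of the fix, simplified:

  // Wait for termination before querying the exit value, so that
  // Process#exitValue() cannot throw IllegalThreadStateException.
  static int safeExitValue(Process p) throws InterruptedException {
    p.waitFor();
    return p.exitValue();
  }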
Bug: 528335
Change-Id: I87e0b6f9ad0b995dbce46ddfb877e33eaf3ae5a6
Signed-off-by: Dmitry Pavlenko <pavlenko@tmatesoft.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
JSch unconditionally overrides the user name given in the connection
URI by the one found in ~/.ssh/config (if that does specify one for
the used host). If the SSH config file has a different user name,
we'll end up using the wrong name, which typically results in an
authentication failure or in Eclipse/EGit asking for a password for
the wrong user.
Unfortunately there is no way to prevent or circumvent this JSch
behavior up front; it occurs already in the Session constructor at
com.jcraft.jsch.Session() and the Session.applyConfig() method. And
while there is a Session.setUserName() that would enable us to correct
this, the latter method has package visibility only.
So resort to reflection to invoke that setUserName() method to ensure
that JSch uses the user name from the URI, if there is one.
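Sketch of the reflective workaround; the real code adds error handling
and only overrides the name when the URI actually carries one:

  import java.lang.reflect.Method;
  import com.jcraft.jsch.Session;

  static void overrideUserName(Session session, String userFromUri)
      throws Exception {
    Method setUserName = Session.class.getDeclaredMethod("setUserName",
        String.class);
    setUserName.setAccessible(true); // package-private in JSch
    setUserName.invoke(session, userFromUri);
  }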
Bug: 526778
Change-Id: Ia327099b5210a037380b2750a7fd76ff25c41a5a
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Allow creating symbolic references with link, and deleting them or
switching to ObjectId with unlink. How this happens is up to the
individual RefDatabase.
The default implementation detaches RefUpdate if a symbolic reference
is involved, supporting these command instances on RefDirectory.
Unfortunately the packed-refs file does not support storing symrefs,
so atomic transactions involving more than one symref command are
failed early.
Updating InMemoryRepository is deferred until reftable lands, as I
plan to switch InMemoryRepository to use reftable for its internal
storage representation.
Change-Id: Ibcae068b17a2fc6d958f767f402a570ad88d9151
Signed-off-by: Minh Thai <mthai@google.com>
Signed-off-by: Terry Parker <tparker@google.com>
The ReflogWriter constructor just took a Repository and called
getDirectory() on it to figure out the reflog dirs, but not all
Repository instances use this storage format for reflogs, so it's
incorrect to attempt to use ReflogWriter when there is not a
RefDirectory directly involved. In practice, ReflogWriter was mostly
only used by the implementation of RefDirectory, so enforcing this is
mostly just shuffling around calls in the same internal package.
The one exception is StashDropCommand, which writes to a reflog lock
file directly. This was a reasonable implementation decision, because
there is no general reflog interface in JGit beyond using
(Batch)RefUpdate to write new entries to the reflog. So to implement
"git stash drop <N>", which removes an arbitrary element from the
reflog, it's fair to fall back to the RefDirectory implementation.
Creating and using a more general interface is well beyond the scope of
this change.
That said, the old behavior of writing out the reflog file even if
that's not the reflog format used by the given Repository is clearly
wrong. Fail fast in this case instead.
Change-Id: I9bd4b047bc3e28a5607fd346ec2400dde9151730
When a https connection could not be established because the SSL
handshake was unsuccessful, TransportHttp would unconditionally
throw a TransportException.
Other https clients such as web browsers and some SVN clients handle
this more gracefully: if there's a problem with the server certificate,
they inform the user and give them the possibility to connect to the
server all the same.
In git, this would correspond to dynamically setting http.sslVerify
to false for the server.
Implement this using the CredentialsProvider to inform and ask the
user. We offer three choices:
1. skip SSL verification for the current git operation, or
2. skip SSL verification for the server always from now on for
requests originating from the current repository, or
3. always skip SSL verification for the server from now on.
For (1), we just suppress SSL verification for the current instance of
TransportHttp.
For (2), we store a http.<uri>.sslVerify = false setting for the
original URI in the repo config.
For (3), we store the http.<uri>.sslVerify setting in the git user
config.
Adapt the SmartClientSmartServerSslTest such that it uses this
mechanism instead of setting http.sslVerify up front.
Improve SimpleHttpServer so that it can also be set up with HTTPS
support, in anticipation of an EGit SWTbot UI test verifying that
cloning via HTTPS from a server that has a certificate that doesn't
validate pops up the correct dialog, and that cloning subsequently
proceeds successfully if the user decides to skip SSL verification.
Bug: 374703
Change-Id: Ie1abada9a3d389ad4d8d52c2d5265d2764e3fb0e
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Git has a rather elaborate mechanism to specify HTTP configuration
options per URL, based on pattern matching the URL against "http"
subsection names.[1] The URLs used for this matching are always the
original URLs; redirected URLs do not participate.
* Scheme and host must match exactly case-insensitively.
* An optional user name must match exactly.
* Ports must match exactly after default ports have been filled in.
* The path of a subsection, if any, must match a segment prefix of
the path of the URL.
* Matches with user name take precedence over equal-length path
matches without, but longer path matches are preferred over
shorter matches with user name.
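For illustration (the URLs below are examples, not part of this
change): for https://example.com/path/repo.git requested as user "bob",
both entries in the following configuration match with equal path
length, so the entry carrying the user name applies and sslVerify is
false; a longer path match without a user name would still win over
both.

  import org.eclipse.jgit.errors.ConfigInvalidException;
  import org.eclipse.jgit.lib.Config;

  static Config sampleHttpConfig() throws ConfigInvalidException {
    Config cfg = new Config();
    cfg.fromText(""
        + "[http \"https://example.com/path\"]\n"
        + "\tsslVerify = true\n"
        + "[http \"https://bob@example.com/path\"]\n"
        + "\tsslVerify = false\n");
    return cfg;
  }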
Implement this for JGit. Factor out the HttpConfig from TransportHttp
and implement the matching and override mechanism.
The set of supported settings is still the same; JGit currently
supports only followRedirects, postBuffer, and sslVerify, plus the
JGit-specific maxRedirects key.
Add tests for path normalization and prefix matching only on segment
separators, and use the new mechanism in SmartClientSmartServerSslTest
to disable sslVerify selectively for only the test server URLs.
Compare also bug 374703 and bug 465492. With this commit it would be
possible to set sslVerify to false for only the git server using a
self-signed certificate instead of having to switch it off globally
via http.sslVerify.
[1] https://git-scm.com/docs/git-config
Change-Id: I42a3c2399cb937cd7884116a2a32fcaa7a418fcb
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
The addition of "tooManyRedirects" in commit 7ac1bfc ("Do
authentication re-tries on HTTP POST") was an error I didn't
catch after rebasing that change. That message had been renamed
in the earlier commit e17bfc9 ("Add support to follow HTTP
redirects") to "redirectLimitExceeded".
Also make sure we always use the TransportException(URIish, ...)
constructor; it'll prefix the message given with the sanitized URI.
Change messages to remove the explicit mention of that URI inside the
message. Adapt tests that check the expected exception message text.
For the info logging of redirects, remove a potentially present
password component in the URI to avoid leaking it into the log.
Change-Id: I517112404757a9a947e92aaace743c6541dce6aa
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
There is at least one git server out there (GOGS) that does
not require authentication on the initial GET for
info/refs?service=git-receive-pack but that _does_ require
authentication for the subsequent POST to actually do the push.
This occurs on GOGS with public repositories; for private
repositories it wants authentication up front.
Handle this behavior by adding 401 handling to our POST request.
Note that this is suboptimal; we'll re-send the push data at
least twice if an authentication failure on POST occurs. It
would be much better if the server required authentication
up-front in the GET request.
Added authentication unit tests (using BASIC auth) to the
SmartClientSmartServerTest:
- clone with authentication
- clone with authentication but lacking CredentialsProvider
- clone with authentication and wrong password
- clone with authentication after redirect
- clone with authentication only on POST, but not on GET
Also tested manually in the wild using repositories at try.gogs.io.
That server offers only BASIC auth, so the other paths
(DIGEST, NEGOTIATE, fall back from DIGEST to BASIC) are untested
and I have no way to test them.
* public repository: GET unauthenticated, POST authenticated
Also tested after clearing the credentials and then entering a
wrong password: correctly asks three times during the HTTP
POST for user name and password, then gives up.
* private repository: authentication already on GET; then gets
applied correctly initially to the POST request, which succeeds.
Also fix the authentication to use the credentials for the redirected
URI if redirects had occurred. We must not present the credentials
for the original URI in that case. Consider a malicious redirect A->B:
this would allow server B to harvest the user credentials for server
A. The unit test for authentication after a redirect also tests for
this.
Bug: 513043
Change-Id: I97ee5058569efa1545a6c6f6edfd2b357c40592a
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
ReftableReader provides sequential scanning support over all
references, a range of references within a subtree (such as
"refs/heads/"), and lookup of a single reference. Reads can be
accelerated by an index block, if it was created by the writer.
The BlockSource interface provides an abstraction to read from the
reftable's backing storage, supporting a future commit to connect
to JGit DFS and the DfsBlockCache.
Change-Id: Ib0dc5fa937d0c735f2a9ff4439d55c457fea7aa8
This is a simple writer to create reftable formatted files. Follow-up
commits will add support for reading from reftable, debugging
utilities, and tests.
Change-Id: I3d520c3515c580144490b0b45433ea175a3e6e11
git-core follows HTTP redirects so JGit should also provide this.
Implement config setting http.followRedirects with possible values
"false" (= never), "true" (= always), and "initial" (only on GET, but
not on POST).[1]
We must do our own redirect handling and cannot rely on the support
that the underlying real connection may offer. At least the JDK's
HttpURLConnection has two features that get in the way:
* it does not allow cross-protocol redirects and thus fails on
http->https redirects (for instance, on Github).
* it translates a redirect after a POST to a GET unless the system
property "http.strictPostRedirect" is set to true. We don't want
to manipulate that system setting nor require it.
Additionally, git has its own rules about what redirects it accepts;[2]
for instance, it does not allow a redirect that adds query arguments.
We handle response codes 301, 302, 303, and 307 as per RFC 2616.[3]
On POST we do not handle 303, and we follow redirects only if
http.followRedirects == true.
Redirects are followed only a certain number of times. There are two
ways to control that limit:
* by default, the limit is given by the http.maxRedirects system
property that is also used by the JDK. If the system property is
not set, the default is 5. (This is much lower than the JDK default
of 20, but I don't see the value of following so many redirects.)
* this can be overwritten by a http.maxRedirects git config setting.
The JGit http.* git config settings are currently all global; JGit has
no support yet for URI-specific settings "http.<pattern>.name". Adding
support for that is well beyond the scope of this change.
Like git-core, we log every redirect attempt (LOG.info) so that users
may know about the redirection having occurred.
Extends the test framework to configure an AppServer with HTTPS support
so that we can test cloning via HTTPS and redirections involving HTTPS.
[1] https://git-scm.com/docs/git-config
[2] 6628eb41db
[3] https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
CQ: 13987
Bug: 465167
Change-Id: I86518cb76842f7d326b51f8715e3bbf8ada89859
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
JGit already had some fsck-like classes, such as ObjectChecker, which
can check an individual object.
Add a read-only FsckPackParser which parses all objects within a pack
file and checks them with ObjectChecker. It also checks the pack index
file against the object information from the pack parser.
Change-Id: Ifd8e0d28eb68ff0b8edd2b51b2fa3a50a544c855
Signed-off-by: Zhen Chen <czhen@google.com>
Some downstream code checks whether a ReceiveCommand is a create or a
delete based on the type field. Other downstream code (in particular a
good chunk of Gerrit code I wrote) checks the same thing by comparing
oldId/newId to zeroId. Unfortunately, there were no strict checks in the
constructor to ensure that zeroId is only set for oldId/newId if the
type argument corresponds, so a caller that passed mismatched IDs and
types would observe completely undefined behavior as a result. This is
and always has been a misuse of the API; throw IllegalArgumentException
so the caller knows that it is a misuse.
Similarly, throw from the constructor if oldId/newId are null. The
non-nullness requirement was already documented. Fix RefDirectoryTest to
not do the wrong thing.
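Sketch of the effect of the new checks (the object id and ref name are
examples):

  import org.eclipse.jgit.lib.ObjectId;
  import org.eclipse.jgit.transport.ReceiveCommand;

  static void demoStrictChecks() {
    ObjectId id = ObjectId
        .fromString("0123456789012345678901234567890123456789");
    // Valid: a CREATE has oldId == zeroId and a non-zero newId.
    new ReceiveCommand(ObjectId.zeroId(), id, "refs/heads/master",
        ReceiveCommand.Type.CREATE);
    try {
      // Mismatch: type says CREATE but oldId is not zeroId.
      new ReceiveCommand(id, id, "refs/heads/master",
          ReceiveCommand.Type.CREATE);
    } catch (IllegalArgumentException expected) {
      // thrown since this change; previously the behavior was undefined
    }
  }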
Change-Id: Ie2d0bfed8a2d89e807a41925d548f0f0ce243ecf
When running an automatic GC on a FileRepository, when the caller
passes a NullProgressMonitor, run the GC in a background thread. Use a
thread pool of size 1 to limit the number of background threads spawned
for background gc in the same application. In the next minor release we
can make the thread pool configurable.
In some cases, the auto GC limit is lower than the true number of
unreachable loose objects, so auto GC will run after every (e.g.) fetch
operation. This leads to the appearance of poor fetch performance.
Since these GCs will never make progress (until either the objects
become referenced, or the two week timeout expires), blocking on them
simply reduces throughput.
In the event that an auto GC would make progress, it's still OK if it
runs in the background. The progress will still happen.
This matches the behavior of regular git.
Git (and now JGit) uses the lock file for gc.log to prevent simultaneous
runs of background gc. Further, it writes errors to gc.log, and won't
run background gc if that file is present and recent. If gc.log is too
old (according to the config gc.logexpiry), it will be ignored.
Change-Id: I3870cadb4a0a6763feff252e6eaef99f4aa8d0df
Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Android wants them to work, and we're only interested in them for bare
repos, so add them just for that.
Make sure to use symlinks instead of just using the copyfile
implementation. Some scripts look up where they're actually located in
order to find related files, so they need the link back to their
project.
Change-Id: I929b69b2505f03036f69e25a55daf93842871f30
Signed-off-by: Dan Willemsen <dwillemsen@google.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Jeff Gaston <jeffrygaston@google.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
In the repo manifest documentation [1] the fetch attribute is marked
as "#REQUIRED".
If the fetch attribute is not specified, this would previously result
in a NullPointerException. Throw a SAXException instead.
[1] https://gerrit.googlesource.com/git-repo/+/master/docs/manifest-format.txt
Change-Id: Ib8ed8cee6074fe6bf8f9ac6fc7a1664a547d2d49
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
All that's really required to run a merge operation is a single
ObjectInserter, from which we can construct a RevWalk, plus a Config
that declares a diff algorithm. Provide some factory methods that don't
take Repository.
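A possible usage sketch, assuming one of the new factories is
MergeStrategy#newMerger(ObjectInserter, Config):

  import java.io.IOException;
  import org.eclipse.jgit.lib.Config;
  import org.eclipse.jgit.lib.ObjectId;
  import org.eclipse.jgit.lib.ObjectInserter;
  import org.eclipse.jgit.merge.MergeStrategy;
  import org.eclipse.jgit.merge.Merger;

  static ObjectId mergeWithoutRepository(ObjectInserter inserter,
      Config config, ObjectId ours, ObjectId theirs) throws IOException {
    Merger merger = MergeStrategy.RECURSIVE.newMerger(inserter, config);
    if (!merger.merge(ours, theirs))
      return null; // conflicts
    return merger.getResultTreeId();
  }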
Change-Id: Ib884dce2528424b5bcbbbbfc043baec1886b9bbd