Hard-coding ~/.gnupg for the GPG directory doesn't work on Windows,
where GnuPG uses %APPDATA%\gnupg by default. Make the determination
of the directory platform-dependent.
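For illustration only, a minimal sketch of such a platform check; the
helper name and the use of os.name/APPDATA here are assumptions, not
the exact code of this change:

  import java.io.File;

  class GpgHomeSketch {
    /** Hypothetical helper: default GnuPG home for the current platform. */
    static File defaultGpgHome() {
      if (System.getProperty("os.name").startsWith("Windows")) {
        String appData = System.getenv("APPDATA");
        if (appData != null) {
          return new File(appData, "gnupg");
        }
      }
      return new File(System.getProperty("user.home"), ".gnupg");
    }
  }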
Bug: 544797
Change-Id: Id4bfd39a981ef7c5b39fbde46fce9a7524418709
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Instead of a new "unexpectedNlinkValue" message use the already
existing "failedAtomicFileCreation". Remove a stray double quote
from the latter.
Change-Id: I1ba5e9ea48d3f7615354b2ace2575883070b3206
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
With native git, .git/rebase-merge/rewritten actually exists in two
different cases:
* as a file in git rebase --merge recording OIDs for copying notes
* as a directory in git rebase --preserve-merges
Add a comment, and check for isDirectory() instead of exists().
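A minimal sketch of the changed check; the class and parameter names
are illustrative only:

  import java.io.File;

  class RewrittenCheckSketch {
    // "rewritten" is a plain file in rebase --merge (note copying) but a
    // directory in rebase --preserve-merges, so exists() alone is ambiguous.
    static boolean isPreserveMergesRebase(File rebaseStateDir) {
      return new File(rebaseStateDir, "rewritten").isDirectory();
    }
  }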
Bug: 511487
Change-Id: I6a3317b4234d4f41c41b3004cdc7ea0abf2c6223
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
ONTO_NAME must be "onto_name", not "onto-name".
For native git, --preserve-merges is an interactive mode. Create the
INTERACTIVE marker file, since otherwise a native git rebase --continue
from git versions before 2.19.0 falls back into rebase --merge mode;
only since 2.19.0 does native git also look at the REWRITTEN directory
to make the distinction.[1]
This allows a JGit interactive rebase to be continued via native git
rebase --continue.
[1] https://github.com/git/git/commit/6d98d0c0
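For illustration, a minimal sketch of writing such a marker file
(names are illustrative, not the actual rebase state code):

  import java.io.File;
  import java.io.IOException;

  class InteractiveMarkerSketch {
    static void markInteractive(File rebaseStateDir) throws IOException {
      // An empty "interactive" file in .git/rebase-merge tells native git
      // that this rebase is an interactive one.
      File marker = new File(rebaseStateDir, "interactive");
      if (!marker.createNewFile() && !marker.isFile()) {
        throw new IOException("Cannot create " + marker);
      }
    }
  }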
Bug: 511487
Change-Id: I13850e0fd96ac77d03fbb581c8790d76648dbbc6
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Aborting a rebase used ORIG_HEAD to reset. Strictly speaking this is
not correct, since other commands run during the rebase (for instance,
when the rebase stopped on a conflict) might have changed ORIG_HEAD.
Prefer the OID recorded in the orig-head file, falling back to the
older "head" file if "orig-head" doesn't exist, and use ORIG_HEAD only
if neither exists.
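A sketch of that resolution order (a simplified illustration, not the
actual abort code):

  import java.io.File;
  import java.io.IOException;
  import java.nio.charset.StandardCharsets;
  import java.nio.file.Files;
  import org.eclipse.jgit.lib.ObjectId;
  import org.eclipse.jgit.lib.Repository;

  class AbortTargetSketch {
    static ObjectId findAbortTarget(Repository repo, File rebaseStateDir)
        throws IOException {
      for (String name : new String[] { "orig-head", "head" }) {
        File f = new File(rebaseStateDir, name);
        if (f.isFile()) {
          String oid = new String(Files.readAllBytes(f.toPath()),
              StandardCharsets.UTF_8).trim();
          return ObjectId.fromString(oid);
        }
      }
      return repo.resolve("ORIG_HEAD"); // last resort
    }
  }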
Bug: 511487
Change-Id: Ifa99221bb33e4e4754377f9b8f46e76c8936e072
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
With text=auto or core.autocrlf=true, git does not normalize upon
check-in if the file in the index already contains CR/LFs. The
documentation says: "When text is set to "auto", the path is
marked for automatic end-of-line conversion. If Git decides that
the content is text, its line endings are converted to LF on
checkin. When the file has been committed with CRLF, no conversion
is done."[1]
Implement the last bit as in canonical git: check the blob in the
index for CR/LFs. For very large files, we check only the first 8000
bytes, like RawText.isBinary() and AutoLFInputStream do.
In Auto(CR)LFInputStream, ensure that the buffer is filled as much as
possible for the isBinary() check.
Regarding these content checks, there are a number of inconsistencies:
* Canonical git considers files containing lone CRs as binary.
* RawText checks the first 8000 bytes.
* Auto(CR)LFInputStream checks the first 8096 (not 8192!) bytes.
None of these are changed with this commit. It appears that canonical
git will check the whole blob, not just the first 8k bytes. Also
note: the check for CR/LF text won't work with LFS (neither in JGit
nor in git) since the blob data is not run through the smudge filter.
Cf. [2].
Two tests in AddCommandTest actually tested that normalization was
done even if the file was already committed with CR/LF. These tests
had to be adapted. I find the git documentation unclear about the
case where core.autocrlf=input, but from [3] it looks as if this
non-normalization also applies in this case.
Add new tests in CommitCommandTest testing this for the case where
the index entry is for a merge conflict. In this case, canonical git
uses the "ours" version.[4] Do the same.
[1] https://git-scm.com/docs/gitattributes
[2] https://github.com/git/git/blob/3434569fc/convert.c#L225
[3] https://github.com/git/git/blob/3434569fc/convert.c#L529
[4] https://github.com/git/git/blob/f2b6aa98b/read-cache.c#L3281
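A sketch of the kind of content check this implies; the 8000-byte
limit follows RawText.isBinary(), everything else (names, exact
semantics) is illustrative:

  class CrLfCheckSketch {
    // True if the first 8000 bytes of the indexed blob contain a CR/LF
    // pair; in that case check-in normalization is skipped.
    static boolean hasCrLfInFirstBlock(byte[] blob) {
      int n = Math.min(blob.length, 8000);
      for (int i = 1; i < n; i++) {
        if (blob[i - 1] == '\r' && blob[i] == '\n') {
          return true;
        }
      }
      return false;
    }
  }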
Bug: 470643
Change-Id: Ie7310539fbe6c737d78b1dcc29e34735d4616b88
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Since 2011-02-10 (i.e., git 1.7.6)[1] native git uses "orig-head" for
REBASE_HEAD. JGit was still using "head". Currently native git has a
legacy fall-back for reading this, but for how long? Let's write to
both. Note that JGit never reads this file.
[1] https://github.com/git/git/commit/84df4560
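For illustration, a sketch of writing both files (paths and helper
name are illustrative):

  import java.io.File;
  import java.io.IOException;
  import java.nio.charset.StandardCharsets;
  import java.nio.file.Files;

  class OrigHeadSketch {
    static void writeRebaseOrigHead(File rebaseStateDir, String oid)
        throws IOException {
      byte[] content = (oid + "\n").getBytes(StandardCharsets.UTF_8);
      // Name used by native git since 1.7.6 ...
      Files.write(new File(rebaseStateDir, "orig-head").toPath(), content);
      // ... plus the legacy name for older readers.
      Files.write(new File(rebaseStateDir, "head").toPath(), content);
    }
  }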
Bug: 511487
Change-Id: Id3742bf9bbc0001d850e801b26cc8880e646abfc
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
The non-externalized warning message says there is a "possible SHA-1
collision" but then the Sha1CollisionException is always thrown.
Replace the message with the existing externalized string that does
not say "possible".
Change-Id: I9773ec76b416c356e234a658fb119f98d33eac83
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
DfsBlockCache.Ref might get cleared out if the JVM is running out of
memory. As a result, the index is not persisted for the request and
will be reloaded unnecessarily.
Change-Id: I3b57ad5e6673f77f2dc66177a649ac412a97fe20
Signed-off-by: Minh Thai <mthai@google.com>
Deprecate the method in favor of setEncoding(Charset).
Update the only caller in the code base that was still using
the deprecated variant.
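The usual deprecation pattern for such a change looks like this (a
generic sketch, not the exact class touched here):

  import java.nio.charset.Charset;

  class EncodingHolderSketch {
    private Charset encoding;

    /** @deprecated use {@link #setEncoding(Charset)} instead. */
    @Deprecated
    public void setEncoding(String name) {
      setEncoding(Charset.forName(name));
    }

    public void setEncoding(Charset charset) {
      this.encoding = charset;
    }
  }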
Change-Id: I6357f2d0c727007013c72e9d5b7c72a3f5f3f2b1
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Display extra logging, including the exception with the associated
stacktrace, whenever a packFile can't be read and is thus removed
from the packlist.
Change-Id: I97a4e31dc427bfcc0baae438dcbe2dcd4704b824
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
The bug caused the pack to be 12 bytes short when the cache was cold.
Also added a test for the copyPackAsIs method.
Change-Id: Idf8fb0e50d1215245d4b032e2e00df4b218c115f
Signed-off-by: Minh Thai <mthai@google.com>
Android for instance forbids hard linking via a SELinux
policy. If we can't hard link, the NFS work-around for
atomic file creation cannot work at all. In this case,
fall back to not using the hard-linking mechanism.
Android throws an AccessDeniedException, so we catch that.
The javadoc on Files.createLink() indicates that another
possibility might be a SecurityException, so catch that,
too.
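A sketch of the fallback (simplified; the real code lives in the NFS
work-around for atomic file creation):

  import java.io.IOException;
  import java.nio.file.AccessDeniedException;
  import java.nio.file.Files;
  import java.nio.file.Path;

  class HardLinkFallbackSketch {
    static boolean tryHardLink(Path link, Path existing) throws IOException {
      try {
        Files.createLink(link, existing);
        return true;
      } catch (AccessDeniedException | SecurityException e) {
        // Hard links forbidden (e.g. by SELinux on Android):
        // signal the caller to skip the hard-link mechanism.
        return false;
      }
    }
  }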
Bug: 543956
Change-Id: I551b7a45f7b2fbbd8cf94f0b7233dbd8a200520e
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
This method tried to iterate over spurious files which may exist in the
.git/refs folder, e.g. on Mac a .DS_Store may have been created there by
inspecting the folder with the Finder application. This led to a
NotDirectoryException when deleteEmptyRefsFolders tried to create an
iterator for such a file entry. Skip files contained in the refs folder
to ensure the method only tries to iterate contained folders but not
files.
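A minimal sketch of iterating only the sub-directories (illustrative;
the actual method also deletes empty folders):

  import java.io.IOException;
  import java.nio.file.DirectoryStream;
  import java.nio.file.Files;
  import java.nio.file.Path;

  class RefsFoldersSketch {
    static void forEachRefDirectory(Path refsDir) throws IOException {
      // Only descend into directories; spurious files like .DS_Store
      // are skipped instead of causing a NotDirectoryException.
      try (DirectoryStream<Path> dirs =
          Files.newDirectoryStream(refsDir, Files::isDirectory)) {
        for (Path dir : dirs) {
          // ... process the folder ...
        }
      }
    }
  }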
Change-Id: I5f31e733072a35db1e93908a9c69a8891ae5c206
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Single-branch clone should be able to clone a single tag. Enhance
CloneCommand to also accept full refs of tags in setBranchesToClone().
Make sure we also include fetch refspecs for tags in the fetch
command. This mimics the behavior of native git's single-branch clone:
git clone --branch <tag> --single-branch <URI>
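Illustrative API usage after this change (the URI, directory, and tag
name are placeholders):

  import java.io.File;
  import java.util.Collections;
  import org.eclipse.jgit.api.Git;
  import org.eclipse.jgit.api.errors.GitAPIException;

  class CloneTagExample {
    static void cloneSingleTag() throws GitAPIException {
      try (Git git = Git.cloneRepository()
          .setURI("https://example.com/repo.git") // placeholder
          .setDirectory(new File("repo"))         // placeholder
          .setBranchesToClone(
              Collections.singletonList("refs/tags/v1.0"))
          .setBranch("refs/tags/v1.0")
          .call()) {
        // work with the cloned repository
      }
    }
  }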
Bug: 542611
Change-Id: I285cf043751d9b0ba71258ee8214c0e5d1191428
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Several problems:
* The command didn't specify whether it expected short or full names.
* For the new name, it expected a short name, but then got confused
if tags or both local and remote branches with the same name existed.
* For the old name, it accepted either a short or a full name, but
again got confused if a short name was given and a tag with the
same name existed.
With such an interface, one cannot use Repository.findRef() to
reliably find the branch to rename. Use exactRef() for the new
name, since by the time the Ref is needed its full name is known.
For determining the old Ref from the name, do the resolution
explicitly: first try exactRef (assuming the old name is a full
name); if that doesn't find anything, try "refs/heads/<old>" and
"refs/remotes/<old>" explicitly. Throw an exception if the name
is ambiguous, or if exactRef returned something that is not a
branch (refs/tags/... or also refs/notes/...).
Document in the javadoc what kind of names are valid, and add tests.
A user can still shoot himself in the foot if he chooses exceptionally
stupid branch names. For instance, it is still possible to rename a
branch to "refs/heads/foo" (full name "refs/heads/refs/heads/foo"),
but it cannot be renamed further using the new short name if a branch
with the full name "refs/heads/foo" exists. Similar edge cases exist
for other dumb branch names, like a branch with the short name
"refs/tags/foo". Renaming using the full name is always possible.
Bug: 542446
Change-Id: I34ac91c80c0a00c79a384d16ce1e727c550d54e9
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
The new API is intended for UIs to check whether signing will be
possible or would fail.
Bug: 543579
Change-Id: I6ce1fd4210e46d49dcdf420c99d08c93e022136c
Signed-off-by: Gunnar Wagenknecht <gunnar@wagenknecht.org>
BundleFetchConnection.readLine() must abort on EOF, otherwise
it gets stuck in an endless loop.
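A sketch of an EOF-aware line reader, to show the shape of the fix
(simplified, not the actual BundleFetchConnection code):

  import java.io.ByteArrayOutputStream;
  import java.io.IOException;
  import java.io.InputStream;
  import java.nio.charset.StandardCharsets;

  class ReadLineSketch {
    static String readLine(InputStream in) throws IOException {
      ByteArrayOutputStream buf = new ByteArrayOutputStream();
      int b;
      while ((b = in.read()) != '\n') {
        if (b < 0) {
          break; // EOF: stop instead of looping forever
        }
        buf.write(b);
      }
      return new String(buf.toByteArray(), StandardCharsets.UTF_8);
    }
  }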
Bug: 543390
Change-Id: I4cb3428560277888af114b928950d620bb6564f9
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
The import is only needed because of a reference to it in the Javadoc,
and can be avoided by explicitly specifying the package instead, which
is how it's referenced in other cases (Constants, FileHeader).
Change-Id: I0c6254a9adf1f52fb8f2c04a858b11696ad264f5
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
This also includes a change to generating the jgit CLI jar. Shading is
no longer possible because it breaks the signature of BouncyCastle.
Instead, the Spring Boot Loader Maven plug-in is now used to generate an
executable jar.
Bug: 382212
Change-Id: I35ee3d4b06d9d479475ab2e51b29bed49661bbdc
Also-by: Gunnar Wagenknecht <gunnar@wagenknecht.org>
Signed-off-by: Gunnar Wagenknecht <gunnar@wagenknecht.org>
Signed-off-by: Medha Bhargav Prabhala <mprabhala@salesforce.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This change introduces the concept of a GpgSigner which will sign
commits. The GpgSigner will be of a specific implementation (e.g.,
BouncyCastle or an OpenPGP executable). The actual implementation is not
part of this change.
Bug: 382212
Change-Id: Iea5da1e885c039e06bc8d679d46b124cbe504c8e
Also-by: Medha Bhargav Prabhala <mprabhala@salesforce.com>
Signed-off-by: Medha Bhargav Prabhala <mprabhala@salesforce.com>
Signed-off-by: Gunnar Wagenknecht <gunnar@wagenknecht.org>
As explained in 'The "Double-Checked Locking is Broken"
Declaration'[*], Java's memory model does not support double-checked
locking:
  class LazyReadableChannel {
    private ReadableChannel rc = null;

    public ReadableChannel get() {
      if (rc == null) {
        synchronized (this) {
          if (rc == null) {
            rc = new ReadableChannel();
          }
        }
      }
      return rc;
    }
  }
With JDK 5 and newer, there is a formal memory model that ensures this
works if "rc" is volatile, but it is still not thread-safe without
that.
Fortunately, this ReadableChannelSupplier is never passed between
threads, so it does not need to be thread-safe. Simplify by removing
the synchronization.
[*] https://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html
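The simplified form is then just a plain lazy getter (mirroring the
illustrative types of the example above):

  class LazyReadableChannel {
    private ReadableChannel rc;

    // Not thread-safe, and does not need to be: the supplier is never
    // passed between threads.
    public ReadableChannel get() {
      if (rc == null) {
        rc = new ReadableChannel();
      }
      return rc;
    }
  }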
Change-Id: I0698ee6618d734fc129dd4f63fc047c1c17c94a9
Signed-off-by: Jonathan Nieder <jrn@google.com>
This allows scanning through refs once instead of once per ref, which
should make the lookup less expensive for some RefDatabase
implementations.
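As an illustration of the general idea only (not necessarily the API
touched by this change), a single prefix scan replaces per-name
lookups:

  import java.io.IOException;
  import java.util.List;
  import org.eclipse.jgit.lib.Constants;
  import org.eclipse.jgit.lib.Ref;
  import org.eclipse.jgit.lib.RefDatabase;

  class PrefixScanSketch {
    static List<Ref> allTags(RefDatabase refDb) throws IOException {
      // One scan of refs/tags/ instead of one exactRef() call per tag.
      return refDb.getRefsByPrefix(Constants.R_TAGS);
    }
  }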
Change-Id: I1434f834186cc9a6b4e52659e692b1000c926995
Signed-off-by: Jonathan Nieder <jrn@google.com>
To avoid opening the readable channel in case of DfsBlockCache
hits. Also clean up typos around DfsBlockCache.
Change-Id: I615e349cb4838387c1e6743cdc384d1b81b54369
Signed-off-by: Minh Thai <mthai@google.com>
Opening a readable channel can be expensive and the number of channels
can be limited in DFS. Ensure that the caller of
BlockBasedFile.readOneBlock() is responsible for opening and closing
the file, and that the ReadableChannel is reused within the request. As a side
effect, this makes the code easier to read, with better use of
try-with-resources.
The downside is that this means a readable channel is always opened, even
when the entire pack is already available for copying from cache. This
should be an acceptable cost: it's rare enough not to overload the server
and from a client latency perspective, the latency cost is in the noise
relative to the data transfer cost involved in a clone. If this turns out
to be a problem in practice, we can reintroduce that optimization in a
followup change.
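A generic illustration of the ownership pattern with try-with-resources
(plain NIO here; the DFS types differ):

  import java.io.IOException;
  import java.nio.ByteBuffer;
  import java.nio.channels.FileChannel;
  import java.nio.file.Path;
  import java.nio.file.StandardOpenOption;

  class CallerOwnedChannelSketch {
    static void readAllBlocks(Path pack, int blockSize, int blockCount)
        throws IOException {
      // The caller opens the channel once, reuses it for every block in
      // the request, and closes it when the whole request is done.
      try (FileChannel ch = FileChannel.open(pack, StandardOpenOption.READ)) {
        for (int i = 0; i < blockCount; i++) {
          ByteBuffer block = ByteBuffer.allocate(blockSize);
          ch.read(block, (long) i * blockSize);
        }
      }
    }
  }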
Change-Id: I340428ee4bacd2dce019d5616ef12339a0c85f0b
Signed-off-by: Minh Thai <mthai@google.com>
Signed-off-by: Jonathan Nieder <jrn@google.com>
We see the same index being loaded by multiple threads. Each is
hundreds of MB and takes several seconds to load, causing the server
to run out of memory. This change introduces a lock to avoid this
duplicate work. It uses a new set of locks similar in implementation
to the loadLocks for getOrLoad of blocks. The locks are kept separate
to prevent long-running index loading from blocking out fast block
loading. The cache instance can be configured with a consumer to
monitor the wait time of the new locks.
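A generic sketch of the duplicate-suppression pattern (not the actual
DfsBlockCache code; names and the lock count are illustrative):

  import java.util.concurrent.ConcurrentHashMap;
  import java.util.function.Function;

  class IndexCacheSketch<K, V> {
    private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();
    // A small fixed set of locks, separate from the block load locks, so
    // a slow index load never blocks unrelated block loading.
    private final Object[] indexLoadLocks = new Object[32];

    IndexCacheSketch() {
      for (int i = 0; i < indexLoadLocks.length; i++) {
        indexLoadLocks[i] = new Object();
      }
    }

    V getOrLoad(K key, Function<K, V> load) {
      V v = cache.get(key);
      if (v != null) {
        return v; // fast path: already loaded
      }
      Object lock = indexLoadLocks[
          (key.hashCode() & 0x7fffffff) % indexLoadLocks.length];
      synchronized (lock) {
        // Re-check under the lock so only the first thread loads.
        return cache.computeIfAbsent(key, load);
      }
    }
  }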
Change-Id: I44962fe84093456962d5981545e3f7851ecb6e43
Signed-off-by: Minh Thai <mthai@google.com>
CheckoutCommand had a setForce() method, but it didn't correspond to
native git's 'git checkout -f' option. Deprecate the old setForce()
method, move its implementation to a new method setForceRefUpdate(),
and use that to implement the -B option in the CLI class Checkout.
Add a setForced() method and use it to fix the associated '-f' option
of the CLI Checkout class to behave like native git's 'git checkout -f',
which overwrites dirty worktree files during checkout.
This still does not fully match native git's behavior: dirty index
entries are not yet updated as well.
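Illustrative usage after this change (assuming an existing Git
instance named git):

  import org.eclipse.jgit.api.Git;
  import org.eclipse.jgit.api.errors.GitAPIException;

  class CheckoutForceExample {
    static void examples(Git git) throws GitAPIException {
      // Like "git checkout -f master": overwrite dirty worktree files.
      git.checkout().setName("master").setForced(true).call();

      // The old setForce() semantics, now under a clearer name:
      git.checkout().setName("topic").setCreateBranch(true)
          .setForceRefUpdate(true).call();
    }
  }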
Bug: 530771
Change-Id: I776b78eb623b6ea0aca42f681788f2e4b1667f15
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
To avoid breaking ABI, take the opportunity to give these setters
(hopefully sometimes better) names and deprecate their old names.
Change-Id: Ib45011678c3d941f8ecc1a1e0fdf4c09cdc337e3
Signed-off-by: Mario Molina <mmolimar@gmail.com>
Signed-off-by: Jonathan Nieder <jrn@google.com>
Swallowing intermittent errors and trying to recover from them
makes JGit's behavior hard to predict and difficult to debug.
Propagate the errors instead. This doesn't violate JGit's usual
backward compatibility promise for clients because in these
contexts an IOException indicates either repository corruption or
a true I/O error. Let's consider these cases one at a time.
In the case of repository corruption, falling back e.g. to an empty
set of refs or a missing ref will not serve a caller well. The
fallback does not indicate the nature of the corruption, so they are
not in a good place to recover from the error. This is analogous to
Git, which attempts to provide sufficient support to recover from
corruption (by ensuring commands like "git branch -D" cope with
corruption) but little else.
In the case of an I/O error, the best we can do is to propagate the
error so that the user sees a dialog and has an opportunity to try
again. As in the corruption case, the fallback behavior does not
provide enough information for a caller to rely on the current error
handling, and callers such as EGit already need to be able to handle
runtime exceptions.
To be conservative, keep the existing behavior for the deprecated
Repository#peel method. In this example, the fallback behavior is to
return an unpeeled ref, which is distinguishable from the ref not
existing and should thus at least be possible to debug.
Change-Id: I0eb58eb8c77519df7f50d21d1742016b978e67a3
Signed-off-by: Jonathan Nieder <jrn@google.com>
Its implementation contains
  } catch (IOException e) {
    // Legacy API, assume error means "no"
    return false;
  }
Better to use ObjectDatabase#has, which throws IOException to report
errors.
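For comparison, the error-propagating alternative (a minimal sketch):

  import java.io.IOException;
  import org.eclipse.jgit.lib.ObjectId;
  import org.eclipse.jgit.lib.Repository;

  class HasObjectSketch {
    static boolean exists(Repository repo, ObjectId id) throws IOException {
      // Unlike the legacy method, this propagates I/O errors to the caller.
      return repo.getObjectDatabase().has(id);
    }
  }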
Change-Id: I7de02f7ceb8f57b2a8ebdb16d2aa4376775ff933
Signed-off-by: Jonathan Nieder <jrn@google.com>
Meanwhile, that constant is just a redirection to a Java standard
constant. It is no longer referenced in JGit code (and EGit is just
removing it). Clients can use the redirection target directly.
Change-Id: I058d013f61da8d7b771c499d8743aafb8faa5ea8
Signed-off-by: Michael Keppler <Michael.Keppler@gmx.de>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
A checkout done directly after cloning (the "initial checkout") has
different semantics than a default checkout. That is defined in the
documentation of "git read-tree" [1]. JGit detected that it is doing
an initial checkout differently from native git: JGit used to check
that the index is empty, but native git requires that the index file
does not exist [2]. Teach JGit to behave like native git.
[1] https://github.com/git/git/blob/master/Documentation/git-read-tree.txt#L187
[2] https://marc.info/?t=154150811200001&r=1&w=2
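A minimal sketch of the native-git-style detection (illustrative
only):

  import java.io.File;
  import org.eclipse.jgit.lib.Repository;

  class InitialCheckoutSketch {
    static boolean isInitialCheckout(Repository repo) {
      // Native git: initial checkout means the index file does not exist
      // yet, not merely that the index is empty.
      File indexFile = repo.getIndexFile();
      return !indexFile.exists();
    }
  }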
Change-Id: I1dd0f1ede7cd7ea60d28607916d0165269a9f628
and allow package org.eclipse.jgit.http.server to use package
org.eclipse.jgit.internal.transport.parser.
Change-Id: Ief330c3e75a735853d0a5a265a9ff56fb5128b99
Signed-off-by: Michael Keppler <Michael.Keppler@gmx.de>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Another step toward merging BaseReceivePack into ReceivePack.
Change-Id: If861e28ce512f556e574352fa7d4a0df0984693f
Signed-off-by: Jonathan Nieder <jrn@google.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Another step toward merging BaseReceivePack into ReceivePack.
Change-Id: I43cf2e36e2d5b0cd85bf23c81469909c14757b63
Signed-off-by: Jonathan Nieder <jrn@google.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Another step toward eliminating BaseReceivePack as a separate API.
Change-Id: If7b7d5c65a043607a2424211adb479fa33a9077b
Signed-off-by: Jonathan Nieder <jrn@google.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This is a first step toward eliminating the BaseReceivePack API.
Inspired by a larger change by Dan Wang <dwwang@google.com>.
Change-Id: I5c876a67d8db24bf808823d9ea44d991b1ce5277
Signed-off-by: Jonathan Nieder <jrn@google.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>