When reading from a packfile, make sure that it is valid
and has a non-null file descriptor.
Because of a race between a thread invalidating a packfile
and another trying to read it, the read() may result in an NPE
that cannot be recovered automatically.
Throwing a PackInvalidException would instead cause the packlist
to be refreshed and the read to eventually succeed.
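A minimal sketch of the intended check, with illustrative names (the
class, flag and descriptor field are not the actual JGit members) and
a plain IOException standing in for PackInvalidException:

import java.io.IOException;
import java.io.RandomAccessFile;

class PackReadSketch {
    private volatile boolean invalid;
    private volatile RandomAccessFile fd; // may be nulled by a concurrent invalidation

    int read(long pos, byte[] dst) throws IOException {
        // Validate before touching the descriptor: an invalidated pack then
        // surfaces as a pack-invalid error (triggering a packlist refresh)
        // instead of an unrecoverable NPE.
        RandomAccessFile localFd = fd;
        if (invalid || localFd == null) {
            throw new IOException("pack is invalid"); // stand-in for PackInvalidException
        }
        localFd.seek(pos);
        return localFd.read(dst);
    }
}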
Bug: 544199
Change-Id: I27788b3db759d93ec3212de35c0094ecaafc2434
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
When a packfile is invalid, throw an exception explicitly
outside any catch scope, so that it is not accidentally caught
by the generic catch-all clause, which would mark the packfile
as valid again.
Flagging an invalid packfile as valid again would have
dangerous consequences, such as corruption of the in-memory
packlist.
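A sketch of the pattern with hypothetical names; the point is only
that the invalidity check happens before the try block, so the
catch-all can never swallow it:

import java.io.IOException;

class PackOpenSketch {
    private volatile boolean invalid;

    void beginRead() throws IOException {
        // Thrown outside any catch scope: the recovery code below can never
        // intercept this and flip the pack back to valid.
        if (invalid) {
            throw new IOException("pack is invalid"); // stand-in for PackInvalidException
        }
        try {
            doOpen();
        } catch (IOException e) {
            // Generic catch-all recovery; only reached for genuinely
            // retryable I/O errors, never for an already-invalidated pack.
            invalid = true;
            throw e;
        }
    }

    private void doOpen() throws IOException {
        // ... open the pack file and verify its header and checksum ...
    }
}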
Bug: 544199
Change-Id: If7a3188a68d7985776b509d636d5ddf432bec798
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Do not redundantly call File.lastModified() to extract the timestamp
of the PackFile; instead, consistently use the FileSnapshot, which
reads all file attributes in a single bulk call.
Change-Id: I932675ae4fe56dcd3833dac249816f097303bb09
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Due to finite filesystem timestamp resolution the last modified
timestamp of files cannot detect file changes which happened in the
immediate past (less than one filesystem timer tick ago).
Also read and consider the file size, so that a differing size helps
detect file changes more accurately without reading the file content.
Use a bulk read to avoid multiple stat calls when retrieving file
attributes.
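A simplified illustration (not the actual FileSnapshot implementation)
of capturing size and timestamp in one bulk stat call and comparing
both:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;
import java.nio.file.attribute.FileTime;

class SnapshotSketch {
    private final FileTime lastModified;
    private final long size;

    SnapshotSketch(Path path) throws IOException {
        // One stat call returns all attributes at once.
        BasicFileAttributes attrs = Files.readAttributes(path, BasicFileAttributes.class);
        lastModified = attrs.lastModifiedTime();
        size = attrs.size();
    }

    boolean isModified(Path path) throws IOException {
        BasicFileAttributes attrs = Files.readAttributes(path, BasicFileAttributes.class);
        // A changed size is detected even when the modification happened
        // within the same filesystem timer tick as the snapshot.
        return size != attrs.size()
                || !lastModified.equals(attrs.lastModifiedTime());
    }
}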
Change-Id: I974288fff78ac78c52245d9218b5639603f67a46
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
The pack reload mechanism from the filesystem works only by name
and does not check the actual last modified date of the packfile.
This led to concurrency issues where multiple threads were loading
packfiles and removing them from each other's lists whenever one of
them failed the checksum.
Rely on FileSnapshot rather than directly checking the lastModified
timestamp so that more checks can be performed.
Bug: 544199
Change-Id: I173328f29d9914007fd5eae3b4c07296ab292390
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
In case of concurrent pack file access, threads may wait on the idx()
function even for already open files. This happens especially with a
slow file system.
Performance numbers are listed in the bug report.
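One way to avoid the contention (a sketch of the general idiom, not
necessarily the exact change made here) is a volatile fast path that
skips the lock once the index has been loaded:

import java.io.IOException;

class IdxSketch {
    private volatile Object loadedIdx; // hypothetical field; the real type is the pack index

    Object idx() throws IOException {
        Object idx = loadedIdx;
        if (idx != null) {
            return idx; // fast path: already open, no lock contention
        }
        synchronized (this) {
            idx = loadedIdx;
            if (idx == null) {
                idx = readIndex(); // slow path, executed only once
                loadedIdx = idx;
            }
            return idx;
        }
    }

    private Object readIndex() throws IOException {
        // ... read and parse the pack index from disk ...
        return new Object();
    }
}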
Bug: 543739
Change-Id: Iff328d347fa65ae07ecce3267d44184161248978
Signed-off-by: Juergen Denner <j.denner@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
That method can easily be invoked with a null argument (e.g.
isId(repo.getFullBranch())), therefore it should handle null
arguments.
The change was suggested in https://git.eclipse.org/r/#/c/137918/,
which tried to fix the same issue in EGit only.
Bug: 544982
Change-Id: I32d1df6e9b2946ab324eda7008721159019316b3
Signed-off-by: Michael Keppler <Michael.Keppler@gmx.de>
The commit body contains the commit message, which is not needed for
reachability checks.
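For illustration, a walk used only to answer reachability questions
can skip retaining bodies via the existing RevWalk API (helper name
made up):

import org.eclipse.jgit.lib.Repository;
import org.eclipse.jgit.revwalk.RevWalk;

class ReachabilityWalkSketch {
    static RevWalk newWalk(Repository repo) {
        RevWalk rw = new RevWalk(repo);
        // The commit message is never inspected for reachability,
        // so don't keep the body in memory.
        rw.setRetainBody(false);
        return rw;
    }
}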
Change-Id: Ie209c3b3f022579942f05b8b5d0625ce26400a5d
Signed-off-by: Jonathan Nieder <jrn@google.com>
The visibility change was merged after the 5.3 release, so the
@since line is wrong.
Fixing it in a new commit looks like the easiest way forward.
Change-Id: I6f72a9e684e99a4440cda1eb532c22b4b7bbdea5
Signed-off-by: Ivan Frade <ifrade@google.com>
The commit body contains the commit message, which is not needed for
reachability checks and takes up memory unnecessarily.
Change-Id: I0c7f6da249bf9c4fda9dc9e62e809322c68effce
Signed-off-by: Minh Thai <mthai@google.com>
This can help track down poor long tail performance that isn't accounted
for in the readIdxMicros or readBlockMicros metrics.
Change-Id: I701b9cfcc124f4ddb860d1766a11ea3557e604ce
Signed-off-by: Terry Parker <tparker@google.com>
When the packfile checksum does not match the expected one
report the correct checksum error instead of reporting that
the number of objects is incorrect.
Change-Id: I040f36dacc4152ae05453e7acbf8dfccceb46e0d
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
(cherry picked from commit 436c99ce59)
Externalize the message and log the pack file with absolute path.
Change-Id: I019052dfae8fd96ab67da08b3287d699287004cb
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
(cherry picked from commit 9665d86ba1)
Display extra logging, including the exception with the associated
stacktrace, whenever a packfile can't be read and is thus removed
from the packlist.
Change-Id: I97a4e31dc427bfcc0baae438dcbe2dcd4704b824
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
(cherry picked from commit 962babc4b2)
When the packfile checksum does not match the expected one
report the correct checksum error instead of reporting that
the number of objects is incorrect.
Change-Id: I040f36dacc4152ae05453e7acbf8dfccceb46e0d
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Externalize the message and log the pack file with absolute path.
Change-Id: I019052dfae8fd96ab67da08b3287d699287004cb
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Previously, we called destroy() to delete the temp file on failure only
when catching an IOException, not a RuntimeException. Use a slightly
different construction with a finally block to ensure it's always
deleted on error (assuming the JVM is still healthy enough).
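The construction, roughly (names illustrative): a success flag plus a
finally block, so the temp file is also removed on runtime exceptions:

import java.io.File;
import java.io.IOException;

class TempFileCleanupSketch {
    void writeAndCommit(File tmp) throws IOException {
        boolean success = false;
        try {
            writeContent(tmp);
            success = true;
        } finally {
            if (!success) {
                // Reached for IOException and RuntimeException alike,
                // so the temp file never survives a failed write.
                tmp.delete();
            }
        }
    }

    private void writeContent(File tmp) throws IOException {
        // ... write the data and move the file into place ...
    }
}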
Change-Id: Ie201f3cfc81099ee1cafed066632da76223cef1f
There are valid cases where a hook, invoked by ProtocolV2Hook
and probably implemented in a different package, is interested
in knowing the wanted refs in the request.
Increase the visibility of the wanted-refs method to public.
Change-Id: I5da085ac7af4c396c1cb85e630f40a57fc70b33e
Signed-off-by: Ivan Frade <ifrade@google.com>
Similar to UploadPack.getDepth(), which exposes the shallow clone
depth, expose the user-specified filter blob limit for partial clones.
Change-Id: I04bde06862a1cf8a9862d950c15023c49d16a2a6
Signed-off-by: Terry Parker <tparker@google.com>
Hard-coding ~/.gnupg for the GPG directory doesn't work on Windows,
where GnuPG uses %APPDATA%\gnupg by default. Make the determination
of the directory platform-dependent.
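A sketch of the platform check (illustrative only; the actual
resolution in JGit may differ):

import java.io.File;

class GpgDirectorySketch {
    static File defaultGpgDirectory() {
        if (System.getProperty("os.name", "").startsWith("Windows")) {
            String appData = System.getenv("APPDATA");
            if (appData != null) {
                // GnuPG on Windows defaults to %APPDATA%\gnupg.
                return new File(appData, "gnupg");
            }
        }
        // Elsewhere GnuPG defaults to ~/.gnupg.
        return new File(System.getProperty("user.home"), ".gnupg");
    }
}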
Bug: 544797
Change-Id: Id4bfd39a981ef7c5b39fbde46fce9a7524418709
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Instead of a new "unexpectedNlinkValue" message use the already
existing "failedAtomicFileCreation". Remove a stray double quote
from the latter.
Change-Id: I1ba5e9ea48d3f7615354b2ace2575883070b3206
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
With native git, .git/rebase-merge/rewritten actually exists in two
different cases:
* as a file in git rebase --merge recording OIDs for copying notes
* as a directory in git rebase --preserve-merges
Add a comment, and check for isDirectory() instead of exists().
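The distinction, sketched (helper name hypothetical):

import java.io.File;

class RewrittenCheckSketch {
    static boolean isPreserveMergesRebase(File rebaseMergeDir) {
        // "rewritten" is a plain file under rebase --merge (note copying)
        // but a directory under rebase --preserve-merges, so exists()
        // alone cannot tell the two apart.
        return new File(rebaseMergeDir, "rewritten").isDirectory();
    }
}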
Bug: 511487
Change-Id: I6a3317b4234d4f41c41b3004cdc7ea0abf2c6223
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
ONTO_NAME must be "onto_name", not "onto-name".
For native git, --preserve-merges is an interactive mode. Create the
INTERACTIVE marker file; otherwise a native git rebase --continue
older than git 2.19.0 will fall back into rebase --merge mode, since
git started looking for the REWRITTEN directory to make the
distinction only in that version.[1]
This allows a JGit interactive rebase to be continued via native git
rebase --continue.
[1] https://github.com/git/git/commit/6d98d0c0
Bug: 511487
Change-Id: I13850e0fd96ac77d03fbb581c8790d76648dbbc6
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Aborting a rebase used ORIG_HEAD to reset. Strictly speaking this is
not correct, since other commands run during the rebase (for instance,
when the rebase stopped on a conflict) might have changed ORIG_HEAD.
Prefer the OID recorded in the orig-head file, falling back to the
older "head" file if "orig-head" doesn't exist, and use ORIG_HEAD only
if neither exists.
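The fallback order, as a sketch (file reading elided, names
hypothetical):

import java.io.File;
import java.io.IOException;

class AbortResetTargetSketch {
    static String resolveResetTarget(File rebaseStateDir, String origHeadRef)
            throws IOException {
        File origHead = new File(rebaseStateDir, "orig-head");
        if (origHead.isFile()) {
            return readOid(origHead); // preferred: recorded when the rebase started
        }
        File head = new File(rebaseStateDir, "head");
        if (head.isFile()) {
            return readOid(head); // older state directories used "head"
        }
        return origHeadRef; // last resort: ORIG_HEAD, which may have moved since
    }

    private static String readOid(File f) throws IOException {
        // ... read and trim the object id stored in the state file ...
        return "";
    }
}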
Bug: 511487
Change-Id: Ifa99221bb33e4e4754377f9b8f46e76c8936e072
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
With text=auto or core.autocrlf=true, git does not normalize upon
check-in if the file in the index already contains CR/LFs. The
documentation says: "When text is set to "auto", the path is
marked for automatic end-of-line conversion. If Git decides that
the content is text, its line endings are converted to LF on
checkin. When the file has been committed with CRLF, no conversion
is done."[1]
Implement the last bit as in canonical git: check the blob in the
index for CR/LFs. For very large files, we check only the first 8000
bytes, like RawText.isBinary() and AutoLFInputStream do.
In Auto(CR)LFInputStream, ensure that the buffer is filled as much as
possible for the isBinary() check.
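The index-blob check amounts to something like the following (a
simplified sketch, not the actual JGit code):

class CrLfCheckSketch {
    // Scan at most the first 8000 bytes of the blob for a CR/LF pair,
    // mirroring the limit used by the binary-content checks.
    static boolean hasCrLf(byte[] blob, int length) {
        int limit = Math.min(length, 8000);
        for (int i = 1; i < limit; i++) {
            if (blob[i] == '\n' && blob[i - 1] == '\r') {
                return true;
            }
        }
        return false;
    }
}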
Regarding these content checks, there are a number of inconsistencies:
* Canonical git considers files containing lone CRs as binary.
* RawText checks the first 8000 bytes.
* Auto(CR)LFInputStream checks the first 8096 (not 8192!) bytes.
None of these are changed with this commit. It appears that canonical
git will check the whole blob, not just the first 8k bytes. Also
note: the check for CR/LF text won't work with LFS (neither in JGit
nor in git) since the blob data is not run through the smudge filter.
Cf. [2].
Two tests in AddCommandTest actually tested that normalization was
done even if the file was already committed with CR/LF. These tests
had to be adapted. I find the git documentation unclear about the
case where core.autocrlf=input, but from [3] it looks as if this
non-normalization also applies in this case.
Add new tests in CommitCommandTest testing this for the case where
the index entry is for a merge conflict. In this case, canonical git
uses the "ours" version.[4] Do the same.
[1] https://git-scm.com/docs/gitattributes
[2] https://github.com/git/git/blob/3434569fc/convert.c#L225
[3] https://github.com/git/git/blob/3434569fc/convert.c#L529
[4] https://github.com/git/git/blob/f2b6aa98b/read-cache.c#L3281
Bug: 470643
Change-Id: Ie7310539fbe6c737d78b1dcc29e34735d4616b88
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Since 2011-02-10 (i.e., git 1.7.6)[1] native git uses "orig-head" for
REBASE_HEAD. JGit was still using "head". Currently native git has a
legacy fall-back for reading this, but for how long? Let's write to
both. Note that JGit never reads this file.
[1] https://github.com/git/git/commit/84df4560
Bug: 511487
Change-Id: Id3742bf9bbc0001d850e801b26cc8880e646abfc
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
The non-externalized warning message says there is a "possible SHA-1
collision" but then the Sha1CollisionException is always thrown.
Replace the message with the existing externalised string that does
not say "possible".
Change-Id: I9773ec76b416c356e234a658fb119f98d33eac83
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
DfsBlockCache.Ref might get cleared out if the JVM is running out of
memory. As a result, the index is not persisted for the request and
will be reloaded unnecessarily.
Change-Id: I3b57ad5e6673f77f2dc66177a649ac412a97fe20
Signed-off-by: Minh Thai <mthai@google.com>
Deprecate the method in favor of setEncoding(Charset).
Update the only caller in the code base that was still using
the deprecated variant.
Change-Id: I6357f2d0c727007013c72e9d5b7c72a3f5f3f2b1
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Display extra logging, including the exception with the associated
stacktrace, whenever a packfile can't be read and is thus removed
from the packlist.
Change-Id: I97a4e31dc427bfcc0baae438dcbe2dcd4704b824
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
A bug caused the pack to be 12 bytes short when the cache was cold.
Also added a test for the copyPackAsIs method.
Change-Id: Idf8fb0e50d1215245d4b032e2e00df4b218c115f
Signed-off-by: Minh Thai <mthai@google.com>
Android, for instance, forbids hard linking via an SELinux
policy. If we can't hard link, the NFS work-around for
atomic file creation cannot work at all. In this case,
fall back to not using the hard-linking mechanism.
Android throws an AccessDeniedException, so we catch that.
The javadoc on Files.createLink() indicates that another
possibility might be a SecurityException, so catch that,
too.
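Sketch of the fallback (helper name hypothetical):

import java.io.IOException;
import java.nio.file.AccessDeniedException;
import java.nio.file.Files;
import java.nio.file.Path;

class HardLinkFallbackSketch {
    static boolean tryHardLink(Path link, Path existing) throws IOException {
        try {
            Files.createLink(link, existing);
            return true;
        } catch (AccessDeniedException | SecurityException e) {
            // Hard linking forbidden (e.g. by an SELinux policy): tell the
            // caller to skip the hard-link based NFS work-around entirely.
            return false;
        }
    }
}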
Bug: 543956
Change-Id: I551b7a45f7b2fbbd8cf94f0b7233dbd8a200520e
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
This method tried to iterate over spurious files which may exist in
the .git/refs folder; e.g. on Mac a .DS_Store file may have been
created there by inspecting the folder with the Finder application.
This led to a NotDirectoryException when deleteEmptyRefsFolders tried
to create an iterator for such a file entry. Skip files contained in
the refs folder so that the method only iterates over contained
folders, not files.
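The fix boils down to filtering the iteration (sketch; the helper
name is made up):

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

class RefsFolderSketch {
    static void visitRefSubfolders(Path refsDir) throws IOException {
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(refsDir)) {
            for (Path entry : entries) {
                if (!Files.isDirectory(entry)) {
                    continue; // skip stray files such as .DS_Store
                }
                // ... iterate the subfolder and delete it if empty ...
            }
        }
    }
}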
Change-Id: I5f31e733072a35db1e93908a9c69a8891ae5c206
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
A single-branch clone should be able to clone a single tag. Enhance
CloneCommand so that setBranchesToClone() also accepts full refs of
tags. Make sure we also include fetch refspecs for tags in the fetch
command. This mimics the behavior of native git's single-branch clone:
git clone --branch <tag> --single-branch <URI>
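An illustrative usage (the tag name v1.0 is made up), roughly
equivalent to the native command above:

import java.io.File;
import java.util.Collections;

import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.api.errors.GitAPIException;

class SingleTagCloneSketch {
    static Git cloneSingleTag(String uri, File dir) throws GitAPIException {
        return Git.cloneRepository()
                .setURI(uri)
                .setDirectory(dir)
                // The full tag ref is now accepted by setBranchesToClone().
                .setBranchesToClone(Collections.singletonList("refs/tags/v1.0"))
                .setBranch("refs/tags/v1.0")
                .call();
    }
}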
Bug: 542611
Change-Id: I285cf043751d9b0ba71258ee8214c0e5d1191428
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>