FS_POSIX.createNewFile(File) failed to properly implement atomic file
creation on NFS using the algorithm [1]:
- the name of the hard link must be unique, to prevent two processes
using different NFS clients from trying to create the same link.
Otherwise nlink would be useless for detecting a race.
- the hard link must be retained for the lifetime of the file since we
don't know when the state of the involved NFS clients will be
synchronized. This depends on NFS configuration options.
To fix these issues we need to change the signature of createNewFile,
which would break the API. Hence deprecate the old method
FS.createNewFile(File) and add a new method createNewFileAtomic(File).
The new method returns a LockToken which needs to be retained by the
caller (LockFile) until all involved NFS clients have synchronized
their state. Since we don't know when the NFS caches are synchronized,
we need to retain the token until the corresponding file is no longer
needed.
The LockToken must be closed after the LockFile using it has been
committed or unlocked. On POSIX, if core.supportsAtomicCreateNewFile =
false, this deletes the hard link which guarded the atomic creation of
the file. When acquiring the lock fails, ensure that the hard link is
removed.
[1] https://www.time-travellers.org/shane/papers/NFS_considered_harmful.html
also see file creation flag O_EXCL in
http://man7.org/linux/man-pages/man2/open.2.html
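A minimal sketch of the link-and-count scheme from [1] (names and error
handling here are illustrative, not JGit's actual implementation):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

class AtomicCreateSketch {
  // Returns the hard link, which the caller must retain (and later
  // delete) until the lock file is no longer needed.
  static Path createAtomic(Path lockFile) throws IOException {
    Files.createFile(lockFile); // may succeed spuriously on NFS
    // Unique link name: two NFS clients never race on the link itself.
    Path link = lockFile.resolveSibling(
        lockFile.getFileName() + "." + UUID.randomUUID());
    Files.createLink(link, lockFile);
    // If creation was truly atomic the file now has exactly two names.
    int nlink = (Integer) Files.getAttribute(lockFile, "unix:nlink");
    if (nlink != 2) {
      Files.deleteIfExists(link);
      throw new IOException("lock already held: " + lockFile);
    }
    return link;
  }
}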
Change-Id: I84fcb16143a5f877e9b08c6ee0ff8fa4ea68a90d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When core.supportsAtomicCreateNewFile was set to false and the
repository was located on a filesystem which doesn't support the file
attribute "unix:nlink", FS_POSIX#createNewFile could report an error
even if everything was ok. Modify FS_POSIX#createNewFile to silently
ignore this situation. An example of such a filesystem is sshfs, where
reading "unix:nlink" always returns 1 (instead of throwing an
exception).
Bug: 537969
Change-Id: I6deda7672fa7945efa8706ea1cd652272604ff19
Also-by: Thomas Wolf <thomas.wolf@paranor.ch>
I88304d34c and Ia555bce00 modified the way errors are handled when
trying to delete non-empty reference folders. Before, this error was
silently ignored as it was considered an expected outcome. Now, every
failed folder deletion is logged, which can be noisy.
Ignore the DirectoryNotEmptyException, but log any other error that
prevents deletion of an eligible folder.
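Sketched (not the exact JGit code; the logger is illustrative):

import java.io.IOException;
import java.nio.file.DirectoryNotEmptyException;
import java.nio.file.Files;
import java.nio.file.Path;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class RefFolderCleanup {
  private static final Logger LOG =
      LoggerFactory.getLogger(RefFolderCleanup.class);

  static void deleteIfEmpty(Path refFolder) {
    try {
      Files.delete(refFolder);
    } catch (DirectoryNotEmptyException e) {
      // Expected: a concurrent operation added a ref here; ignore.
    } catch (IOException e) {
      // Any other failure to delete an eligible folder is logged.
      LOG.warn("Failed to delete {}", refFolder, e);
    }
  }
}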
Signed-off-by: Hector Oswaldo Caballero <hector.caballero@ericsson.com>
Change-Id: I194512f67885231d62c03976ae683e5cc450ec7c
This exception is thrown in GC.deleteTempPacksIdx() if the repository
has no packs.
Bug: 538286
Change-Id: Ieb482be751226baf0843068a0f847e0cdc6e0cb6
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
The following commits on stable-4.5 and stable-4.9 introduced some
minor API additions in service releases.
f7ceeaa2 FileRepository: Add pack-based inserter implementation
085d1f95 Make PackInserter public
10e65cb4 Fix LockFile semantics when running on NFS
Change-Id: I4afed7e0395cf93d828e671080e3ec9ddf20987d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This is actually a fairly common occurrence; deleting the parent
directories can work only if the file deleted was the last one
in the directory.
Bug: 537872
Change-Id: I86d1d45e1e2631332025ff24af8dfd46c9725711
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
If packed refs are used, duplicate updates result in an exception
because JGit tries to lock the same lock file twice. With non-atomic
ref updates, this used to work, since the same ref would simply be
locked and updated twice in succession.
Let's be more lenient in this case and remove duplicates before
trying to do the ref updates. Silently skip duplicate updates
for the same ref, if they both would update the ref to the same
object ID. (If they don't, behavior is undefined anyway, and we
still throw an exception.)
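A sketch of the dedup step (assumed names; JGit's batch ref update
code differs in detail):

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import org.eclipse.jgit.transport.ReceiveCommand;

class Dedup {
  static List<ReceiveCommand> dedupe(List<ReceiveCommand> commands) {
    Map<String, ReceiveCommand> byRef = new LinkedHashMap<>();
    for (ReceiveCommand cmd : commands) {
      ReceiveCommand prev = byRef.putIfAbsent(cmd.getRefName(), cmd);
      if (prev == null) {
        continue; // first update for this ref
      }
      if (!prev.getNewId().equals(cmd.getNewId())) {
        // Same ref, different targets: behavior is undefined.
        throw new IllegalStateException(
            "conflicting duplicate updates for " + cmd.getRefName());
      }
      // Identical duplicate: silently skip.
    }
    return new ArrayList<>(byRef.values());
  }
}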
Add a test that results in a duplicate ref update for a tag.
Bug: 529400
Change-Id: Ide97f20b219646ac24c22e28de0c194a29cb62a5
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Bug: 529314
Change-Id: I91eaeda8a988d4786908fba6de00478cfc47a2a2
Signed-off-by: Marc Strapetz <marc.strapetz@syntevo.com>
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Since I3870cadb4, the GC task was always delegated to an executor even
when the background option was set to false. This was an issue because
if more than one GC object was instantiated and executed in parallel,
only one GC actually ran, due to the single-thread executor.
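The intended dispatch, sketched with assumed names:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class GcDispatch {
  private static final ExecutorService EXECUTOR =
      Executors.newSingleThreadExecutor();

  static void gc(boolean background, Runnable gcTask) {
    if (!background) {
      gcTask.run(); // run inline in the caller's thread
    } else {
      EXECUTOR.execute(gcTask); // defer to the background executor
    }
  }
}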
Change-Id: I8c587d22d63c1601b7d75914692644a385cd86d6
Signed-off-by: Hugo Arès <hugo.ares@ericsson.com>
Without these registrations, JSch's up-front check of which algorithms
are available at all fails if the ssh config explicitly sets only these
algorithms.
Bug: 537790
Change-Id: Idb0431190a7f101913363ee95af6c8fcbda6c923
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Remove the empty directories under refs/<namespace>, including the
first-level partition of the changes, when they are completely empty.
Bug: 536777
Change-Id: I88304d34cc42435919c2d1480258684d993dfdca
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Get the full IOException describing why a directory cannot be removed
during GC.
Change-Id: Ia555bce009fa48087a73d677f1ce3b9c0b685b57
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
After packing references, the folders containing these references are
not deleted. In a busy repository, this causes operations to slow down
as traversing the references tree takes longer.
Delete empty reference folders after the loose references have been
packed.
To avoid deleting a folder that was just created by another concurrent
operation, only delete folders that were not modified in the last 30
seconds.
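The age check might look like this sketch (the 30-second threshold is
from this change; names are illustrative):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

class FolderAge {
  static final long RETENTION_MILLIS = 30_000L; // 30 seconds

  // Only folders untouched for the retention period may be deleted.
  static boolean isSafeToDelete(Path refFolder) throws IOException {
    FileTime lastModified = Files.getLastModifiedTime(refFolder);
    return System.currentTimeMillis() - lastModified.toMillis()
        > RETENTION_MILLIS;
  }
}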
Signed-off-by: Hector Oswaldo Caballero <hector.caballero@ericsson.com>
Change-Id: Ie79447d6121271cf5e25171be377ea396c7028e0
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Log a warning when an attempt to remove a directory fails. This helps
troubleshoot bugs like the GC leaving behind empty directories.
Change-Id: Idb94ce17f8be9668a970c7ecae31436bf434073c
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
This change fixes the issue [1]. Before this fix, a merge involving
the caching of consecutive yet similar filenames with Norwegian
characters [2] used to throw an IllegalStateException: Duplicate
stages not allowed. This was caused by inaccurate decoding of the
filenames, using string values that assumed the default encoding. In
the toString method of DirCacheEntry, used before through
getPathString, UTF-8 encoding is used, but the end result falls back
to the default encoding through Object's default toString usage. The
special characters in those two consecutive (particular) filenames [2]
were decoded to the very same single character, yielding consecutive,
but then identical, filenames. Hence the perceived duplicate 0-staging
of the file(s).
Replace getPathString usage with getRawPath for this specific case,
or use byte array representations of cached entries instead of
strings.
Adding a test for this change is not possible, as there is no known
way to change the default encoding for filenames such as [2].
JGitTestUtil does write file contents in UTF-8, but that encoding does
not apply to the actual file name. Hence there is no way to create
files whose names are properly made of special characters such as
those in [2], and the test necessary for this case requires such
Norwegian (or similar) filenames. Changing the default locale
programmatically in a test has no effect either, and changing the LANG
value passed to the JVM is only possible when starting it.
[1] https://bugs.chromium.org/p/gerrit/issues/detail?id=9153
[2] <=>
(...)
"a/b/SíÒr-Norge.map",
"a/b/Sør-Norge.map",
(...)
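Sketched, the byte-level comparison the fix relies on (illustrative;
getRawPath is the accessor mentioned above):

import java.util.Arrays;
import org.eclipse.jgit.dircache.DirCacheEntry;

class PathCompare {
  // Compare raw path bytes; decoding to String via the default charset
  // can collapse distinct non-ASCII names into the same value.
  static boolean samePath(DirCacheEntry a, DirCacheEntry b) {
    return Arrays.equals(a.getRawPath(), b.getRawPath());
  }
}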
Change-Id: Ib9f2f5297932337c9817064cc09d9f774dd168f4
Signed-off-by: Marco Miller <marco.miller@ericsson.com>
If I run
git config --global protocol.version 2
mkdir repo
cd repo
git init --bare
git remote add origin https://go.googlesource.com/proposal
git fetch --depth=1
git fetch --unshallow
then I expect to have a full history, just as though I had fetched
without --depth in the first place. Instead, it reports success
but does not fetch enough objects:
$ git fsck
notice: HEAD points to an unborn branch (master)
Checking object directories: 100% (256/256), done.
Checking objects: 100% (468/468), done.
broken link from commit 2c6bc83f234085c8eadb7ea33405ce6223c44d1b
to commit 878975cf2b600675b4c905e5d9591bd24541ae9e
missing commit 878975cf2b600675b4c905e5d9591bd24541ae9e
dangling commit 314be00dae78dd526851f5635e6349014e2ad0c2
The false success indicates problems in the client and the server.
Git 2.18-rc2 (the client) ought to have been more defensive, noticing
the incomplete history. The greater error is in JGit (the server),
which neglects to send the objects requested.
When serving protocol v0 requests, JGit sends the correct objects by
taking unshallowCommits into account when generating the pack to send
to the client. Do the same in the protocol v2 code path. I forgot to
do this in v5.0.0.201806050710-rc3~6 (Teach UploadPack shallow fetch
in protocol v2, 2018-03-15).
Reported-by: Russ Cox <rsc@golang.org>
Change-Id: I282b45f47616a641b9e8d6210b4a070d3efdbb9b
Signed-off-by: Jonathan Nieder <jrn@google.com>
When SshSupport.runSshCommand fails because the executed external ssh
command failed, throw a CommandFailedException.
If discovery of the LFS server fails due to failure of the
git-lfs-authenticate command, chain the CommandFailedException to the
LfsConfigInvalidException in order to allow root cause analysis in the
application using it.
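Roughly, as in this sketch (both exception types and the command
wrapper are illustrative stubs standing in for the JGit classes; the
constructors here are assumptions):

class CommandFailedException extends Exception {
  CommandFailedException(String msg) { super(msg); }
}

class LfsConfigInvalidException extends Exception {
  LfsConfigInvalidException(String msg, Throwable cause) {
    super(msg, cause);
  }
}

class LfsDiscovery {
  static String runGitLfsAuthenticate() throws CommandFailedException {
    // hypothetical wrapper around the external ssh command
    throw new CommandFailedException("git-lfs-authenticate failed");
  }

  static String discoverLfsServer() throws LfsConfigInvalidException {
    try {
      return runGitLfsAuthenticate();
    } catch (CommandFailedException e) {
      // Chain the cause so the application can do root-cause analysis.
      throw new LfsConfigInvalidException("LFS server discovery failed", e);
    }
  }
}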
Change-Id: I2f9ea2be11274549f6d845937164c248b3d840b2
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
JGit now considers lightweight tags only if the --tags option is set,
i.e. `git.describe().setAllTags(true)` has to be used. The default now
matches C Git: only annotated tags are evaluated unless
setAllTags(true), the equivalent of --tags (or --all), is passed.
Hint: this (still) doesn't address any difference between C Git's
`--all` and `!--all --tags` behavior; perhaps this might be a
follow-up request.
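Usage, as implied above:

import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.lib.Repository;

class DescribeUsage {
  static String describe(Repository repo, boolean lightweightToo)
      throws Exception {
    try (Git git = new Git(repo)) {
      // setAllTags(true) is the equivalent of `git describe --tags`;
      // the default (false) now matches C Git and considers only
      // annotated tags.
      return git.describe().setAllTags(lightweightToo).call();
    }
  }
}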
Bug: 423206
Change-Id: I9a3699756df0b9c6a7c74a7e8887dea0df17c8e7
Signed-off-by: Marcel Trautwein <me+eclipse@childno.de>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* URIish seems to have a tiny quirk (bug?): the path of the URI
starts with a '/' only if the URI has a port set.
* GitHub does not return SSH authorization on a single line as Gerrit
does - need to account for that.
* Increase the SSH git-lfs-authenticate timeout, as GitHub sometimes
responds slower than expected.
* Guard against NPE in case the download action does not contain any
additional headers.
Change-Id: Icd1ead3d015479fd4b8bbd42ed42129b0abfb95c
Signed-off-by: Markus Duft <markus.duft@ssi-schaefer.com>
Change-Id: Ib4ebc57236bdea663f27295764886413e2550580
Signed-off-by: Michael Keppler <Michael.Keppler@gmx.de>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Jsch checks only for the availability of the algorithms given by the
Jsch-internal config keys "CheckCiphers", "CheckKexes", and
"CheckSignatures". If the ssh config defines any algorithms unknown
to Jsch that are not listed in those keys, it'll still propose them
during the negotiation phase, and run into an NPE later on if the
server happens to propose such an algorithm and it gets chosen.
Jsch reads "CheckCiphers" and the other values from either a
session-local config or the global static Jsch config; it bypasses
~/.ssh/config for these values.
Therefore, copy these values from the config as read from
~/.ssh/config into the session-specific config. That makes Jsch
check _all_ configured algorithms up front, discarding any for
which it has no implementation. Thus it proposes only algorithms
it actually can handle.
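Sketch of the copy (the host-entry accessor is illustrative;
Session#setConfig is real JSch API, and the mapping of ssh config keys
to the Check* keys is an assumption):

import com.jcraft.jsch.Session;

class AlgorithmChecks {
  // Hypothetical stand-in for the parsed ~/.ssh/config host entry.
  interface HostEntry {
    String getValue(String key);
  }

  static void copyAlgorithmChecks(HostEntry host, Session session) {
    copy(host, session, "Ciphers", "CheckCiphers");
    copy(host, session, "KexAlgorithms", "CheckKexes");
    copy(host, session, "HostKeyAlgorithms", "CheckSignatures");
  }

  static void copy(HostEntry host, Session session, String sshKey,
      String jschCheckKey) {
    String value = host.getValue(sshKey);
    if (value != null && !value.isEmpty()) {
      // Jsch now verifies each listed algorithm up front and drops
      // those it cannot implement, instead of proposing them blindly.
      session.setConfig(jschCheckKey, value);
    }
  }
}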
Bug: 535672
Change-Id: I6a68e54f4d9a3267e895c536bcf3c58099826ad5
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
From the javadoc for Files.list:
"The returned stream encapsulates a DirectoryStream. If timely disposal
of file system resources is required, the try-with-resources construct
should be used to ensure that the stream's close method is invoked
after the stream operations are completed."
This is the only call to Files#newDirectoryStream that is not already in
a try-with-resources.
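The pattern that javadoc recommends:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

class ListDir {
  static long entryCount(Path dir) throws IOException {
    // try-with-resources closes the underlying DirectoryStream promptly.
    try (Stream<Path> entries = Files.list(dir)) {
      return entries.count();
    }
  }
}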
Change-Id: I91e6c56b5d74e8435457ad6ed9e6b4b24d2aa14e
(cherry picked from commit 1c16ea4601)
Synchronize on a simple Object monitor instead of using ReentrantLock
Change-Id: I897020ab35786336b51b0fef76ea6071aff8aefa
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>