It is not obvious why this return statement is needed. Clarify with a
comment that otherwise an endless loop may show up when recent versions
of Jetty are used.
Change-Id: I8e5d4de51869fb1179bf599bfb81bcd7d745874b
Signed-off-by: David Ostrovsky <david@ostrovsky.org>
* stable-4.8:
Fix ObjectUploadListener#close
Fix error handling in FileLfsServlet
ObjectDownloadListener#onWritePossible: Make code spec compatible
ObjectDownloadListener: Return from onWritePossible when data is written
Fix IOException when LockToken#close fails
Change-Id: Id8eb635094336567d9f3c28ec985cd5127d31632
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
* stable-4.7:
Fix ObjectUploadListener#close
Fix error handling in FileLfsServlet
ObjectDownloadListener#onWritePossible: Make code spec compatible
ObjectDownloadListener: Return from onWritePossible when data is written
Fix IOException when LockToken#close fails
Change-Id: Iad9836811be034cf992ea25dad4409addba75115
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Do not try to set response status if response is already committed.
Change-Id: I9a7c2871c86eb53416b905324775f3ed961c8ae6
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Check in the #sendError method if the response was already committed.
If yes, we cannot set the response status or send an error message; the
last resort is to close the output stream.
If the response wasn't yet committed, first reset the response before
using a writer to send the error message to the client, since mixing
STREAM and WRITE mode (mixing asynchronous and blocking I/O) is illegal
in Servlet 3.1.
See the following bugs in the Gerrit and Jetty issue trackers:
https://bugs.chromium.org/p/gerrit/issues/detail?id=9667
https://bugs.chromium.org/p/gerrit/issues/detail?id=9721
https://github.com/eclipse/jetty.project/issues/2911
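A minimal sketch of the committed-response check described above
(names and structure are illustrative, not the actual FileLfsServlet
code):

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServletResponse;

// Sketch only; the real FileLfsServlet code may differ.
static void sendError(HttpServletResponse rsp, int status, String msg)
        throws IOException {
    if (rsp.isCommitted()) {
        // Status and headers can no longer be changed; closing the
        // output stream is the only remaining option.
        rsp.getOutputStream().close();
        return;
    }
    // reset() also clears the STREAM/WRITE mode state, so using a
    // writer afterwards doesn't mix async and blocking I/O.
    rsp.reset();
    rsp.setStatus(status);
    PrintWriter w = rsp.getWriter();
    w.print(msg);
    w.flush();
    w.close();
}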
Change-Id: Ie35563c2e0ac1c5e918185a746622589a880dc7f
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
The current code violates the ServletOutputStream contract: for every
out.isReady() == true, either write or close of that
ServletOutputStream should be called.
See also this issue upstream for more context: [1].
[1] https://github.com/eclipse/jetty.project/issues/2911
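A sketch of an onWritePossible implementation honoring this contract,
assuming a heap ByteBuffer buffer, a ReadableByteChannel in, a
ServletOutputStream out and an AsyncContext context (not the actual
ObjectDownloadListener members):

// For every isReady() == true, either write or close is called.
@Override
public void onWritePossible() throws IOException {
    while (out.isReady()) {
        buffer.clear();
        if (in.read(buffer) < 0) {
            in.close();
            out.close();       // EOF: close instead of writing
            context.complete();
            return;
        }
        buffer.flip();
        out.write(buffer.array(), buffer.position(), buffer.remaining());
    }
    // isReady() returned false: the container calls onWritePossible
    // again once the stream becomes writable.
}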
Change-Id: Ied575f3603a6be0d2dafc6c3329d685fc212c7a3
Signed-off-by: David Ostrovsky <david@ostrovsky.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When the buffer was written, not only call AsyncContext#complete() but
also return from ObjectDownloadListener#onWritePossible(). This avoids
an endless loop after upgrading from the Jetty 9.3.x to the 9.4.x line.
In the Jetty example implementation [1], the return statement is also
used:
// If we are at EOF then complete
if (len < 0)
{
    async.complete();
    return;
}
See also this issue upstream: [2].
[1] https://webtide.com/servlet-3-1-async-io-and-jetty
[2] https://github.com/eclipse/jetty.project/issues/2911
Change-Id: Iac73fb25e67d40228a378a8e34103f1d28b72a76
Signed-off-by: David Ostrovsky <david@ostrovsky.org>
This happened if the LockToken's hard link was already deleted earlier.
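A minimal sketch of a close() tolerating the missing link, assuming a
Path field link and an slf4j-style LOG (not the actual LockToken code):

@Override
public void close() {
    try {
        // deleteIfExists is a no-op if the link is already gone.
        Files.deleteIfExists(link);
    } catch (IOException e) {
        LOG.error("Unable to delete link " + link, e);
    }
}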
Bug: 531759
Change-Id: Idc84bd695fac1a763b3cbb797c9c4c636a16e329
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* stable-4.8:
Fix NoSuchFileException during directory cleanup in RefDirectory
Externalize warning message in RefDirectory.delete()
Suppress warning for trying to delete non-empty directory
Change-Id: I5e6cc35f3673545e7ff857e6ed0bcd2c44e50316
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
* stable-4.7:
Fix NoSuchFileException during directory cleanup in RefDirectory
Externalize warning message in RefDirectory.delete()
Suppress warning for trying to delete non-empty directory
Change-Id: I9ec6352b5ff57aa1a3380079dc9165890cc76d49
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Change-Id: Icec16c01853a3f5ea016d454b3d48624498efcce
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
(cherry picked from commit 5e68fe245f)
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
This is actually a fairly common occurrence; deleting the parent
directories can work only if the file deleted was the last one
in the directory.
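A sketch of cleanup code tolerating both cases (illustrative, not the
actual RefDirectory code):

import java.io.IOException;
import java.nio.file.DirectoryNotEmptyException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

// Delete an empty parent directory; stop quietly on expected races.
static void deleteEmptyParent(Path dir) {
    try {
        Files.delete(dir);
    } catch (DirectoryNotEmptyException | NoSuchFileException e) {
        // Expected: another file remains, or a concurrent cleanup
        // already removed the directory.
    } catch (IOException e) {
        // Unexpected; would be reported by the caller.
    }
}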
Bug: 537872
Change-Id: I86d1d45e1e2631332025ff24af8dfd46c9725711
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
(cherry picked from commit d9e767b431)
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
FS_POSIX.createNewFile(File) failed to properly implement atomic file
creation on NFS using the algorithm [1]:
- the name of the hard link must be unique to prevent two processes
using different NFS clients from trying to create the same link. This
would render nlink useless for detecting a race.
- the hard link must be retained for the lifetime of the file since we
don't know when the state of the involved NFS clients will be
synchronized. This depends on NFS configuration options.
To fix these issues we need to change the signature of createNewFile,
which would break the API. Hence deprecate the old method
FS.createNewFile(File) and add a new method createNewFileAtomic(File).
The new method returns a LockToken which needs to be retained by the
caller (LockFile) until all involved NFS clients synchronized their
state. Since we don't know when the NFS caches are synchronized we need
to retain the token until the corresponding file is no longer needed.
The LockToken must be closed after the LockFile using it has been
committed or unlocked. On POSIX, if core.supportsAtomicCreateNewFile =
false, this will delete the hard link which guarded the atomic creation
of the file. When acquiring the lock fails, ensure that the hard link
is removed.
[1] https://www.time-travellers.org/shane/papers/NFS_considered_harmful.html
also see file creation flag O_EXCL in
http://man7.org/linux/man-pages/man2/open.2.html
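A minimal sketch of this hard-link technique (hypothetical helper; the
actual FS_POSIX/LockFile code differs in detail):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

// Create a file atomically on NFS: link it under a unique name and
// verify via nlink that no other client raced us. Sketch only.
static Path createNewFileAtomic(Path file) throws IOException {
    Files.createFile(file);
    // Unique link name so two NFS clients never create the same link.
    Path link = file.resolveSibling(
            file.getFileName() + ".lnk-" + UUID.randomUUID());
    Files.createLink(link, file);
    int nlink = (Integer) Files.getAttribute(file, "unix:nlink");
    if (nlink != 2) {
        Files.delete(link); // lost the race; remove our hard link
        throw new IOException("atomic creation failed, nlink=" + nlink);
    }
    // The returned link acts as the LockToken: the caller retains it
    // until the lock file is committed or unlocked, then deletes it.
    return link;
}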
Change-Id: I84fcb16143a5f877e9b08c6ee0ff8fa4ea68a90d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When core.supportsAtomicCreateNewFile was set to false and the
repository was located on a filesystem which doesn't support the file
attribute "unix:nlink", FS_POSIX#createNewFile could report an error
even if everything was ok. Modify FS_POSIX#createNewFile to silently
ignore this situation. An example of such a filesystem is sshfs, where
reading "unix:nlink" always returns 1 (instead of throwing an
exception).
Bug: 537969
Change-Id: I6deda7672fa7945efa8706ea1cd652272604ff19
Also-by: Thomas Wolf <thomas.wolf@paranor.ch>
I88304d34c and Ia555bce00 modified the way errors are handled when
trying to delete non-empty reference folders. Before, this error was
silently ignored as it was considered expected output. Now, every
failed folder delete is logged, which can be noisy.
Ignore the DirectoryNotEmptyException, but log any other error that
prevents deletion of an eligible folder.
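A sketch of the intended error handling, assuming a refFolder File and
an slf4j-style LOG (not the actual RefDirectory code):

try {
    Files.delete(refFolder.toPath());
} catch (DirectoryNotEmptyException e) {
    // Expected when the folder still contains other refs; ignore.
} catch (IOException e) {
    LOG.warn("Unable to delete " + refFolder, e);
}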
Signed-off-by: Hector Oswaldo Caballero <hector.caballero@ericsson.com>
Change-Id: I194512f67885231d62c03976ae683e5cc450ec7c
* stable-4.8:
Bazel: Use hyphen instead of underscore in external repository names
Bazel: Format all build files with buildifier 0.15.0
ChangeIdUtilTest: Remove unused notestCommitDashV
Change-Id: I17436237cd66ca1c2800ad5ab0142f4a2bc07328
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
* stable-4.7:
Bazel: Use hyphen instead of underscore in external repository names
Bazel: Format all build files with buildifier 0.15.0
ChangeIdUtilTest: Remove unused notestCommitDashV
Change-Id: I414ade902dc38b696c566dd604000e3d289f3973
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Recent Bazel versions support the hyphen character in external
repository names. On the Gerrit project, the repository names
were harmonized to consistently use hyphen.
As a side effect, it is no longer possible to build jgit from source
in the gerrit tree, due to the different repository names.
Rename the dependencies to use hyphens, consistent with gerrit.
Change-Id: Ideebd858ddd3f0e6f765643001642dfb6c12441f
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
This test was never being run. Since it was introduced it has been
named "notest..", which meant it didn't run with JUnit3; since it is
not annotated with @Test, it also doesn't run with JUnit4.
When compiling with Bazel 0.6.0, error-prone raises an error
that the public method is not annotated with @Ignore or @Test.
Given that the test has never been run anyway, we can just
remove it.
Bug: 525415
Change-Id: Ie9a54f89fe42e0c201f547ff54ff1d419ce37864
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
The following commits in stable-4.5 and stable-4.9 introduced some
minor API additions in service releases.
f7ceeaa2 FileRepository: Add pack-based inserter implementation
085d1f95 Make PackInserter public
10e65cb4 Fix LockFile semantics when running on NFS
Change-Id: I4afed7e0395cf93d828e671080e3ec9ddf20987d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
The S-repository doesn't exist anymore, and we should have updated to
the final Photon Orbit repository already.
Change-Id: I6d1757833be4abd3643677d6e7814363533e88b2
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This is actually a fairly common occurrence; deleting the parent
directories can work only if the file deleted was the last one
in the directory.
Bug: 537872
Change-Id: I86d1d45e1e2631332025ff24af8dfd46c9725711
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
If packed refs are used, duplicate updates result in an exception
because JGit tries to lock the same lock file twice. With non-atomic
ref updates, this used to work, since the same ref would simply be
locked and updated twice in succession.
Let's be more lenient in this case and remove duplicates before
trying to do the ref updates. Silently skip duplicate updates
for the same ref, if they both would update the ref to the same
object ID. (If they don't, behavior is undefined anyway, and we
still throw an exception.)
Add a test that results in a duplicate ref update for a tag.
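A sketch of the deduplication step (ReceiveCommand is the real JGit
type; the surrounding structure is illustrative):

import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;
import org.eclipse.jgit.transport.ReceiveCommand;

// Drop duplicate updates targeting the same ref and object ID;
// conflicting duplicates still fail. Sketch only.
static Collection<ReceiveCommand> dedupe(Collection<ReceiveCommand> cmds) {
    Map<String, ReceiveCommand> byRef = new LinkedHashMap<>();
    for (ReceiveCommand cmd : cmds) {
        ReceiveCommand prev = byRef.putIfAbsent(cmd.getRefName(), cmd);
        if (prev != null && !prev.getNewId().equals(cmd.getNewId())) {
            throw new IllegalArgumentException(
                    "conflicting updates for " + cmd.getRefName());
        }
    }
    return byRef.values();
}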
Bug: 529400
Change-Id: Ide97f20b219646ac24c22e28de0c194a29cb62a5
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Bug: 529314
Change-Id: I91eaeda8a988d4786908fba6de00478cfc47a2a2
Signed-off-by: Marc Strapetz <marc.strapetz@syntevo.com>
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Change-Id: I2442394fb7eae5b3715779555477dd27b274ee83
Signed-off-by: Marc Strapetz <marc.strapetz@syntevo.com>
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
* stable-4.8:
Fix GC run in foreground to not use executor
Change-Id: Id9d864a8e727fefa35ca87eccb4e3801eb689c3c
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* stable-4.7:
Fix GC run in foreground to not use executor
Change-Id: Ib150d132e2ce055d36ddffb2dbc37b5cb355e77a
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Since I3870cadb4, the GC task was always delegated to an executor even
when the background option was set to false. This was an issue because,
if more than one GC object was instantiated and executed in parallel,
only one GC was actually running, due to the single-thread executor.
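A sketch of the fix in generic form (the actual GC code differs):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

// Only delegate to the single-thread executor when background
// execution was requested; otherwise run on the caller's thread.
static <T> Future<T> runGc(Callable<T> task, boolean background,
        ExecutorService executor) {
    if (background) {
        return executor.submit(task);
    }
    FutureTask<T> f = new FutureTask<>(task);
    f.run(); // executes inline, so parallel GCs are not serialized
    return f;
}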
Change-Id: I8c587d22d63c1601b7d75914692644a385cd86d6
Signed-off-by: Hugo Arès <hugo.ares@ericsson.com>