If a process executed by FS#readPipe fails, the error stream was never
set as errorMessage because FS#GobblerThread#waitForProcessCompletion
always returned true. This caused LOG#warn to be called with null.
Return false whenever FS#GobblerThread#waitForProcessCompletion fails.
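For illustration, a minimal sketch of the intended behaviour; the class
below is a simplified stand-in for the private FS.GobblerThread, and its
fields and method signature are assumptions, not the actual implementation:
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    // Simplified stand-in for FS.GobblerThread: gobble the error stream
    // and report failure from waitForProcessCompletion instead of
    // swallowing it.
    class ErrorGobbler extends Thread {
        private final Process process;
        private volatile String errorMessage;
        private volatile IOException failure;

        ErrorGobbler(Process process) {
            this.process = process;
        }

        @Override
        public void run() {
            StringBuilder err = new StringBuilder();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(process.getErrorStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    err.append(line).append('\n');
                }
            } catch (IOException e) {
                failure = e;
            }
            if (err.length() > 0) {
                errorMessage = err.toString();
            }
        }

        String getErrorMessage() {
            return errorMessage;
        }

        // Return false whenever gobbling failed so that the caller
        // (readPipe) can propagate errorMessage instead of logging null.
        boolean waitForProcessCompletion() throws InterruptedException {
            join();
            return failure == null;
        }
    }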
Bug: 538723
Change-Id: Ic9492bd688431d52c8665f7a2efec2989e95a4ce
Signed-off-by: Cliffred van Velzen <cliffred@cliffred.nl>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: Icec16c01853a3f5ea016d454b3d48624498efcce
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
(cherry picked from commit 5e68fe245f)
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
This is actually a fairly common occurrence; deleting the parent
directories can work only if the file deleted was the last one
in the directory.
Bug: 537872
Change-Id: I86d1d45e1e2631332025ff24af8dfd46c9725711
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
(cherry picked from commit d9e767b431)
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
According to the String.replaceAll JavaDoc:
"Note that backslashes (\) and dollar signs ($) in the replacement
string may cause the results to be different than if it were being
treated as a literal replacement string; see Matcher.replaceAll. Use
java.util.regex.Matcher.quoteReplacement to suppress the special meaning
of these characters, if desired."
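For illustration, a small example (values are hypothetical) showing how
Matcher.quoteReplacement keeps such characters literal:
    import java.util.regex.Matcher;

    public class QuoteReplacementExample {
        public static void main(String[] args) {
            String replacement = "C:\\temp\\$dir"; // contains '\' and '$'
            // Unquoted, replaceAll would treat '\' and '$' specially;
            // quoted, the replacement is inserted literally.
            String result = "path=PLACEHOLDER".replaceAll("PLACEHOLDER",
                    Matcher.quoteReplacement(replacement));
            System.out.println(result); // path=C:\temp\$dir
        }
    }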
Bug: 536318
Change-Id: Ib70cfec41bf73e14d23d94d14aee05a25b1e87f6
Signed-off-by: Markus Duft <markus.duft@ssi-schaefer.com>
FS_POSIX.createNewFile(File) failed to properly implement atomic file
creation on NFS using the algorithm [1]:
- the name of the hard link must be unique to prevent two processes
using different NFS clients from trying to create the same link.
Otherwise nlink would be useless for detecting whether there was a race.
- the hard link must be retained for the lifetime of the file since we
don't know when the state of the involved NFS clients will be
synchronized. This depends on NFS configuration options.
To fix these issues we need to change the signature of createNewFile,
which would break the API. Hence deprecate the old method
FS.createNewFile(File) and add a new method createNewFileAtomic(File).
The new method returns a LockToken which needs to be retained by the
caller (LockFile) until all involved NFS clients have synchronized their
state. Since we don't know when the NFS caches are synchronized, we need
to retain the token until the corresponding file is no longer needed.
The LockToken must be closed after the LockFile using it has been
committed or unlocked. On Posix, if core.supportsAtomicCreateNewFile =
false this will delete the hard link which guarded the atomic creation
of the file. When acquiring the lock fails, ensure that the hard link
is removed.
[1] https://www.time-travellers.org/shane/papers/NFS_considered_harmful.html
also see file creation flag O_EXCL in
http://man7.org/linux/man-pages/man2/open.2.html
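A hedged usage sketch of the calling pattern described above, assuming
the new API is FS.createNewFileAtomic(File) returning a closeable
FS.LockToken with an isCreated() check; error handling is simplified:
    import java.io.File;
    import java.io.IOException;

    import org.eclipse.jgit.util.FS;

    class AtomicCreateExample {
        // Returns true if the lock file was created by us. The token must
        // be retained until the LockFile has been committed or unlocked,
        // because we don't know when the involved NFS clients synchronize
        // their state.
        static boolean lockAndWork(File lockFile) throws IOException {
            FS fs = FS.DETECTED;
            FS.LockToken token = fs.createNewFileAtomic(lockFile);
            try {
                if (!token.isCreated()) {
                    return false; // someone else holds the lock
                }
                // ... write the new content, then commit or unlock ...
                return true;
            } finally {
                // Closing the token removes the hard link that guarded the
                // atomic creation (core.supportsAtomicCreateNewFile=false).
                token.close();
            }
        }
    }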
Change-Id: I84fcb16143a5f877e9b08c6ee0ff8fa4ea68a90d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When core.supportsAtomicCreateNewFile is set to false and the
repository is located on a filesystem which doesn't support the file
attribute "unix:nlink", FS_POSIX#createNewFile may report an error even
if everything was ok. Modify FS_POSIX#createNewFile to silently ignore
this situation. An example of such a filesystem is sshfs, where reading
"unix:nlink" always returns 1 (instead of throwing an exception).
Bug: 537969
Change-Id: I6deda7672fa7945efa8706ea1cd652272604ff19
Also-by: Thomas Wolf <thomas.wolf@paranor.ch>
I88304d34c and Ia555bce00 modified the way errors are handled when
trying to delete non-empty reference folders. Before, this error was
silently ignored as it was considered an expected outcome. Now, every
failed folder deletion is logged, which can be noisy.
Ignore the DirectoryNotEmptyException but log any other error that
prevents deletion of an eligible folder.
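An illustrative sketch of the intended handling; method and logger names
are placeholders, not the actual RefDirectory code:
    import java.io.IOException;
    import java.nio.file.DirectoryNotEmptyException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    class RefDirCleanup {
        private static final Logger LOG =
                LoggerFactory.getLogger(RefDirCleanup.class);

        // Try to remove an (ideally empty) parent directory of a deleted
        // ref file.
        static void deleteEmptyRefDir(Path dir) {
            try {
                Files.delete(dir);
            } catch (DirectoryNotEmptyException e) {
                // Expected when other refs still live in this folder.
            } catch (IOException e) {
                LOG.warn("Unable to delete reference folder {}", dir, e);
            }
        }
    }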
Signed-off-by: Hector Oswaldo Caballero <hector.caballero@ericsson.com>
Change-Id: I194512f67885231d62c03976ae683e5cc450ec7c
In order to support GPG-signed commits, add some methods which will
allow GPG signatures to be parsed out of RevCommit objects.
Later, we can add code to verify the signatures.
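The accessors are added to RevCommit itself; purely to illustrate what
the parsing involves, here is a sketch that detects a "gpgsig" header in
the raw commit buffer (this is not the API added by this change):
    import java.nio.charset.StandardCharsets;

    import org.eclipse.jgit.revwalk.RevCommit;

    class GpgSignatureCheck {
        // Illustrative only: a GPG-signed commit carries a multi-line
        // "gpgsig" header before the blank line that ends the commit
        // headers. The commit must have been parsed with its body retained.
        static boolean isSigned(RevCommit commit) {
            String raw = new String(commit.getRawBuffer(),
                    StandardCharsets.UTF_8);
            int headerEnd = raw.indexOf("\n\n");
            int sig = raw.indexOf("\ngpgsig ");
            return sig >= 0 && headerEnd >= 0 && sig < headerEnd;
        }
    }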
Change-Id: Ifcf6b3ac79115c15d3ec4b4eaed07315534d09ac
Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
The client will stop sending haves when the number of haves sent
reaches maxhaves.
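A minimal sketch of that check with illustrative names; the real
negotiation loop in the fetch client is more involved:
    import java.util.List;

    import org.eclipse.jgit.lib.ObjectId;

    class MaxHavesExample {
        // Illustrative only: stop emitting "have" lines once maxHaves is
        // reached.
        static int sendHaves(List<ObjectId> candidates, int maxHaves,
                StringBuilder out) {
            int sent = 0;
            for (ObjectId id : candidates) {
                if (sent >= maxHaves) {
                    break;
                }
                out.append("have ").append(id.name()).append('\n');
                sent++;
            }
            return sent;
        }
    }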
Change-Id: I1e5b1525be4c67f20a81ca24a2770c20eb5c1271
Signed-off-by: Minh Thai <mthai@google.com>
The parsing code for protocol v2 fetch doesn't have any dependency on
the rest of UploadPack.
Move it to its own class. This makes testing easier (no need to
instantiate the full UploadPack), simplifies the code in UploadPack and
increases modularity.
At the moment, the parser needs to know about the reference database to
validate incoming references. This dependency could easily be removed
by moving the validation later in the flow, after parsing, where other
validations already happen. That is postponed here to keep this patch
about moving unmodified code around.
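A heavily simplified, hypothetical sketch of what such a standalone
parser looks like; class, field and method names are illustrative, not
the code being moved:
    import java.util.ArrayList;
    import java.util.List;

    // Simplified, hypothetical sketch of a standalone fetch-request
    // parser; the real class and its request object have more fields and
    // validation.
    class FetchRequestParser {
        static final class ParsedRequest {
            final List<String> wantIds = new ArrayList<>();
            int depth;
            boolean doneReceived;
        }

        static ParsedRequest parse(List<String> lines) {
            ParsedRequest req = new ParsedRequest();
            for (String line : lines) {
                if (line.startsWith("want ")) {
                    req.wantIds.add(line.substring("want ".length()));
                } else if (line.startsWith("deepen ")) {
                    req.depth = Integer.parseInt(
                            line.substring("deepen ".length()));
                } else if (line.equals("done")) {
                    req.doneReceived = true;
                }
                // ... other protocol v2 fetch lines (shallow, filter, ...)
            }
            return req;
        }
    }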
Change-Id: I7ad29a6b99caa7c12c06f5a7f30ab6a5f6e44dc7
Signed-off-by: Ivan Frade <ifrade@google.com>
This fetch parameter is called deepen-since in the protocol. Call it
the same thing in the request object to make the code easier to reason
about.
This doesn't touch UploadPack#shallowSince, which is likely to be
eliminated altogether in a later patch anyway.
Change-Id: I8ef34bc7ad12fae3a9057ae951367cc024e1a1cb
Suggested-by: Ivan Frade <ifrade@google.com>
Signed-off-by: Jonathan Nieder <jrn@google.com>
There is an extra 'd' in deependNotRefs. Noticed during code review.
Change-Id: I93d8d7951fe5c351b62e23bdf5bad0ebd631017d
Signed-off-by: Jonathan Nieder <jrn@google.com>
Make "doneReceived" a member of the fetch request. It indicates if the
"done" line has been received (so it makes sense there) and makes all
the code after the parsing depend only on the request.
Rename "shallowExcludeRefs" to "deepenNot". Those refs come in
"deepen-not" lines in the protocol, and this name makes clearer the
intention.
Change-Id: I7bec65de04930277266491d278de7c3af7d8cbe6
Signed-off-by: Ivan Frade <ifrade@google.com>
When this exception is thrown, the `depth` member variable isn't set
yet, resulting in a confusing error message: "Invalid depth: 0".
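A hedged sketch of the intended fix; the message text and helper method
are illustrative:
    import org.eclipse.jgit.errors.PackProtocolException;

    class DepthParseExample {
        // Report the value actually parsed from the "deepen" line rather
        // than a not-yet-assigned member, avoiding the misleading
        // "Invalid depth: 0".
        static int parseDepth(String deepenLine)
                throws PackProtocolException {
            int parsed = Integer.parseInt(
                    deepenLine.substring("deepen ".length()));
            if (parsed <= 0) {
                throw new PackProtocolException("Invalid depth: " + parsed);
            }
            return parsed;
        }
    }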
Change-Id: I8a2bd5e1d9bec00acb0b8857bbf6821e95bf1369
Signed-off-by: Ivan Frade <ifrade@google.com>
At the moment there are two copies of the client shallow commit list:
one in the request and another in the clientShallowCommits member of
the class.
The verifyShallowCommit function was removing missing object ids
from the member but not the request list, and code afterwards was
using the request's version.
In practice, this didn't cause trouble because these shallow commits
are used as endpoints for a walk, and missing ids are simply never
reached.
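An illustrative fragment of the idea, with hypothetical names, pruning
the one copy that later code actually reads:
    import java.util.Set;

    import org.eclipse.jgit.lib.ObjectId;

    class ShallowPruneExample {
        // Illustrative only: drop ids of missing objects from the single
        // authoritative copy (the request's list).
        static void pruneMissing(Set<ObjectId> clientShallowCommits,
                Set<ObjectId> existingObjects) {
            clientShallowCommits.removeIf(id -> !existingObjects.contains(id));
        }
    }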
Change-Id: I70a8f1fd46de135da09f16e5d954693c8438ffcb
Signed-off-by: Ivan Frade <ifrade@google.com>
This exception is thrown in GC.deleteTempPacksIdx() if the repository
has no packs.
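An illustrative sketch of tolerating the missing directory; the glob
pattern and method name are assumptions, not the actual GC code:
    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.NoSuchFileException;
    import java.nio.file.Path;

    class TempPackIdxCleanup {
        // A repository with no packs has no pack directory, so treat
        // NoSuchFileException as "nothing to clean up".
        static void deleteTempPackIndexes(Path packDir) throws IOException {
            try (DirectoryStream<Path> stream =
                    Files.newDirectoryStream(packDir, "gc_*_tmp")) {
                for (Path t : stream) {
                    Files.deleteIfExists(t);
                }
            } catch (NoSuchFileException e) {
                // No pack directory yet; nothing to do.
            }
        }
    }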
Bug: 538286
Change-Id: Ieb482be751226baf0843068a0f847e0cdc6e0cb6
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This makes it symmetrical with the ls-refs operation and gives the
instantiator of UploadPack the chance to run some code after parsing
the protocol and before any actual work for the fetch starts.
Request and Builder methods keep the naming in the original code to
make this change just about request encapsulation and hook invocation.
They are package-private for now to allow further improvements.
Change-Id: I5ad585c914d3a5f23b11c8251803faa224beffb4
Signed-off-by: Ivan Frade <ifrade@google.com>
The following commits in stable-4.5 and stable-4.9 introduced some
minor API additions in service releases.
f7ceeaa2 FileRepository: Add pack-based inserter implementation
085d1f95 Make PackInserter public
10e65cb4 Fix LockFile semantics when running on NFS
Change-Id: I4afed7e0395cf93d828e671080e3ec9ddf20987d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Code can check size instead of null, and that makes the initialization
trivial.
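For illustration (the field name is hypothetical):
    import java.util.Collections;
    import java.util.List;

    class NonNullListExample {
        // Initializing to an empty list lets callers test isEmpty() or
        // size() instead of checking for null.
        private List<String> wantedRefs = Collections.emptyList();

        boolean hasWantedRefs() {
            return !wantedRefs.isEmpty();
        }
    }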
Change-Id: Icbe655816429a7a680926b0e871d96f3b2f1f7ba
Signed-off-by: Ivan Frade <ifrade@google.com>
On recent VMs, collection.toArray(new T[0]) is faster than
collection.toArray(new T[collection.size()]). Since it is also more
readable, it should now be the preferred way of collection-to-array
conversion.
https://shipilev.net/blog/2016/arrays-wisdom-ancients/
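For example:
    import java.util.Arrays;
    import java.util.List;

    public class ToArrayExample {
        public static void main(String[] args) {
            List<String> names = Arrays.asList("a", "b", "c");
            // Preferred on recent JVMs: let toArray size the array itself.
            String[] array = names.toArray(new String[0]);
            System.out.println(array.length); // 3
        }
    }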
Change-Id: I80388532fb4b2b0663ee1fe8baa94f5df55c8442
Signed-off-by: Michael Keppler <Michael.Keppler@gmx.de>
This is actually a fairly common occurrence; deleting the parent
directories can work only if the file deleted was the last one
in the directory.
Bug: 537872
Change-Id: I86d1d45e1e2631332025ff24af8dfd46c9725711
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
If packed refs are used, duplicate updates result in an exception
because JGit tries to lock the same lock file twice. With non-atomic
ref updates, this used to work, since the same ref would simply be
locked and updated twice in succession.
Let's be more lenient in this case and remove duplicates before
trying to do the ref updates. Silently skip duplicate updates
for the same ref, if they both would update the ref to the same
object ID. (If they don't, behavior is undefined anyway, and we
still throw an exception.)
Add a test that results in a duplicate ref update for a tag.
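An illustrative sketch of such deduplication using ReceiveCommand; the
real fix may differ in detail:
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import org.eclipse.jgit.transport.ReceiveCommand;

    class DedupRefUpdates {
        // Illustrative only: drop exact duplicates (same ref, same new id)
        // before applying a batch update against packed refs; conflicting
        // duplicates are left in place and will still fail.
        static List<ReceiveCommand> dedup(List<ReceiveCommand> commands) {
            Map<String, ReceiveCommand> byRef = new HashMap<>();
            List<ReceiveCommand> result = new ArrayList<>();
            for (ReceiveCommand cmd : commands) {
                ReceiveCommand prior = byRef.get(cmd.getRefName());
                if (prior != null
                        && prior.getNewId().equals(cmd.getNewId())) {
                    continue; // silently skip the duplicate update
                }
                byRef.put(cmd.getRefName(), cmd);
                result.add(cmd);
            }
            return result;
        }
    }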
Bug: 529400
Change-Id: Ide97f20b219646ac24c22e28de0c194a29cb62a5
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Bug: 529314
Change-Id: I91eaeda8a988d4786908fba6de00478cfc47a2a2
Signed-off-by: Marc Strapetz <marc.strapetz@syntevo.com>
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>