Do not reload packfiles when their associated FileSnapshot is not
modified on disk compared to the one currently stored in memory.
This fixes a regression introduced by fef78212 which, in conjunction
with core.trustFolderStat = false, caused any lookup of an object that
was not present in the pack list to loop forever.
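A minimal sketch of the guard, with hypothetical names (reuseOrReload,
getFileSnapshot, the PackFile constructor):

    // Reuse the in-memory pack while its snapshot still matches the
    // on-disk state; only a real change triggers a reload, so a missing
    // object can no longer cause an endless rescan of the same list.
    PackFile reuseOrReload(PackFile old, File packFile) {
        if (old != null && !old.getFileSnapshot().isModified(packFile))
            return old;
        return new PackFile(packFile); // hypothetical constructor
    }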
Bug: 546190
Change-Id: I38d752ebe47cefc3299740aeba319a2641f19391
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When reading from a packfile, make sure that it is valid
and has a non-null file descriptor.
Because of concurrency between a thread invalidating a packfile
and another trying to read it, the read() may result in an NPE
from which it cannot automatically recover.
Throwing a PackInvalidException instead causes the pack list
to be refreshed and the read to eventually succeed.
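A minimal sketch of the check, with hypothetical field names
(invalid, fd):

    private void checkValid() throws PackInvalidException {
        // A pack invalidated by a concurrent thread must not be read
        // any further; this exception triggers a pack list refresh and
        // a retry instead of an unrecoverable NPE.
        if (invalid || fd == null)
            throw new PackInvalidException(packFile);
    }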
Bug: 544199
Change-Id: I27788b3db759d93ec3212de35c0094ecaafc2434
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
When a packfile is invalid, throw an exception explicitly
outside any catch scope, so that it is not accidentally caught
by the generic catch-all clause, which would set the packfile
as valid again.
Flagging an invalid packfile as valid again would have
dangerous consequences, such as the corruption of the in-memory
pack list.
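The shape of the fix, sketched with hypothetical helper names:

    ObjectLoader read(long pos) throws IOException {
        if (invalid) {
            // Thrown before entering the try below, i.e. outside any
            // catch scope, so the catch-all can never swallow it and
            // flip the pack back to valid.
            throw new PackInvalidException(packFile);
        }
        try {
            return readObjectAt(pos); // hypothetical helper
        } catch (IOException e) {
            markInvalid(); // hypothetical: flags the pack invalid
            throw e;
        }
    }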
Bug: 544199
Change-Id: If7a3188a68d7985776b509d636d5ddf432bec798
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Do not redundantly call File.lastModified() to extract the timestamp
of the PackFile; rather, consistently use the FileSnapshot, which
reads all file attributes in a single bulk call.
Change-Id: I932675ae4fe56dcd3833dac249816f097303bb09
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Due to finite filesystem timestamp resolution the last modified
timestamp of files cannot detect file changes which happened in the
immediate past (less than one filesystem timer tick ago).
Also read and consider the file size, so that a differing size can help
detect file changes more accurately without reading the file content.
Use bulk read to avoid multiple stat calls to retrieve file attributes.
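A sketch of the bulk read using plain java.nio; one stat call yields
both attributes the snapshot compares (old* names are illustrative):

    import java.nio.file.Files;
    import java.nio.file.attribute.BasicFileAttributes;

    BasicFileAttributes attrs =
            Files.readAttributes(path, BasicFileAttributes.class);
    long lastModified = attrs.lastModifiedTime().toMillis();
    long size = attrs.size();
    // A rewrite within one timestamp tick is still detected when it
    // changes the file size.
    boolean modified = lastModified != oldLastModified || size != oldSize;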
Change-Id: I974288fff78ac78c52245d9218b5639603f67a46
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
The pack reload mechanism from the filesystem works only by name
and does not check the actual last modified date of the packfile.
This led to concurrency issues where multiple threads were loading
packfiles and removing them from each other's pack lists when one of
those packs was failing the checksum.
Rely on FileSnapshot rather than directly checking the lastModified
timestamp so that more checks can be performed.
Bug: 544199
Change-Id: I173328f29d9914007fd5eae3b4c07296ab292390
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
AdvertiseRefsHook is used to limit the visibility of the refs in Gerrit.
If this hook is not called, then all refs are treated as visible,
causing the server to serve commits reachable from branches the client
should not be able to access, if asked to via a request naming a guessed
object id.
This bug was introduced in v2.0.0.201206130900-r~123 (Modify refs in
UploadPack/ReceivePack using a hook interface, 2012-02-08). Stateful
bidirectional transports are not affected.
Fix it by moving the AdvertiseRefsHook call to
getAdvertisedOrDefaultRefs, ensuring the hook is called in all cases.
[jn: backported to stable-4.5 by splitting out tests and the protocol v2
specific parts]
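For embedders, the visibility filtering hinges on installing the hook; a
minimal usage sketch (myVisibilityHook is an assumed application object):

    UploadPack up = new UploadPack(repo);
    // After the fix this hook is consulted on every code path that
    // computes the advertised (or default) ref set, so refs it hides
    // can no longer be fetched by naming a guessed object id.
    up.setAdvertiseRefsHook(myVisibilityHook);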
Change-Id: I159f396216354f2eda3968d17802e166d8c8ec2d
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Signed-off-by: Jonathan Nieder <jrn@google.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
On a local non-NFS filesystem the .git/config file will be orphaned if
it is replaced by a new process while the current process is reading the
old file. The current process successfully continues to read the
orphaned file until it closes the file handle.
Since NFS servers do not keep track of open files, instead of orphaning
the old .git/config file, such a replacement on an NFS filesystem will
instead cause the old file to be garbage collected (deleted). A stale
file handle exception will be raised on NFS clients if the file is
garbage collected (deleted) on the server while it is being read. Since
we no longer have access to the old file in these cases, the previous
code would just fail. However, in these cases, reopening the file and
rereading it will succeed (since it will open the new replacement file).
Since retrying the read is a viable strategy to deal with stale file
handles on the .git/config file, implement such a strategy.
Since it is possible that the .git/config file could be replaced again
while rereading it, loop on stale file handle exceptions, up to 5 extra
times, trying to read the .git/config file again, until we either read
the new file, or find that the file no longer exists. The limit of 5 is
arbitrary, and provides a safe upper bound to prevent infinite loops
consuming resources in a potential unforeseen persistent error
condition.
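A sketch of the retry loop, with hypothetical helpers (readConfigFile,
isStaleFileHandle):

    Config load() throws IOException {
        IOException lastStale = null;
        for (int retries = 0; retries <= 5; retries++) {
            try {
                return readConfigFile(); // reopens the file each time
            } catch (IOException e) {
                if (!isStaleFileHandle(e))
                    throw e;
                lastStale = e; // file replaced again: retry
            }
        }
        throw lastStale; // persistent staleness: give up
    }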
Change-Id: I6901157b9dfdbd3013360ebe3eb40af147a8c626
Signed-off-by: Nasser Grainawi <nasser@codeaurora.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When running on NFS there was a chance that JGit's LockFile
semantics were broken because File#createNewFile() may allow
multiple clients to create the same file in parallel. This
change provides a fix which is only used when the new config
option core.supportsAtomicCreateNewFile is set to false. The
default for this option is true. This option can only be set in the
global or the system config file; the repository config file is not
taken into account in this case.
If the config option core.supportsAtomicCreateNewFile is true
then File#createNewFile() is trusted and the behaviour doesn't
change.
But if core.supportsAtomicCreateNewFile is set to false, then after
successful creation of the lock file a hard link to that lock file is
created and the nlink attribute of the lock file is checked to be 2. If
multiple clients managed to create the same lock file, nlink would be
greater than 2, revealing the collision.
This expensive workaround is described in
https://www.time-travellers.org/shane/papers/NFS_considered_harmful.html,
section III.d) "Exclusive File Creation".
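A sketch of the nlink probe using java.nio; names are illustrative, and
the real fix would use a link name unique to each client:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // After createNewFile() appeared to succeed, verify we really won.
    Path lock = lockFile.toPath();
    Path link = lock.resolveSibling(lock.getFileName() + ".lnk");
    Files.createLink(link, lock); // hard link raises nlink to 2
    int nlink = (Integer) Files.getAttribute(lock, "unix:nlink");
    Files.delete(link);
    if (nlink != 2)
        // another NFS client created the same lock concurrently
        throw new IOException("lock collision on " + lock);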
Change-Id: I3d2cc48d8eb280d5f7039eb94da37804f903be6a
The list of packed refs was cached in RefDirectory based on the mtime
of the packed-refs file. This may fail on NFS when attributes are
cached. A cached mtime of the packed-refs file could cause JGit to
trust the cached content of this file and to overlook that the file was
modified.
Honor the config option trustFolderStat and always read the packed-refs
content if the option is false. By default this option is set to true
and this fix is not active.
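A minimal sketch of honoring the option; the surrounding fields
(snapshot, packedRefsFile, readPackedRefs) are hypothetical:

    boolean trustFolderStat = repo.getConfig().getBoolean(
            ConfigConstants.CONFIG_CORE_SECTION, "trustfolderstat", true);
    // With trustFolderStat=false never trust the cached mtime: re-read
    // the packed-refs content even when the snapshot looks unmodified.
    if (!trustFolderStat || snapshot.isModified(packedRefsFile))
        refs = readPackedRefs(); // hypothetical helper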
Change-Id: I2b65cfaa8f4aba2efbf8a5e865d3f09f927e2eec
When creating a new PackFile instance it is specified whether this pack
has an associated bitmap index file or not. This information is cached
and the public method getBitmapIndex() will always assume a bitmap index
file must exist if the cached data says so. But it may happen that the
packfiles are repacked during a gc in a different process, causing the
packfile, bitmap index and index file to be deleted. Since JGit still
has an open FileHandle on the packfile, this file is not really deleted
and can still be accessed. But the index and bitmap index files are
deleted.
Fix getBitmapIndex() to invalidate the cached packfile instance if such
a situation occurs.
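A sketch of the fix, with hypothetical helper names:

    PackBitmapIndex getBitmapIndex() throws IOException {
        if (invalid || !hasBitmapIndex)
            return null;
        try {
            return loadBitmapIndex(); // hypothetical loader
        } catch (FileNotFoundException e) {
            // A gc in another process removed the .bitmap and .idx
            // files while we still hold an open handle on the pack:
            // drop this cached instance so the pack list is rescanned.
            markInvalid();
            return null;
        }
    }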
This problem showed up when a Gerrit server was serving repositories
which were garbage collected with native git regularly. Fetch and
clone commands for certain repositories failed permanently after a
native git gc had deleted old bitmap index files.
Change-Id: I8e620bec74dd3f310ba42024f9a657062f868f0e
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Add NoPackSignatureException and UnsupportedPackVersionException to
explicitly mark permanent, unrecoverable problems with a pack.
Assume a problem with a pack is permanent only if we are sure the
exception signals a non-transient problem we can't recover from:
- AccessDeniedException: we lack permissions
- CorruptObjectException: we detected corruption
- EOFException: file ended unexpectedly
- NoPackSignatureException: pack has no pack signature
- NoSuchFileException: file has gone missing
- PackMismatchException: pack no longer matches its index
- UnpackException: unpacking failed
- UnsupportedPackIndexVersionException: unsupported pack index version
- UnsupportedPackVersionException: unsupported pack version
Do not attempt to handle Errors since they are thrown for serious
problems applications should not try to recover from.
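A sketch of the resulting classification over the exception types listed
above (the helper name is illustrative):

    // Only these types prove the pack itself is permanently bad; any
    // other exception is treated as potentially transient.
    private static boolean isPermanentError(Throwable t) {
        return t instanceof AccessDeniedException
                || t instanceof CorruptObjectException
                || t instanceof EOFException
                || t instanceof NoPackSignatureException
                || t instanceof NoSuchFileException
                || t instanceof PackMismatchException
                || t instanceof UnpackException
                || t instanceof UnsupportedPackIndexVersionException
                || t instanceof UnsupportedPackVersionException;
    }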
Change-Id: I2c416ce2b0e23255c4fb03a3f9a0ee237f7a484a
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
A packfile random file open operation may fail with a
FileNotFoundException even if the file exists, possibly
due to a temporary lack of resources.
Instead of managing the FileNotFoundException like any generic
IOException, it is best to rethrow the exception but prevent
the packfile from being flagged as invalid until it is actually
opened and read, successfully or unsuccessfully.
Bug: 514170
Change-Id: Ie37edba2df77052bceafc0b314fd1d487544bf35
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
The FileNotFoundException is typically raised in four conditions:
1. file doesn't exist
2. incompatible read vs. read/write open modes
3. filesystem locking
4. temporary lack of resources (e.g. too many open files)
Case 1 is already managed, case 2 would never happen as packs are not
overwritten, while for cases 3 and 4 it is worth logging the exception
and retrying to read the pack again.
Log transient errors using an exponential backoff strategy to avoid
flooding the logs with the same error if consecutive retries to access
the pack fail repeatedly.
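A sketch of the backoff, with illustrative field and constant names:

    // Log at most once per backoff window, doubling the window (up to
    // a cap) while the same transient open error keeps recurring.
    long now = System.currentTimeMillis();
    if (now - lastErrorLogMillis >= backoffMillis) {
        LOG.warn("cannot open pack " + packFile, e);
        lastErrorLogMillis = now;
        backoffMillis = Math.min(backoffMillis * 2, MAX_BACKOFF_MILLIS);
    }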
Bug: 513435
Change-Id: I03c6f6891de3c343d3d517092eaa75dba282c0cd
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When a repository is being GCed and a concurrent push is received, there
is the possibility of having a missing object. This is due to the fact
that after the list of objects to delete is built, there is a window of
time when an unreferenced and ready to delete object can be referenced
by the incoming push. In that case, the object would be deleted because
there is no way to know it is no longer unreferenced. This will leave
the repository in an inconsistent state and most of the operations fail
with a missing tree/object error.
Since the incoming push changes the last modified date of the now
referenced object, verify that the object is still a candidate for
deletion before actually performing the delete operation.
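A sketch of the added guard, with hypothetical types and fields; the
modification time recorded while building the delete list is compared
against the file just before unlinking:

    for (DeleteCandidate c : toDelete) { // hypothetical type
        if (c.file.lastModified() != c.modifiedWhenScanned)
            continue; // a concurrent push re-referenced it: keep it
        FileUtils.delete(c.file); // org.eclipse.jgit.util.FileUtils
    }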
Change-Id: Iadcb29b8eb24b0cb4bb9335b670443c138a60787
Signed-off-by: Hector Oswaldo Caballero <hector.caballero@ericsson.com>
Adding a space before the unit ('g', 'm', 'k') causes git to fail with
the error:
fatal: bad numeric config value
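For example, with an illustrative key:

    [core]
        packedGitLimit = 256m     # parsed fine
        # packedGitLimit = 256 m  # fatal: bad numeric config value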
Change-Id: I57f11d3a1cdcca4549858e773af1a2a80fc0369f
Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Earlier we tried to close the repository before removing it from the
cache, so close only reduced refcount but didn't close it.
Now that we no longer leak the usage count on purpose, and the usage
count is ignored anyway, there is no longer a need to run the removal
twice.
Change-Id: I8b62cec6d8a3e88c096d1f37a1f7f5a5066c90a0
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
If the repository close method was called twice (or more) for one open,
the usage count became negative and the repository was never evicted
from the cache, because the method checking whether a repository is
expired did not consider negative usage counts.
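A sketch of the corrected expiry check, with hypothetical field names:

    boolean isExpired() {
        // A count that went negative through over-closing must still
        // count as unused, otherwise the entry is cached forever.
        return useCount.get() <= 0
                && System.currentTimeMillis() - lastAccess > expireMillis;
    }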
Change-Id: I18a80c415c54c37d1b9def2b311ff2d0afa455ca
Signed-off-by: Hugo Arès <hugo.ares@ericsson.com>
There was a bug when carrying over flags from a merge commit to its
non-first parents. The first parent of a merge commit was handled
differently and correctly, but the non-first parents are handled by a
recursive algorithm. Flags should be copied from the root merge commit
to parent-2, to grandparent-2, ... up to the limit of STACK_DEPTH==500
parent levels. But the recursive algorithm was always copying only to
the direct parents of the merge commit and not the grand*-parents.
This is no problem when commits are handled in strict date order,
because copying only one level is enough if children are handled before
parents. But when commits are no longer separated by distinct, correct
dates (e.g. because all commits have the same date), it may happen that
a merge parent is handled before the merge commit, and when dealing
later with the merge commit one has to copy flags down more than one
level.
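A sketch of the corrected propagation, with simplified field access; the
recursion now continues past the direct parents, bounded by STACK_DEPTH:

    private static void carryFlags(RevCommit c, int carry, int depth) {
        if (depth >= STACK_DEPTH)
            return; // guard against pathologically deep histories
        for (RevCommit p : c.getParents()) {
            if ((p.flags & carry) == carry)
                continue; // this line of ancestors is already done
            p.flags |= carry;
            carryFlags(p, carry, depth + 1); // reach grand*-parents too
        }
    }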
Bug: 501211
Change-Id: I2d79a7cf1e3bce21a490905ccd9d5e502d7b8421
The Repository.close() method is used when reference counting and
expiration need to be honored. The
RepositoryCache.unregisterAndCloseRepository method should close the
repository unconditionally. This is also indicated by its javadoc.
Change-Id: I19392d1eaa17f27ae44b55eea49dcff05a52f298
BranchConfig treated this config property as a boolean, but git also
allows the values "preserve" and "interactive". Config property
pull.rebase also allows the same values.
Replace private enum PullCommand.PullRebaseMode by new public enum
BranchConfig.BranchRebaseMode and adapt all uses. Add a new setter to
PullCommand.
Note: PullCommand will treat "interactive" like "true", i.e., as a
non-interactive rebase. It is not clear how "interactive" should be
handled, but at least PullCommand won't balk at it.
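A usage sketch of the new setter (a fragment; the enclosing method is
assumed to handle GitAPIException):

    import org.eclipse.jgit.api.Git;
    import org.eclipse.jgit.lib.BranchConfig.BranchRebaseMode;

    try (Git git = Git.open(repoDir)) {
        git.pull()
                .setRebase(BranchRebaseMode.PRESERVE) // "preserve"
                .call();
    }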
Bug: 499482
Change-Id: I7309360f5662b2c2efa1bd8ea6f112c63cf064af
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Change-Id: I6691b454404dd4db3c690ecfc7515de765bc2ef7
Signed-off-by: Martin Goellnitz <m.goellnitz@outlook.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
- enhance FS.readPipe to throw an exception if the external command
fails, enabling the caller to handle the command failure
- reduce log level to warning if the system git config does not exist
- improve log message
Bug: 476639
Change-Id: I94ae3caec22150dde81f1ea8e1e665df55290d42
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This allows the same try/catch to handle parsing the command list,
push certificate and push options. Any errors will be caught and
handled by the same catch block, as the client is in the same state.
Change-Id: I13a66f9100e2dc8ca8f72cd701a5bd44d093ec84
Checking if the instance allows push options before returning the
collection or null is a bit overkill. Just return the collection
or return null.
Change-Id: Icdc3755194373966e5819284aeb9bfe8dd34de82
Some embeddings of JGit require creating a ReceivePack instance in a
process other than the one that handled the network socket with the
client. Similar to the PushCertificate, add a setter to allow the
option list to be supplied.
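A usage sketch for such an embedding (optionsFromFrontEnd is assumed to
have been parsed by the process that owned the socket):

    ReceivePack rp = new ReceivePack(repo);
    // Supplied out of band, like the push certificate.
    rp.setPushOptions(optionsFromFrontEnd);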
Change-Id: I303a30e54942ad067c79251eff8b53329c406628
Refactor all of the push option support code to allocate the list
immediately before parsing the options section off the stream.
Move option support down to ReceivePack instead of BaseReceivePack.
Push options are specific to the ReceivePack protocol and are not
likely to appear in the 4-year-old subscription proposal. These
changes are OK before JGit 4.5 ships, as no consumer should be relying
on these new APIs.
Change-Id: Ib07d18c877628aba07da07cd91875f918d509c49
Initialize pushOptions when we decide to use them, instead of when we
advertise them.
In the case of HTTP the advertisement is in a different network
request, hence in a different instance of the BaseReceivePack.
Change-Id: I094c60942e04de82cb6d8433c9cd43a46ffae332
Signed-off-by: Stefan Beller <sbeller@google.com>
Do not open an OBJ_TREE if the caller is expecting an OBJ_BLOB or
OBJ_COMMIT; instead throw IncorrectObjectTypeException. This better
matches the behavior of WindowCursor, the ObjectReader implementation
of the local file based object store.
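A usage sketch; callers now see the same failure mode as with the local
file based backend:

    import static org.eclipse.jgit.lib.Constants.OBJ_BLOB;

    // With a DFS reader, opening a tree while expecting a blob throws
    // IncorrectObjectTypeException instead of returning the tree.
    ObjectLoader ldr = reader.open(treeId, OBJ_BLOB);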
Change-Id: I3fb0e77f54895b123679a405e1b6ba5b95752ff0
DfsRefDatabase#compareAndPut had vague semantics for reference
matching. Because of this, an operation to make a symbolic
reference had been broken for some DFS implementations even if they
followed the contract of compareAndPut. The clarified semantics
require the implementations to satisfy the following:
* Matching references should both be symbolic references or both be
object ID references.
* If both are symbolic references, both should have the same target
name.
* If both are object ID references, both should have the same object
ID.
These semantics are defined based on
https://git.eclipse.org/r/#/c/77416/. Before this commit,
DfsRefDatabase couldn't see the target of symbolic references.
InMemoryRepository is changed to comply with the new semantics. This
semantics change can affect existing DFS implementations that only
check object IDs. This commit adds two tests that the previous
InMemoryRepository couldn't pass.
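A sketch of a matcher satisfying the clarified contract (the helper
name is illustrative; Objects is java.util.Objects):

    static boolean matches(Ref expected, Ref actual) {
        if (expected.isSymbolic() != actual.isSymbolic())
            return false; // mixed kinds never match
        if (expected.isSymbolic())
            return expected.getTarget().getName()
                    .equals(actual.getTarget().getName());
        return Objects.equals(expected.getObjectId(),
                actual.getObjectId());
    }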
Change-Id: I6c6b5d3cc8241a81f4a37782381c88e8a59fdf15
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
When doing a detaching operation, JGit fakes a SymbolicRef as an
ObjectIdRef. This is because RefUpdate#updateImpl dereferences the
SymbolicRef when updating it. For example, assume that HEAD is
pointing to refs/heads/master. If I try to make a detached HEAD
pointing to a commit c0ffee, RefUpdate dereferences HEAD as
refs/heads/master first and changes refs/heads/master to c0ffee. The
detach argument of RefDatabase#newUpdate avoids this dereference by
faking HEAD as ObjectIdRef.
This faking is problematic for the linking operation of
DfsRefDatabase. It does a compare-and-swap operation on every
reference change because of its distributed-systems nature. If a
SymbolicRef is faked as an ObjectIdRef, it thinks that there is a
racing change in the reference and rejects the update. Because of
this, DFS-based repositories cannot change the link target of symbolic
refs. This has not been a problem for file-based repositories because
they have file-lock based semantics instead of CAS-based ones.
The reference implementation, InMemoryRepository, is not affected
because it only compares ObjectIds.
When [1] introduced this faking code, there was no way for RefUpdate
to distinguish the detaching operation. When [2] fixed the detaching
operation, it introduced a detachingSymbolicRef flag. This commit uses
this flag to control whether it needs to dereference the symbolic refs
by calling Ref#getLeaf. The same flag is used in the reflog update
operation.
This commit does not affect any operation that succeeds currently. In
some DFS repository implementations, this fixes a ref linking
operation, which is currently failing.
[1]: 01b5392cdb
[2]: 3a86868c08
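A usage sketch; with the flag honored, the first update detaches HEAD
while the second relinks it:

    // detach == true: update HEAD itself rather than its current leaf
    RefUpdate detach = repo.getRefDatabase().newUpdate(Constants.HEAD, true);
    detach.setNewObjectId(commitId);
    detach.forceUpdate();

    // relinking HEAD to a branch is a symbolic-ref update
    RefUpdate link = repo.getRefDatabase().newUpdate(Constants.HEAD, false);
    link.link("refs/heads/topic");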
Change-Id: I118f85f0414dbfad02250944e28d74dddd59469b
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Imitate the packet tracing feature from C Git v1.7.5-rc0~58^2~1 (add
packet tracing debug code, 2011-02-24). Unlike C Git, use the log4j
log level setting instead of the GIT_TRACE_PACKET environment variable
to enable tracing.
Tested as follows:
1. Enable tracing by adding the lines

     log4j.logger.org.eclipse.jgit.transport=DEBUG, stderr
     log4j.additivity.org.eclipse.jgit.transport=false

   to org.eclipse.jgit.pgm/resources/log4j.properties.
2. mvn package
3. org.eclipse.jgit.pgm/target/jgit \
     ls-remote git://git.kernel.org/pub/scm/git/git 2>&1 | less
Then the output provides a trace of packets sent and received over
the wire:
2016-08-24 16:36:42 DEBUG PacketLineOut:145 - git> git-upload-pack /pub/scm/git/git^@host=git.kernel.org^@
2016-08-24 16:36:42 DEBUG PacketLineIn:165 - git< 2632c897f74b1cc9b5533f467da459b9ec725538 HEAD^@multi_ack thin-pack side-band side-band-64k ofs-delta shallow no-progress include-tag multi_ack_detailed symref=HEAD:refs/heads/master agent=git/2.8.4
2016-08-24 16:36:42 DEBUG PacketLineIn:165 - git< e0c1ceafc5bece92d35773a75fff59497e1d9bd5 refs/heads/maint
Change-Id: I5028c064f3ac090510386057cb4e6d30d4eae232
Signed-off-by: Dan Wang <dwwang@google.com>