Since git-core ff5effd (v1.7.12.1) the native wire protocol transmits
the server and client implementation and version strings using a
capability such as "agent=git/1.7.12.1".
Support this in JGit and hang the implementation data off UploadPack
and ReceivePack. On HTTP transports, default to the User-Agent HTTP
header until the client overrides it with the optional capability
string in the first line.
Extract the user agent string into a UserAgent class under the
transport package, where applications can set a different value if
their build process has broken the Implementation-Version header in
the JGit package.
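A minimal sketch of overriding the advertised agent string, assuming
the UserAgent class exposes static get/set accessors as described
above (the replacement value is illustrative):

    import org.eclipse.jgit.transport.UserAgent;

    class AgentSetup {
        static void fixAgent() {
            // If the build mangled Implementation-Version, the
            // computed default may be missing; supply one manually.
            if (UserAgent.get() == null)
                UserAgent.set("JGit/4.0-custom");
        }
    }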
Change-Id: Icfc6524d84a787386d1786310b421b2f92ae9e65
A larger than expected number of real-world repositories found on
the Internet contain invalid author, committer and tagger lines
in their history. Many of these seem to be caused by users misusing
the user.name and user.email fields, e.g.:
[user]
    name = Au Thor <author@example.com>
    email = author@example.com
that some version of Git (or a reimplementation thereof) copied
directly into the object header. These headers are not valid and
are rejected by a strict fsck, making it impossible to transfer
the repository with JGit/EGit.
Another form is an invalid committer line with a double negative for
the time zone, e.g.:
    committer Au Thor <a@b> 1288373970 --700
The real world is messy. :(
Allow callers and users to weaken the fsck settings to accept these
sorts of breakages if they really want to work on a repo that has
broken history. Most routines will still function fine; however,
commit timestamp sorting in RevWalk may become confused by a corrupt
committer line and sort commits out of order. This is mostly fine if
the corrupted chain is shorter than the slop window.
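A minimal sketch of weakening the checks from application code,
assuming ObjectChecker's setAllowInvalidPersonIdent switch is the
knob in question:

    import org.eclipse.jgit.errors.CorruptObjectException;
    import org.eclipse.jgit.lib.ObjectChecker;

    class WeakFsck {
        /** Check a raw commit while tolerating broken ident lines. */
        static void check(byte[] raw) throws CorruptObjectException {
            ObjectChecker checker = new ObjectChecker()
                    .setAllowInvalidPersonIdent(true);
            checker.checkCommit(raw); // would otherwise reject the
                                      // examples shown above
        }
    }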
Change-Id: I6d529542c765c131de590f4f7ef8e7c1c8cb9db9
This error happens on NFS file systems when a file that was deleted
or replaced is read.
When the error happens because the file was deleted, removing it from
the list is the proper way to handle the error, the same use case as
FileNotFoundException. When the error happens because the file was
replaced, removing it from the list causes the file to be re-read, so
the latest version of the file is picked up.
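A sketch of the handling described above, assuming a helper like
FileUtils.isStaleFileHandle to classify the IOException; the list
handling is a simplified stand-in for the real pack-list code:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import org.eclipse.jgit.util.FileUtils;

    class StaleFileHandling {
        /** Read a file; on a stale NFS handle, evict it instead. */
        static byte[] readOrEvict(Path file, List<Path> packList)
                throws IOException {
            try {
                return Files.readAllBytes(file);
            } catch (IOException err) {
                if (FileUtils.isStaleFileHandle(err)) {
                    // Deleted or replaced on NFS: drop it; a later
                    // scan re-reads the latest version if replaced.
                    packList.remove(file);
                    return null;
                }
                throw err;
            }
        }
    }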
Bug: 462868
Change-Id: I368af61a6cf73706601a3e4df4ef24f0aa0465c5
Signed-off-by: Hugo Arès <hugo.ares@ericsson.com>
Pack not found and pack corrupted/invalid are handled by the code
(the pack is removed from the list), so logging an error and the
stack trace is misleading: it implies that there is an action to take
to fix the error.
Lower the log level to warn and remove the stack trace for those two
types of errors; keep the error log statement for any other.
Change-Id: I2400fe5fec07ac6d6c244b852cce615663774e6e
Signed-off-by: Hugo Arès <hugo.ares@ericsson.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* stable-3.7:
Prepare 3.7.2-SNAPSHOT builds
JGit v3.7.1.201504261725-r
Revert "Let ObjectWalk.markUninteresting also mark the root tree as"
Change-Id: If1b62ff695e063d797c3d13c43e488ca56f29cbe
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change Iff2de881 tried to fix "missing tree ..." errors but
introduced severe performance degradation (>10x in some cases) when
acting as a server (git push) and as a client (replication). IOW, the
cure is worse than the disease.
This reverts commit c4797fe986.
Change-Id: I4e6056eb352d51277867f857a0cab380eca153ac
Signed-off-by: David Ostrovsky <david@ostrovsky.org>
Cached packs are only used when writing over the network or to
a bundle file, and reuse validation is always disabled in these
two contexts. The client/consumer of the stream will be SHA-1
checksumming every object.
Reuse validation is most critical during local GC: to avoid silently
ignoring corruption, it stops as soon as a problem is found and
leaves everything alone for the end-user to debug and salvage.
Cached packs are not supported during local GC as the bitmap rebuild
logic does not support including a cached pack in the result.
Strip out the validation and force PackWriter to always disable the
cached pack feature if reuseValidation is enabled.
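A sketch of that guard; the type and method names are illustrative,
not PackWriter's actual internals:

    import java.util.Collection;
    import java.util.Collections;
    import java.util.List;

    class CachedPackGuard {
        /** Illustrative stand-in for a whole-pack reuse candidate. */
        interface CachedPack { }

        /** With validation on, skip the cached-pack fast path. */
        static Collection<CachedPack> select(
                boolean reuseValidate, List<CachedPack> found) {
            return reuseValidate
                    ? Collections.<CachedPack> emptyList()
                    : found;
        }
    }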
Change-Id: If0d7baf2ae1bf1f7e71bf773151302c9f7887039
This implements a sensible suggestion from Terry Parker, made as a
late comment on commit f2efcdc6f769d59722b17e9274932d585035cfb6.
Change-Id: I225775bfb6d3d91ae066ff00f9d80a9c02a422c2
This hint allows an underlying implementation to read more bytes when
possible and buffer them locally for future read calls to consume.
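A sketch of what such a hint can look like on a channel-style reader;
the interface and method names here are illustrative assumptions:

    import java.io.IOException;
    import java.nio.channels.ReadableByteChannel;

    /** A readable channel that accepts a read-ahead hint. */
    interface HintedReadableChannel extends ReadableByteChannel {
        /**
         * Recommend pre-fetching and locally buffering roughly this
         * many bytes so later read() calls avoid extra round trips
         * to slower storage. Implementations may ignore the hint.
         */
        void setReadAheadBytes(int bufferSize) throws IOException;
    }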
Change-Id: Ia986a1bb8640eecb91cfbd515c61fa1ff1574a6f
When a large pack (> 30% of the block cache) is being reused by
copying, it pollutes the block cache with noise by storing blocks
that are never referenced again.
Avoid this by streaming the file directly from its channel onto
the output stream.
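A minimal sketch of that streaming approach using plain NIO; the
pack path and buffer size are illustrative:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    class PackStreamer {
        /** Copy the pack straight from its channel to the stream. */
        static void copyPackTo(OutputStream out) throws IOException {
            try (FileChannel ch = FileChannel.open(
                    Paths.get("objects/pack/pack-1234abcd.pack"),
                    StandardOpenOption.READ)) {
                // A small private buffer instead of cache blocks.
                ByteBuffer buf = ByteBuffer.allocate(64 * 1024);
                for (int n = ch.read(buf); n != -1; n = ch.read(buf)) {
                    out.write(buf.array(), 0, n);
                    buf.clear();
                }
            }
        }
    }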
Change-Id: I2e53de27f3dcfb93de68b1fad45f75ab23e79fe7
In 6c1f739388 the AWT-based credentials provider was dropped because
we don't support Java 5 any longer, so we can always use the
ConsoleCredentialsProvider, which requires Java 6.
This broke debugging org.eclipse.jgit.pgm, since Eclipse doesn't
support using a system console authenticator [1].
[1] see https://bugs.eclipse.org/bugs/show_bug.cgi?id=148831
Change-Id: Iba71001a7762e73d6579ba9dfa5a08ddaba777ea
The clone or fetch depth is a valuable bit of information
for access logging. Create a public getter to facilitate access.
A precondition check prevents unintentional misuse when the
data isn't valid yet.
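A sketch of such a precondition-checked getter; the field names are
illustrative rather than the actual UploadPack members:

    class UploadState {
        private java.util.Set<String> options; // set while parsing
        private int depth; // requested shallow depth, 0 if none

        /** @return the client's requested clone or fetch depth. */
        public int getDepth() {
            if (options == null)
                throw new IllegalStateException(
                        "depth not available until the request is read");
            return depth;
        }
    }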
Change-Id: I4603d5fd3bd4a767e3e2419b0f2da3664cfbd7f8
Signed-off-by: David Pletcher <dpletcher@google.com>
JGit hit "IllegalArgumentException: invalid content length"
when pushing large packs to S3.
Bug: 463015
Change-Id: Iddf50d90c7e3ccb15b9ff71233338c6b204b3648
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>