Sun's Java 5, 6, and 7 implementations had a bug [1] where a Reference
could be enqueued and dequeued twice on the same reference queue due to
a race condition within ReferenceQueue.enqueue(Reference).
This bug was fixed in Java 8 [2], hence remove the workaround.
[1] http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6837858
[2] http://hg.openjdk.java.net/jdk8/jdk8/jdk/rev/858c75eb83b5
Change-Id: I2deeb607e3d237f9f825a207533acdee305c7e73
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Doing so goes through the TypedConfigGetter and thus allows library
clients (for instance EGit) to warn about invalid configurations.
Change-Id: If1080ad90b8aff54a903d4d75637614faad6469b
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
While parsing .gitmodules, the name of the submodule subsection is
purely arbitrary: it is frequently the path of the submodule, but
there's no requirement for it to be. By building a map of paths to
section names from .gitmodules, we can more accurately return
the submodule URL.
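As a rough illustration of the approach (assuming JGit's Config class
and a hypothetical pathToName map; error handling omitted):

  Config gitmodules = new Config();
  gitmodules.fromText(gitmodulesText); // contents of .gitmodules
  Map<String, String> pathToName = new HashMap<>();
  for (String name : gitmodules.getSubsections("submodule")) {
      String path = gitmodules.getString("submodule", name, "path");
      if (path != null) {
          pathToName.put(path, name);
      }
  }
  // looking up the URL for a submodule at "path" then uses the
  // subsection found via pathToName instead of assuming name == path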
Bug: 508801
Change-Id: I8399ccada1834d4cc5d023344b97dcf8d5869b16
Also-by: Doug Kelly <dougk.ff7@gmail.com>
Signed-off-by: Doug Kelly <dougk.ff7@gmail.com>
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Add "Do nothing" comments, consistent with other empty methods in
the same class.
Change-Id: I27a13a402e94104af617be0e14d8982e75fa73bd
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Don't convert a lot of ObjectIds to Strings stored in a generic
java.util.HashSet. This is a very expensive way to store objects.
Instead rely on "this" from the FsckPackParser to look up information
about the objects in this pack file, which lets the verify code avoid
sorting the object list.
Use ObjectIdOwnerMap, which is the most efficient format JGit has
for storing lots of objects.
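For illustration, the ObjectIdOwnerMap usage pattern looks roughly like
this (the entry class and its fields are hypothetical):

  class ObjectEntry extends ObjectIdOwnerMap.Entry {
      ObjectEntry(AnyObjectId id) {
          super(id);
      }
      // per-object data (offset, type, ...) can live here
  }

  ObjectIdOwnerMap<ObjectEntry> objects = new ObjectIdOwnerMap<>();
  objects.add(new ObjectEntry(objectId));
  boolean seen = objects.contains(objectId); // no String conversion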
Change-Id: Ib68f93acb4d91b96d0a44c0612f704500d332ac1
This simplifies the logic about allocation of the DfsReader, and
clarifies the code considerably by using smaller scopes with less
indentation.
A few static imports from PackExt and slightly shorter variable names
make for a more understandable-at-a-glance implementation.
Change-Id: Iaf5a0e14fe0349215d9e44446f68d1129ad3bb3d
The simpler algorithm is to load all branch tips into an ObjectWalk
and run that walk exactly once. This avoids redoing work related to
parsing and considering trees reused across side branches.
Move the connectivity check into its own helper method. This moves it
left one level of indentation, and makes it easier to fit the method's
logic with less line wrapping.
Add a "Counting objects..." progress monitor around this phase. Its
what is used when a server receives a push and is also trying to
verify the client sent all required objects.
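Sketch of the single-pass walk described above (simplified; reader,
refsToCheck and monitor are stand-ins for the real fields):

  try (ObjectWalk ow = new ObjectWalk(reader)) {
      for (Ref ref : refsToCheck) {
          ow.markStart(ow.parseAny(ref.getObjectId()));
      }
      monitor.beginTask("Counting objects", ProgressMonitor.UNKNOWN);
      while (ow.next() != null) {
          monitor.update(1); // commits
      }
      while (ow.nextObject() != null) {
          monitor.update(1); // trees, blobs, annotated tags
      }
      monitor.endTask();
  }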
Change-Id: I4d53d75d0cdd1a13fff7d513a6ae0b2d14ea4090
Do not automatically organize imports using a save action since this
seems to be buggy and removed some annotations which
org.eclipse.jgit.pgm needs in order to use args4j.
Change-Id: I5a91292c3b9241ce2dde3e4ecce14ad460097129
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Revert the following save actions which were introduced in c0ad77d8:
- always use braces around blocks
- remove unused imports
Contrary to what I expected, save actions are run globally on edited
files, not only on the edited code lines.
Hence revert the save action "Convert control statement bodies to
blocks", which would affect a large number of code lines untouched by
a change editing some small part of a class. This would generate a
large number of changes which may lead to many unnecessary conflicts.
Total number of affected lines across jgit would be around 10k lines.
Also revert "Remove unused imports" since it erroneously removes imports
of some annotations needed by pgm classes using args4j.
Change-Id: I879a47f68e664129e6124cf25c1ae1f6a2d7a5aa
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Otherwise, the stack trace doesn't really tell us anything.
See for instance [1].
[1] https://www.eclipse.org/forums/index.php/t/1088535/
Change-Id: If22f2c63c36fec6b32818d2c2acecf20531b4185
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
ReftableStack maintains multiple open reftables
in an AutoCloseable format, making it easier for
higher level code to handle multiple files.
Change-Id: I7ac35c18e67b7e771fb3de29169d1ee50fab62ca
Reftable storage in DFS is related to pack storage. Reftables are
stored in the same namespace, but with PackExt.REFTABLE. Include
the set of DfsReftable instances in the PackList and export some
helpers to access the tables.
Change-Id: I6a4f5f953ed6b0ff80a7780f4c6cbcc5eda0da3e
DfsBlockCache directly shares its internal byte[] with ReftableReader,
avoiding copying between the DfsBlockCache and the BlockReader
instances used by ReftableReader.
Change-Id: Icaa4f40052b26f952681414653a8b5314b7c2c23
Add the following Eclipse save actions executed when saving modified
lines. This should help to reduce manual work needed to maintain a clean
and consistent code style:
- organize imports
- always use braces around blocks
- add missing annotations
  - @Override including implementation of interface methods
  - @Deprecated
- remove
  - unused imports
  - unnecessary $NON-NLS$ tags
  - redundant type arguments
Also add default values for new settings that were introduced in recent
Eclipse versions up to Neon since we last updated the save rules.
Change-Id: Idc90b249df044d0552f04edf01a5f607c4846f50
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Some repositories can have a policy that does not accept certain blobs.
To check whether an incoming pack file contains such blobs,
ObjectChecker can be used. However, this ObjectChecker is not called by
PackParser if the blob is stored as a whole, because the object can be
so large that it doesn't fit in memory.
This change introduces BlobObjectChecker. This interface takes chunks of
a blob instead of the entire object. ObjectChecker can optionally return
a BlobObjectChecker. This won't change existing ObjectChecker
implementations; they continue to receive deltified blob objects only.
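A minimal sketch of an implementation, assuming the interface exposes
an update() callback per chunk and an endBlob() callback at the end
(MAX_BLOB_SIZE is a hypothetical policy constant):

  class MaxSizeBlobChecker implements BlobObjectChecker {
      private long size;

      @Override
      public void update(byte[] in, int offset, int len) {
          size += len; // inspect the chunk without buffering the blob
      }

      @Override
      public void endBlob(AnyObjectId id) throws CorruptObjectException {
          if (size > MAX_BLOB_SIZE) {
              throw new CorruptObjectException(id, "blob", "too large");
          }
      }
  }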
Change-Id: Ic33a92c2de42bd7a89786a4da26b7a648b25218d
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
When a JGit API command is implemented in terms of other API
commands, the child command must "inherit" all relevant settings.
Calling configure() ensures that the CredentialsProvider and the
connection timeout are propagated correctly.
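For example, a command that internally runs a clone could look roughly
like this (a sketch; it assumes the code lives in a TransportCommand
subclass where the protected configure() helper is visible):

  CloneCommand clone = Git.cloneRepository()
      .setURI(url)
      .setDirectory(workDir);
  configure(clone); // propagates CredentialsProvider, timeout and
                    // TransportConfigCallback from the parent command
  clone.call();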
Bug: 515325
Change-Id: I948e306693a9edb7b199a735877413b6eddcfba4
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
** matching always tries the empty match first. If a mismatch occurs
later, the ** must be extended by exactly one segment and matching must
resume with the matcher following the ** matcher.
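Illustration of the intended semantics, using the gitignore rule API
(results reflect the expected behavior after this fix):

  FastIgnoreRule rule = new FastIgnoreRule("a/**/b");
  rule.isMatch("a/b", false);     // true: "**" matches nothing
  rule.isMatch("a/x/y/b", false); // true: "**" grown to "x/y"
  rule.isMatch("a/x/y/c", false); // false: no extension of "**" helps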
Bug: 520920
Change-Id: Id019ad1c773bd645ae92e398021952f8e961f45c
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Path pattern matching for attribute rules is different than matching
for excluded files.
The first difference concerns patterns without slashes. For
gitattributes those must match on the last component only, not on
any earlier segment. This is true also for directory-only patterns.
The second difference concerns directory-only patterns. Those also
must not match on a prefix or segment except the last one. They do
not apply recursively to all files beneath.
And third, a match only on a prefix counts for gitattributes only if
the last matcher was "/**".
Add a new parameter for such path matching to IMatcher.matches() and
pass it through as appropriate (false for gitignore, true for
gitattributes). As far as gitignore is concerned, there is no change.
New tests have been added, and some existing attribute matching tests
have been fixed since they operated on wrong assumptions.
Bug: 508568
Change-Id: Ie825dc2cac8a85a72a7eeb0abb888f3193d21dd2
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
C git silently ignores invalid tagopt values; so make JGit behave the
same way.
Bug: 429625
Change-Id: I99587cc46c7e0c19348bcc63f602038fa9a7f378
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Reading RefSpecs from a Config can be seen as another typed value
conversion, so add a getter to Config and to TypedConfigGetter. Use
it in RemoteConfig.
Doing this allows clients of the JGit library to customize the
handling of invalid RefSpecs in git config files by installing a
custom TypedConfigGetter.
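Usage then becomes a one-liner (a sketch, assuming the getter mirrors
the other typed getters on Config):

  Config config = repository.getConfig();
  List<RefSpec> fetchSpecs =
      config.getRefSpecs("remote", "origin", "fetch");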
Bug: 517314
Change-Id: I0ebc0f073fabc85c2a693b43f5ba5962d8a795ff
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Make the handling of typed values somewhat configurable by using
a separate converter. The default converter is the same as before;
just the implementations of the getters were moved. They also still
raise IllegalArgumentException on invalid values as before.
The converter can be set globally via Config.setTypedConfigGetter(),
which EGit can use in its core Activator to plug in a variant that
catches the IllegalArgumentException, logs the problem, and then
returns the default value.
In this way the behavior for other users of the JGit library is
unchanged, while EGit can deal gracefully with invalid git configs.
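A sketch of the kind of converter a client could install (assuming
DefaultTypedConfigGetter can be subclassed; only one getter is shown):

  Config.setTypedConfigGetter(new DefaultTypedConfigGetter() {
      @Override
      public int getInt(Config config, String section, String subsection,
              String name, int defaultValue) {
          try {
              return super.getInt(config, section, subsection, name,
                      defaultValue);
          } catch (IllegalArgumentException e) {
              // log the invalid value, then fall back to the default
              return defaultValue;
          }
      }
  });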
Bug: 520978
Change-Id: Ie8f81d206e358b6cc57aa29b9d7ad2a5d34b86a1
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
In Eclipse Oxygen, the following warning is emitted:
At least one of the problems in category 'synthetic-access' is not
analysed due to a compiler option being ignored
Removing the suppression gets rid of the warning.
Change-Id: Ibfe5cc1e347150b699f54e2f204ab5ee770da202
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Except for %p and %r and partially %C, we can do token substitutions
as defined by OpenSSH inside the config file parser. %p and %r can
be replaced only if specified in the config; if not, it would be the
caller's responsibility to replace them with values obtained from the
URI to connect to.
Jsch doesn't know about token substitutions at all. By doing the
replacements as well as we can in the config file parser, we can
make Jsch support most of these tokens.
%i is not handled at all as Java has no concept of a "user ID".
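The caller-side substitution for %r and %p could look roughly like this
(a sketch; the fallback values are illustrative only):

  // "value" still contains %r / %p after parsing; "uri" is the
  // URIish being connected to
  String user = uri.getUser() != null ? uri.getUser() : defaultUser;
  int port = uri.getPort() > 0 ? uri.getPort() : 22;
  String substituted = value
      .replace("%r", user)
      .replace("%p", String.valueOf(port));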
Includes unit tests.
Bug: 496170
Change-Id: If9d324090707de5d50c740b0d4455aefa8db46ee
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Ensure the Jsch instance used knows about ~/.ssh/config. This
enables Jsch to honor more user configurations (see
com.jcraft.jsch.Session.applyConfig()), in particular also the
UserKnownHostsFile configuration, or additional identities given
via multiple IdentityFile entries.
Turn JGit's OpenSshConfig into a full parser that can be a
Jsch-compliant ConfigRepository. This avoids a few bugs
in Jsch's OpenSSHConfig and keeps the JGit-facing interface
unchanged. At the same time we can supply a JGit OpenSshConfig
instance as a ConfigRepository to Jsch. And since they'll both
work from the same object, we can also be sure that the parsing
behavior is identical.
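The wiring is roughly (a sketch; it assumes OpenSshConfig now
implements com.jcraft.jsch.ConfigRepository):

  OpenSshConfig sshConfig = OpenSshConfig.get(FS.DETECTED);
  JSch jsch = new JSch();
  jsch.setConfigRepository(sshConfig);
  // Session.applyConfig() can now pick up entries such as
  // UserKnownHostsFile or multiple IdentityFile lines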
The parser does not handle the "Match" and "Include" keys, and it
doesn't do %-token substitutions (yet).
Note that Jsch doesn't handle multi-valued UserKnownHostsFile
entries as known by modern OpenSSH.[1]
[1] http://man.openbsd.org/OpenBSD-current/man5/ssh_config.5
Additional tests for new features are provided in OpenSshConfigTest.
Bug: 490939
Change-Id: Ic683bd412fa8c5632142aebba4a07fad4c64c637
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This is a continuation of https://git.eclipse.org/r/#/c/94249/. When an
error happens, we might not have read the entire stream. Consume the
request body before we flush the buffer.
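A minimal sketch of the idea (helper name and buffer size are
illustrative):

  private static void drainRequestBody(InputStream in) throws IOException {
      byte[] buf = new byte[4096];
      while (in.read(buf) != -1) {
          // discard the rest of the request body so the client can
          // read the error response once the buffer is flushed
      }
  }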
Change-Id: Ia473a04ace600653b2d1f2822e3023570d992410
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
The addition of "tooManyRedirects" in commit 7ac1bfc ("Do
authentication re-tries on HTTP POST") was an error I didn't
catch after rebasing that change. That message had been renamed
in the earlier commit e17bfc9 ("Add support to follow HTTP
redirects") to "redirectLimitExceeded".
Also make sure we always use the TransportException(URIish, ...)
constructor; it'll prefix the message given with the sanitized URI.
Change messages to remove the explicit mention of that URI inside the
message. Adapt tests that check the expected exception message text.
For the info logging of redirects, remove a potentially present
password component in the URI to avoid leaking it into the log.
Change-Id: I517112404757a9a947e92aaace743c6541dce6aa
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
There is at least one git server out there (GOGS) that does
not require authentication on the initial GET for
info/refs?service=git-receive-pack but that _does_ require
authentication for the subsequent POST to actually do the push.
This occurs on GOGS with public repositories; for private
repositories it wants authentication up front.
Handle this behavior by adding 401 handling to our POST request.
Note that this is suboptimal; we'll re-send the push data at
least twice if an authentication failure on POST occurs. It
would be much better if the server required authentication
up-front in the GET request.
Added authentication unit tests (using BASIC auth) to the
SmartClientSmartServerTest:
- clone with authentication
- clone with authentication but lacking CredentialsProvider
- clone with authentication and wrong password
- clone with authentication after redirect
- clone with authentication only on POST, but not on GET
Also tested manually in the wild using repositories at try.gogs.io.
That server offers only BASIC auth, so the other paths
(DIGEST, NEGOTIATE, fall back from DIGEST to BASIC) are untested
and I have no way to test them.
* public repository: GET unauthenticated, POST authenticated
Also tested after clearing the credentials and then entering a
wrong password: correctly asks three times during the HTTP
POST for user name and password, then gives up.
* private repository: authentication already on GET; then gets
applied correctly initially to the POST request, which succeeds.
Also fix the authentication to use the credentials for the redirected
URI if redirects had occurred. We must not present the credentials
for the original URI in that case. Consider a malicious redirect A->B:
this would allow server B to harvest the user credentials for server
A. The unit test for authentication after a redirect also tests for
this.
Bug: 513043
Change-Id: I97ee5058569efa1545a6c6f6edfd2b357c40592a
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Add an update_index to every reference in a reftable, storing the
exact transaction that last modified the reference. This is necessary
to fix some merge race conditions.
Consider updates at T1 and T3 that are present in two reftables.
Compacting these will create a table with range [T1,T3]. If T2 arrives
during or after the compaction, it's impossible for readers to know how
to merge the [T1,T3] table with the T2 table.
With an explicit update_index per reference, MergedReftable is able to
individually sort each reference, merging individual entries at T3
from [T1,T3] ahead of identically named entries appearing in T2.
Change-Id: Ie4065d4176a5a0207dcab9696ae05d086e042140
Reserve "ref" extension for reftable files. This allows them to be
used in a DFS repository as a stream in a DfsPackDescription.
Change-Id: Ife781bb64d0bb063333183ad2be70a41a2482513
resolve(Ref) helps callers recursively chase symbolic references and
is a useful function when wrapping a Reftable inside a RefDatabase, as
RefCursor does not resolve symbolic references during iteration.
Change-Id: I1ba143f403773497972e225dc92c35ecb989e154
Transactions may wish to merge several tables together as part of an
operation. Setting a byte limit allows the transaction to consider
only some recent tables, bounding the cost of the compaction.
Change-Id: If037f2cbdc174ff1a215d5917178b33cde4ddaba
A compaction of reftables is just copying the results of a
MergedReftable into a ReftableWriter. Wrap this up into a utility.
Change-Id: I6f5677d923e9628993a2d8b4b007a9b8662c9045
MergedReftable combines multiple reference tables together in a stack,
allowing higher/later tables to shadow earlier/lower tables. This
forms the basis of a transaction system, where each transaction writes
a new reftable containing only the modified references, and readers
perform a merge on the fly to get the latest value.
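Reading through a stack is then roughly (a sketch; it assumes the
MergedReftable/RefCursor API with tables ordered from oldest to
newest):

  MergedReftable merged =
      new MergedReftable(Arrays.asList(base, txn1, txn2));
  try (RefCursor rc = merged.allRefs()) {
      while (rc.next()) {
          Ref ref = rc.getRef();
          // for identically named refs, the entry from the newest
          // table in the stack wins
      }
  }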
Change-Id: Ic2cb750141e8c61a8b2726b2eb95195acb6ddc83
ReftableReader provides sequential scanning support over all
references, a range of references within a subtree (such as
"refs/heads/"), and lookup of a single reference. Reads can be
accelerated by an index block, if it was created by the writer.
The BlockSource interface provides an abstraction to read from the
reftable's backing storage, supporting a future commit to connect
to JGit DFS and the DfsBlockCache.
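Reading a serialized table could look roughly like this (a sketch;
BlockSource.from(byte[]) and the cursor methods are assumptions based
on the description above):

  ReftableReader reader = new ReftableReader(BlockSource.from(table));
  try (RefCursor rc = reader.allRefs()) {
      while (rc.next()) {
          Ref ref = rc.getRef();
      }
  }
  // a subtree scan such as seekRef("refs/heads/") works the same way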
Change-Id: Ib0dc5fa937d0c735f2a9ff4439d55c457fea7aa8
This is a simple writer to create reftable formatted files. Follow-up
commits will add support for reading from reftable, debugging
utilities, and tests.
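The writing side, sketched under the assumption of a begin/writeRef/
finish call sequence (exact method names may differ):

  ByteArrayOutputStream buf = new ByteArrayOutputStream();
  ReftableWriter writer = new ReftableWriter();
  writer.begin(buf);
  writer.writeRef(ref); // repeat for each ref, sorted by name
  writer.finish();
  byte[] table = buf.toByteArray();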
Change-Id: I3d520c3515c580144490b0b45433ea175a3e6e11
git-core follows HTTP redirects so JGit should also provide this.
Implement config setting http.followRedirects with possible values
"false" (= never), "true" (= always), and "initial" (only on GET, but
not on POST).[1]
We must do our own redirect handling and cannot rely on the support
that the underlying real connection may offer. At least the JDK's
HttpURLConnection has two features that get in the way:
* it does not allow cross-protocol redirects and thus fails on
http->https redirects (for instance, on GitHub).
* it translates a redirect after a POST to a GET unless the system
property "http.strictPostRedirect" is set to true. We don't want
to manipulate that system setting nor require it.
Additionally, git has its own rules about what redirects it accepts;[2]
for instance, it does not allow a redirect that adds query arguments.
We handle response codes 301, 302, 303, and 307 as per RFC 2616.[3]
On POST we do not handle 303, and we follow redirects only if
http.followRedirects == true.
Redirects are followed only a certain number of times. There are two
ways to control that limit:
* by default, the limit is given by the http.maxRedirects system
property that is also used by the JDK. If the system property is
not set, the default is 5. (This is much lower than the JDK default
of 20, but I don't see the value of following so many redirects.)
* this can be overwritten by a http.maxRedirects git config setting.
The JGit http.* git config settings are currently all global; JGit has
no support yet for URI-specific settings "http.<pattern>.name". Adding
support for that is well beyond the scope of this change.
Like git-core, we log every redirect attempt (LOG.info) so that users
may know about the redirection having occurred.
Extends the test framework to configure an AppServer with HTTPS support
so that we can test cloning via HTTPS and redirections involving HTTPS.
[1] https://git-scm.com/docs/git-config
[2] 6628eb41db
[3] https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
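For example, enabling redirects only for the initial GET via the
standard config API (a sketch):

  StoredConfig cfg = repository.getConfig();
  cfg.setString("http", null, "followRedirects", "initial");
  cfg.setInt("http", null, "maxRedirects", 3);
  cfg.save();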
CQ: 13987
Bug: 465167
Change-Id: I86518cb76842f7d326b51f8715e3bbf8ada89859
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Currently there is no way to determine the precise changes done
to the working tree by a JGit command. Only the CheckoutCommand
actually provides access to the lists of modified, deleted, and
to-be-deleted files, but those lists may be inaccurate (since they
are determined up-front before the working tree is modified) if
the actual checkout then fails halfway through. Moreover, other
JGit commands that modify the working tree do not offer any way to
figure out which files were changed.
This poses problems for EGit, which may need to refresh parts of the
Eclipse workspace when JGit has done java.io file operations.
Provide the foundations for better file change tracking: the working
tree is modified exclusively in DirCacheCheckout. Make it emit a new
type of RepositoryEvent that lists all files that were modified or
deleted, even if the checkout failed halfway through. In case of file
system problems, we update the 'updated' and 'removed' lists determined
up-front so that they reflect the changes actually made.
EGit thus can register a listener for these events and then knows
exactly which parts of the Eclipse workspace may need to be refreshed.
Two commands manage checking out individual DirCacheEntries themselves:
checkout specific paths, and applying a stash with untracked files.
Make those two also emit such a new WorkingTreeModifiedEvent.
Furthermore, merges may modify files, and clean, rm, and stash create
may delete files.
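A client such as EGit could then consume the events roughly like this
(a sketch; the listener and getter names assume the new event exposes
the modified and deleted paths):

  repository.getListenerList().addListener(
      WorkingTreeModifiedListener.class,
      new WorkingTreeModifiedListener() {
          @Override
          public void onWorkingTreeModified(WorkingTreeModifiedEvent e) {
              refreshWorkspace(e.getModified()); // hypothetical helper
              refreshWorkspace(e.getDeleted());
          }
      });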
CQ: 13969
Bug: 500106
Change-Id: I7a100aee315791fa1201f43bbad61fbae60b35cb
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
When creating a new PackFile instance it is specified whether this pack
has an associated bitmap index file or not. This information is cached
and the public method getBitmapIndex() will always assume a bitmap index
file must exist if the cached data says so. But it may happen that the
packfiles are repacked during a gc in a different process, causing the
packfile, bitmap index and index file to be deleted. Since JGit still
has an open file handle on the packfile, this file is not really deleted
and can still be accessed. But the index and bitmap index files are
deleted.
Fix getBitmapIndex() to invalidate the cached packfile instance if such
a situation occurs.
This problem showed up when a Gerrit server was serving repositories
which were garbage collected with native git regularly. Fetch and
clone commands for certain repositories failed permanently after a
native git gc had deleted old bitmap index files.
Change-Id: I8e620bec74dd3f310ba42024f9a657062f868f0e
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Per the git config documentation[1], pushInsteadOf is ignored when
a remote has explicit pushUris.
Implement this, and adapt tests.
Up to now JGit mistakenly applied pushInsteadOf also to existing
pushUris. If some repositories had relied on this mis-feature,
pushes may now suddenly fail (the uncritical case; the config
just needs to be fixed) or may even still succeed but push to
unexpected places, namely to the non-rewritten pushUris (the critical
case).
The release notes should point out this change.
[1] https://git-scm.com/docs/git-config
Bug: 393170
Change-Id: I38c83204d2ac74f88f3d22d0550bf5ff7ee86daf
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>