The message is formatted as:
Invalid id: : abcde...
but should be:
Invalid id: abcde...
Change-Id: Ie15cacdcf2f168edaee262e6cf8061ebfe9d998d
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Pretty printing the response is useful for human readers, but most
(if not all) of the time, the response will be read by programs.
Remove it to avoid the additional overhead of the formatting and
extra bytes in the response. Adjust the test accordingly.
Note that LfsProtocolServlet already doesn't use pretty printing,
so this change makes FileLfsServlet's behavior consistent. In fact,
both classes now have duplicate Gson handling; this will be cleaned
up in a separate change.
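For illustration, the non-pretty-printing setup looks roughly like this
(the builder options other than the removed setPrettyPrinting() are
assumptions, not necessarily the servlet's exact configuration):
  import com.google.gson.FieldNamingPolicy;
  import com.google.gson.Gson;
  import com.google.gson.GsonBuilder;

  class GsonSetup {
    static Gson createGson() {
      return new GsonBuilder()
          .setFieldNamingPolicy(FieldNamingPolicy.LOWER_CASE_WITH_UNDERSCORES)
          // .setPrettyPrinting() dropped: responses are machine-read, so the
          // extra whitespace only adds formatting overhead and bytes
          .disableHtmlEscaping()
          .create();
    }
  }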
Change-Id: I113a23403f9222f16e2c0ddf39461398b721d064
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
FindBugs reports:
This class is an inner class, but does not use its embedded reference
to the object which created it. This reference makes the instances
of the class larger, and may keep the reference to the creator object
alive longer than necessary. If possible, the class should be made
static.
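A generic illustration of the difference (not the actual class flagged
by FindBugs):
  import java.util.Comparator;

  class Outer {
    // Flagged form: carries an implicit Outer.this reference, enlarging
    // every instance and possibly keeping the creator alive.
    class InnerComparator implements Comparator<String> {
      @Override
      public int compare(String a, String b) {
        return a.compareTo(b);
      }
    }

    // Preferred form: no hidden reference to the creating Outer instance.
    static class NestedComparator implements Comparator<String> {
      @Override
      public int compare(String a, String b) {
        return a.compareTo(b);
      }
    }
  }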
Change-Id: I245e44678166176de0cfb275e22ddd159f88e0bd
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
This was silenced before, but the suppression was unintentionally lost
in merge commit 6858339c1e.
This method was removed in 4.9.0 and reintroduced in 4.9.1 to avoid
breaking EMF Compare versions which were built against older versions.
See: abf420302b
Change-Id: I152d58ac885e044bcab682b9423f6cc83b667989
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When a GC operation is interrupted, temporary packs and indexes can be
left behind in the pack folder. In big, busy repositories this can lead
to significant amounts of wasted disk space if such interruptions
happen with some frequency.
Remove stale temporary packs and indexes at the end of the GC process so
they do not accumulate. To avoid interfering with a possible concurrent
JGit GC process in the same repository, only delete temporary files that
are older than one day.
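A simplified sketch of the cleanup step (the "gc_" file name prefix and
the exact hook into GC are assumptions, not the actual implementation):
  import java.io.File;
  import java.io.IOException;
  import java.nio.file.DirectoryStream;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.time.Instant;
  import java.time.temporal.ChronoUnit;

  class StaleTempCleaner {
    // Delete temporary files left behind by an interrupted GC, but only if
    // they are older than one day so a concurrent GC is not disturbed.
    static void deleteStaleTempFiles(File packDir) throws IOException {
      Instant cutoff = Instant.now().minus(1, ChronoUnit.DAYS);
      try (DirectoryStream<Path> dir =
          Files.newDirectoryStream(packDir.toPath(), "gc_*")) {
        for (Path p : dir) {
          if (Files.getLastModifiedTime(p).toInstant().isBefore(cutoff)) {
            Files.deleteIfExists(p);
          }
        }
      }
    }
  }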
Change-Id: If9b6c1e57fac8a6a0ecc0a703089634caba4caae
Signed-off-by: Hector Caballero <hector.caballero@ericsson.com>
- this is a new warning option in Eclipse 4.7 and higher
- we always change the version of all bundles in a release to keep
release engineering simple
Change-Id: Ic7523d77b67b2802f1bab3bc70af250d712a034f
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When running on NFS there was a chance that JGit's LockFile
semantics were broken because File#createNewFile() may allow
multiple clients to create the same file in parallel. This
change provides a fix which is only used when the new config
option core.supportsAtomicCreateNewFile is set to false. The
default for this option is true. This option can only be set in the
global or the system config file; the repository config file is not
taken into account in this case.
If the config option core.supportsAtomicCreateNewFile is true
then File#createNewFile() is trusted and the behaviour doesn't
change.
But if core.supportsAtomicCreateNewFile is set to false then, after
successful creation of the lock file, a hardlink to that lock file is
created and the nlink attribute of the lock file is checked to be 2. If
multiple clients manage to create the same lock file, nlink will be
greater than 2, revealing the error.
This expensive workaround is described in
https://www.time-travellers.org/shane/papers/NFS_considered_harmful.html
section III.d) "Exclusive File Creation"
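A rough sketch of that check (simplified, Unix-only; not the actual
LockFile code):
  import java.io.File;
  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Path;

  class NfsLockCheck {
    // Returns true if we are the only creator of the lock file; a racing
    // creator on NFS would push the hard-link count above 2.
    static boolean createdExclusively(File lockFile) throws IOException {
      if (!lockFile.createNewFile()) {
        return false;
      }
      Path lock = lockFile.toPath();
      Path link = lock.resolveSibling(lockFile.getName() + ".lnk");
      Files.createLink(link, lock);
      try {
        // "unix:nlink" is only available on POSIX file systems
        int nlink = ((Number) Files.getAttribute(lock, "unix:nlink")).intValue();
        return nlink == 2;
      } finally {
        Files.delete(link);
      }
    }
  }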
Change-Id: I3d2cc48d8eb280d5f7039eb94da37804f903be6a
The list of packed refs was cached in RefDirectory based on the mtime
of the packed-refs file. This may fail on NFS when attributes are
cached. A cached mtime of the packed-refs file could cause JGit to
trust the cached content of this file and to overlook that the file
was modified.
Honor the config option trustFolderStat and always read the packed-refs
content if the option is false. By default this option is set to true
and this fix is not active.
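For example, on an NFS-mounted repository the option could be disabled
like this in the config (illustration only):
  [core]
    trustFolderStat = false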
Change-Id: I2b65cfaa8f4aba2efbf8a5e865d3f09f927e2eec
Jsch 0.1.54 passes the values for "ServerAliveInterval" and
"ConnectTimeout" from ~/.ssh/config directly to
java.net.Socket.setSoTimeout(). That method expects milliseconds,
but the values in the config file are given in seconds!
The missing conversion in Jsch means that the timeout is
set way too low, and if the server doesn't respond within
that very short time frame, Jsch kills the connection and
then throws an exception with a message such as "session is
down" or "timeout in waiting for rekeying process".
As a work-around, do the conversion to milliseconds in the
Jsch-facing Config interface of OpenSshConfig. That way Jsch
already gets these values as milliseconds.
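The conversion amounts to something like the following sketch (not the
exact OpenSshConfig code):
  class JschConfigAdapter {
    // Values in ~/.ssh/config are in seconds, but Jsch feeds them
    // unconverted to java.net.Socket.setSoTimeout(), which expects
    // milliseconds; convert before handing them to Jsch.
    static String toJschValue(String key, String rawSeconds) {
      if ("ServerAliveInterval".equalsIgnoreCase(key)
          || "ConnectTimeout".equalsIgnoreCase(key)) {
        return Integer.toString(Integer.parseInt(rawSeconds) * 1000);
      }
      return rawSeconds;
    }
  }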
Bug: 526867
Change-Id: Ibc9b93f7722fffe10f3e770dfe7fdabfb3b97e74
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Args4J no longer allows final fields to be used for arguments or
options [1]. Change RevParse to be compatible with this change.
[1] 6e11f89d40
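The required change looks roughly like this (field names are
illustrative, not the actual RevParse fields):
  import org.kohsuke.args4j.Argument;
  import org.kohsuke.args4j.Option;

  class RevParseOptions {
    // Before: "private final boolean all = false;" -- Args4J can no longer
    // inject into final fields, so the field must be mutable.
    @Option(name = "--all", usage = "include all refs")
    private boolean all;

    @Argument(index = 0, metaVar = "commit-ish")
    private String revision;
  }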
See-also: a0558b7094
Signed-off-by: Rüdiger Herrmann <ruediger.herrmann@gmx.de>
Change-Id: I33b233f195c06855d9e094c8c9ba804fbe7b1438
JSch unconditionally overrides the user name given in the connection
URI by the one found in ~/.ssh/config (if that does specify one for
the used host). If the SSH config file has a different user name,
we'll end up using the wrong name, which typically results in an
authentication failure or in Eclipse/EGit asking for a password for
the wrong user.
Unfortunately there is no way to prevent or circumvent this Jsch
behavior up front; it occurs already in the Session constructor at
com.jcraft.jsch.Session() and the Session.applyConfig() method. And
while there is a Session.setUserName() that would enable us to correct
this, that latter method has package visibility only.
So resort to reflection to invoke that setUserName() method to ensure
that Jsch uses the user name from the URI, if there is one.
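Condensed, the reflection call looks like this sketch (simplified error
handling, not the exact session factory code):
  import java.lang.reflect.Method;
  import com.jcraft.jsch.Session;

  class SessionUserFix {
    // Force the user name from the URI onto the Session; setUserName() has
    // package visibility in Jsch, so it is only reachable via reflection.
    static void setUserName(Session session, String userFromUri) {
      try {
        Method m = Session.class.getDeclaredMethod("setUserName", String.class);
        m.setAccessible(true);
        m.invoke(session, userFromUri);
      } catch (ReflectiveOperationException | SecurityException e) {
        // If reflection is not permitted, keep the Jsch-chosen user name.
      }
    }
  }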
Bug: 526778
Change-Id: Ia327099b5210a037380b2750a7fd76ff25c41a5a
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Applications that use ObjectInserters to create lots of individual
objects may prefer to avoid cluttering up the object directory with
loose objects. Add a specialized inserter implementation that produces
a single pack file no matter how many objects are inserted. This
inserter is loosely
based on the existing DfsInserter implementation, but is simpler since
we don't need to buffer blocks in memory before writing to storage.
An alternative for such applications would be to write out the loose
objects and then repack just those objects later. This operation is not
currently supported with the GC class, which always repacks existing
packs when compacting loose objects. This in turn requires more
CPU-intensive reachability checks and extra I/O to copy objects from old
packs to new packs.
So, the choice was between implementing a new variant of repack, or not
writing loose objects in the first place. The latter approach is likely
less code overall, and avoids unnecessary I/O at runtime.
The current implementation does not yet support newReader() for reading
back objects.
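From an application's point of view the usage stays the ordinary
ObjectInserter pattern; only the inserter implementation differs
(sketch using the generic Repository API, not the new inserter's own
accessor):
  import org.eclipse.jgit.lib.Constants;
  import org.eclipse.jgit.lib.ObjectId;
  import org.eclipse.jgit.lib.ObjectInserter;
  import org.eclipse.jgit.lib.Repository;

  class BulkInsertExample {
    // Insert many blobs; with a pack-writing inserter they all end up in a
    // single pack file instead of one loose object each.
    static void insertAll(Repository repo, byte[][] contents) throws Exception {
      try (ObjectInserter ins = repo.newObjectInserter()) {
        for (byte[] data : contents) {
          ObjectId id = ins.insert(Constants.OBJ_BLOB, data);
          System.out.println("inserted " + id.name());
        }
        ins.flush(); // objects become visible here
      }
    }
  }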
Change-Id: I2074418f4e65853b7113de5eaced3a6b037d1a17
GC explicitly handles the case where a new pack has the same name as an
existing pack due to it containing the exact same set of objects. In
this case, the pack passed to insertPack will have the same name as an
existing pack, but it will also almost certainly have a later mtime than
the existing pack.
The loop in insertPack tried to short-circuit when inserting a new pack,
to avoid walking more of the pack list than necessary. Unfortunately,
this meant it never got to the check for an identical name, resulting
in a duplicate entry for the same PackFile in the pack list.
Remove the short-circuit so that insertPack does not insert a duplicate
entry.
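The gist of the fix, in heavily simplified form (names and structure
are illustrative, not the real ObjectDirectory code):
  import java.util.ArrayList;
  import java.util.List;

  class PackListInsert {
    // Walk the entire old list so a pack with an identical name is found;
    // the old short-circuit on ordering skipped that check and produced a
    // duplicate entry for the same pack.
    static List<String> insertPack(List<String> oldNames, String newName) {
      List<String> result = new ArrayList<>(oldNames.size() + 1);
      result.add(newName);
      for (String name : oldNames) {
        if (name.equals(newName)) {
          return oldNames; // identical pack already present; keep old list
        }
        result.add(name);
      }
      return result;
    }
  }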
Change-Id: I00711b28594622ad3bd104332334e8a3592cda7f
So far we follow OSGi semantic versioning [1] which says the following:
"A change in the second (minor) part of the version signals that the
change is backward compatible with consumers of the API package but not
with the providers of that API. That is, when the API package goes from
version 1.5 to 1.6 it is no longer compatible with a provider of that
API but consumers of that API are backward compatible with that API
package."
The change Ib5fbf17bdaf727bc5d0e106ce88f2620d9f87a6f broke EMF Compare,
which subclasses ResolveMerger, since we added a new parameter to the
protected ResolveMerger.processEntry() method. According to the above
cited OSGi semantic versioning this is OK: implementers should expect
that they break on minor version changes of the API they implement.
This change reintroduces the old processEntry() method in order to
avoid breakage for existing EMF Compare versions, which expect that
even implementers only break on a major version change (in this case
from JGit 4.x to 5.x).
[1] http://www.osgi.org/wp-content/uploads/SemanticVersioning1.pdf
See: https://dev.eclipse.org/mhonarc/lists/jgit-dev/msg03431.html
Change-Id: I48ba4308dee73925fa32d6c2fd6b5fd89632c571
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Round first, then calculate the labels. This avoids "x years, 12 months"
and instead produces "x+1 years".
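The rounding idea, as a simplified sketch (pluralization and the other
units are omitted; this is not the actual formatter code):
  class AgeLabel {
    static String label(long ageInDays) {
      long months = Math.round(ageInDays / 30.44); // round first
      long years = months / 12;
      long restMonths = months % 12;
      if (restMonths == 0) {
        return years + " years"; // "2 years" instead of "1 year, 12 months"
      }
      return years + " years, " + restMonths + " months";
    }
  }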
One test case has been added for the original example the bug was found
with, and one assertion has been moved from an existing test case to the
new test case, since it also triggered the bug.
Bug: 525907
Change-Id: I3270af3850c4fb7bae9123a0a6582f93055c9780
Signed-off-by: Michael Keppler <Michael.Keppler@gmx.de>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Always do tilde replacement for values from the ssh config file that
are file names, to make sure that they are already replaced when Jsch
tries to get the values.
Previously, OpenSshConfig did tilde replacement only for the
IdentityFile in the JGit-facing "Host" interface and left the
replacement in the Jsch-facing "Config" interface to Jsch.
But on Windows the JGit notion of what should be used to replace the
tilde differs from Jsch's replacement. Jsch always replaces the tilde
by the value of the system property "user.home", whereas JGit also
considers some environment variables like %HOME%. This can lead to
rather surprising failures as in the case of bug 526175 where
%HOME% != user.home.
Prior to commit 9d24470 (i.e., prior to JGit 4.9.0) this problem never
occurred because Jsch was completely unaware of the ssh config file
and all host and IdentityFile handling happened exclusively in JGit.
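A rough sketch of the JGit-side replacement (simplified; the real
OpenSshConfig code handles more cases):
  import java.io.File;

  class TildeExpansion {
    // Expand a leading "~" with JGit's notion of the user home (which on
    // Windows may come from %HOME% rather than the user.home property), so
    // Jsch never applies its own, possibly different, replacement.
    static File toFile(String path, File home) {
      if (path.startsWith("~/") || path.startsWith("~" + File.separator)) {
        return new File(home, path.substring(2));
      }
      return new File(path);
    }
  }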
Bug: 526175
Change-Id: I1511699664ffea07cb58ed751cfdb79b15e3a99e
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Shallow commits in the RevWalk in the UploadPack object may cause the
UNINTERESTING flag not to be carried over to their parent commits since
they were marked NO_PARENTS during the assumeShallow or
initializeShallowCommits call.
A new RevWalk needs to be created for this reason, but instead of
creating a new RevWalk from Repository, we can reuse the ObjectReader in
the RevWalk of UploadPack to load objects.
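Creating the fresh walk on top of the existing reader is then a
one-liner (sketch; variable names are illustrative):
  import org.eclipse.jgit.revwalk.RevWalk;

  class FreshWalk {
    // Reuse the ObjectReader of the existing walk so no new reader has to
    // be opened, while starting with clean commit flags.
    static RevWalk freshWalkFrom(RevWalk existing) {
      return new RevWalk(existing.getObjectReader());
    }
  }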
Change-Id: Ic3fee0512d35b4f555c60e696a880f8b192e4439
Signed-off-by: Zhen Chen <czhen@google.com>
Checkout sometimes throws an exception, and
other times it writes an error message to outw
and returns normally, even though the command
failed. This commit now reports all failures
through a die() exception.
Change-Id: I038a5d976d95020fea3faac68e9178f923c25b28
Signed-off-by: Ned Twigg <ned.twigg@diffplug.com>
Per git-config(1), core.logAllRefUpdates auto-creates reflogs for HEAD
and for refs under heads, remotes, and notes. Add notes and
remove stash from ReflogWriter#shouldAutoCreateLog. Explicitly force
writing reflogs for refs/stash at call sites, now that this is
supported.
Change-Id: I3a46d2c2703b7c243e0ee2bbf6948279800c485c
Even if a repository has core.logAllRefUpdates=true, ReflogWriter does
not create reflog files unless the refs are under a hard-coded list of
prefixes, or unless the forceWrite bit is set. Expose the forceWrite bit
on a per-update basis in RefUpdate/BatchRefUpdate/ReceiveCommand,
creating ReflogWriters as necessary.
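A usage sketch for the refs/stash case (assuming the per-update flag is
the setForceRefLog setter introduced here; repository and commit are
placeholders):
  import org.eclipse.jgit.lib.ObjectId;
  import org.eclipse.jgit.lib.RefUpdate;
  import org.eclipse.jgit.lib.Repository;

  class StashRefLogExample {
    // refs/stash is not auto-created by core.logAllRefUpdates, so the
    // caller explicitly requests a reflog entry for this update.
    static void updateStash(Repository repo, ObjectId newStashCommit)
        throws Exception {
      RefUpdate u = repo.updateRef("refs/stash");
      u.setNewObjectId(newStashCommit);
      u.setRefLogMessage("stash", false);
      u.setForceRefLog(true);
      u.forceUpdate();
    }
  }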
Change-Id: Ifc851fba00f76bf56d4134f821d0576b37810f80
The ReflogWriter constructor just took a Repository and called
getDirectory() on it to figure out the reflog dirs, but not all
Repository instances use this storage format for reflogs, so it's
incorrect to attempt to use ReflogWriter when there is not a
RefDirectory directly involved. In practice, ReflogWriter was mostly
only used by the implementation of RefDirectory, so enforcing this is
mostly just shuffling around calls in the same internal package.
The one exception is StashDropCommand, which writes to a reflog lock
file directly. This was a reasonable implementation decision, because
there is no general reflog interface in JGit beyond using
(Batch)RefUpdate to write new entries to the reflog. So to implement
"git stash drop <N>", which removes an arbitrary element from the
reflog, it's fair to fall back to the RefDirectory implementation.
Creating and using a more general interface is well beyond the scope of
this change.
That said, the old behavior of writing out the reflog file even if
that's not the reflog format used by the given Repository is clearly
wrong. Fail fast in this case instead.
Change-Id: I9bd4b047bc3e28a5607fd346ec2400dde9151730
This test was never being run. Since it was introduced it has been
named "notest..", which meant it didn't run with JUnit 3, and since it
is not annotated with @Test it also doesn't run with JUnit 4.
When compiling with Bazel 0.6.0, error-prone raises an error
that the public method is not annotated with @Ignore or @Test.
Given that the test has never been run anyway, we can just
remove it.
Bug: 525415
Change-Id: Ie9a54f89fe42e0c201f547ff54ff1d419ce37864
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
This allows the tests to all be built and run by Bazel.
Bug: 525412
Change-Id: Ie9281d07462cd07200fadb4b0e7b7f88c44f7865
Signed-off-by: Pepper Lebeck-Jobe <eljobe@gmail.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Same problem as in commit c227268: openUserConfig() just creates the
FileBasedConfig object, but doesn't read the file yet. An explicit
load() is needed.
As HttpConfig is read-only this omission did not cause any bad effects,
but it simply ignored values from the user config. Most uses of
HttpConfig go through the two-argument constructor, though, where
HttpConfig is given an already loaded repo config.
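The needed pattern, sketched (the null base config and FS.DETECTED are
illustrative choices):
  import org.eclipse.jgit.storage.file.FileBasedConfig;
  import org.eclipse.jgit.util.FS;
  import org.eclipse.jgit.util.SystemReader;

  class UserConfigLoad {
    static FileBasedConfig readUserConfig() throws Exception {
      FileBasedConfig cfg =
          SystemReader.getInstance().openUserConfig(null, FS.DETECTED);
      cfg.load(); // openUserConfig() alone never reads the file
      return cfg;
    }
  }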
Change-Id: Ibe7c562c17d6ef37de8b661ab7f6fa0246db01a2
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
SystemReader.openUserConfig() does not load the config yet; an
explicit StoredConfig.load() is needed.
Bug: 374703
Change-Id: I1c397e2fb1a07ac4d9de3675d996417734ff90e9
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Added a public method to TextBuiltin which makes it possible for
clients to initialize all of its state, including output and error
streams. This gives clients the ability to customize the way in
which a command is run.
Change-Id: If718236964d2c5cf869e120c74f1a12965f4812e
Signed-off-by: Ned Twigg <ned.twigg@diffplug.com>
When the submodule already exists, it is fetched instead of cloned.
Use the fetch callback instead of the clone callback in this case.
Change-Id: I170c21ab92b4117f25fdf940fe6807f214b04d39
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
By default, fetching is turned off unless cmd.setFetch(true) is given.
It will default to true in a future release to mimic c-git behaviour.
This is needed to prevent Eclipse from crashing with "Missing unknown
[REF]" when cloning a repo with submodules.
Bug: 470318
Change-Id: I8ae37c7c5bd2408cead8d57dd13e93e01e0e9dc1
Signed-off-by: Michael FIG <michael@fig.org>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
When an https connection could not be established because the SSL
handshake was unsuccessful, TransportHttp would unconditionally
throw a TransportException.
Other https clients, like web browsers and some SVN clients, handle
this more gracefully. If there's a problem with the server
certificate, they inform the user and give them the possibility to
connect to the server all the same.
In git, this would correspond to dynamically setting http.sslVerify
to false for the server.
Implement this using the CredentialsProvider to inform and ask the
user. We offer three choices:
1. skip SSL verification for the current git operation, or
2. always skip SSL verification for the server from now on, but only
for requests originating from the current repository, or
3. always skip SSL verification for the server from now on.
For (1), we just suppress SSL verification for the current instance of
TransportHttp.
For (2), we store a http.<uri>.sslVerify = false setting for the
original URI in the repo config.
For (3), we store the http.<uri>.sslVerify setting in the git user
config.
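For case (2), the stored setting would look roughly like this in the
repository's config (the URI is a made-up example):
  [http "https://git.example.org/"]
    sslVerify = false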
Adapt the SmartClientSmartServerSslTest such that it uses this
mechanism instead of setting http.sslVerify up front.
Improve SimpleHttpServer so that it can also be set up with HTTPS
support, in anticipation of an EGit SWTBot UI test verifying that
cloning via HTTPS from a server whose certificate doesn't validate
pops up the correct dialog, and that cloning subsequently proceeds
successfully if the user decides to skip SSL verification.
Bug: 374703
Change-Id: Ie1abada9a3d389ad4d8d52c2d5265d2764e3fb0e
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>