This reverts commit 19f869996f.
Leaving path info encoded confuses applications like Gitiles.
Trying to fix this inside of JGit may have been the wrong approach.
Change-Id: I8df9ab6233ff513e427701c8a1a66022c19784eb
In one place LocalDiskRepositoryTestCase was ignoring the specification
of whether to create a bare or non-bare repository. Fix this, and also
fix one test which now fails because bare repositories don't write
reflogs by default.
Change-Id: I4bcf8cf97c5b46e2f3919809eaa121a8d0e47010
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Gitiles malfunctions in conjunction with JGit and Guice
because of a recent Guice bug fix. Work around the problem
by parsing the URI directly, bypassing the unescaping
performed by the getPathInfo method.
The rest of this message is copied from
https://gerrit-review.googlesource.com/#/c/60820/ :
The fix for Guice issue #745[1] causes getPathInfo() within the
GuiceFilter to return decoded values, eliminating the difference
between "foo/bar" and "foo%2Fbar". This is in spec with the servlet
standard, whose javadoc for getPathInfo[2] states that the return
value be "decoded by the web container".
Work around this by extracting the path part directly from the request
URI, which is unmodified by the container. This copies the Guice
behavior prior to the bug fix.
[1] https://github.com/google/guice/issues/745
[2] http://docs.oracle.com/javaee/7/api/javax/servlet/http/HttpServletRequest.html#getPathInfo()
Change-Id: I7fdb291bda377dab6160599ee537962d5f60f1e8
Signed-off-by: David Pletcher <dpletcher@google.com>
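A minimal sketch of the idea, assuming a handler mapped at the context
root; the names here are illustrative, not the actual Gitiles/JGit code:

    import javax.servlet.http.HttpServletRequest;

    // Recover the raw, still-encoded path from the request URI instead of
    // getPathInfo(), which the container (and now Guice) decodes.
    static String rawPathInfo(HttpServletRequest req) {
        String uri = req.getRequestURI();    // left undecoded by the container
        String ctx = req.getContextPath();
        String path = uri.startsWith(ctx) ? uri.substring(ctx.length()) : uri;
        return path.isEmpty() ? null : path; // "foo%2Fbar" keeps its %2F
    }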
PostReceiveHooks can make use of this information to, for example,
update a cached size of the Git repository.
Change-Id: I2bf1200959a50531e2155a7609c96035ba45b10d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This change implements the HTTP connection abstraction with the help of
org.apache.http.client.HttpClient. The default implementation used by
JGit is still the JDK HttpURLConnection, but JGit users now have the
option to switch completely to Apache HttpClient. The reason for this
is that certain (e.g. cloud) environments force you to use the
org.apache classes.
Change-Id: I0b357f23243ed13a014c79ba179fa327dfe318b2
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
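A hedged usage sketch; the factory class name and package are assumed to
be the Apache-based implementation shipped with JGit
(org.eclipse.jgit.transport.http.apache), so verify against your JGit
version:

    import org.eclipse.jgit.transport.HttpTransport;
    import org.eclipse.jgit.transport.http.apache.HttpClientConnectionFactory;

    public class UseApacheHttp {
        public static void main(String[] args) {
            // Switch all JGit HTTP transports from the JDK's HttpURLConnection
            // to the Apache HttpClient based implementation.
            HttpTransport.setConnectionFactory(new HttpClientConnectionFactory());
        }
    }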
Necessary to get tests to pass when running under Buck.
Has no impact on Maven based invocation of tests.
Change-Id: I437114863df0bac346c94ef13796def47333d916
/tmp is a symbolic link and some tests break when the path
gets canonicalized by JGit or Jetty. Allow Jetty to serve
symlinks by setting init parameter "aliases" to true [1].
[1] http://wiki.eclipse.org/Jetty/Howto/How_to_serve_symbolically_linked_files
Change-Id: I45359a40435e8a33def6e0bb6784b4d8637793ac
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
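As a sketch, the test server setup boils down to something like the
following (Jetty 7-era API; treat the exact class and setter as an
assumption):

    import org.eclipse.jetty.servlet.ServletContextHandler;

    // Let Jetty serve content reached through symbolic links such as /tmp,
    // instead of rejecting the aliased (canonicalized) path.
    static ServletContextHandler newContext() {
        ServletContextHandler app =
                new ServletContextHandler(ServletContextHandler.SESSIONS);
        app.setInitParameter("aliases", "true"); // see [1]
        return app;
    }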
Setting the walk and other fields to null will result in NPEs when the
user then e.g. calls fetch on the connection, but at least the advertised
refs can be read this way without having a local repository.
Bug: 413389
Change-Id: I39c8363e81a1c7e6cb3412ba88542ead669e69ed
Signed-off-by: Robin Stocker <robin@nibor.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
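For illustration, reading only the advertised refs without a local
repository might look like this (a sketch against the public Transport
API; exception handling trimmed to the essentials):

    import java.util.Map;
    import org.eclipse.jgit.lib.Ref;
    import org.eclipse.jgit.transport.FetchConnection;
    import org.eclipse.jgit.transport.Transport;
    import org.eclipse.jgit.transport.URIish;

    static Map<String, Ref> lsRemote(String url) throws Exception {
        Transport tn = Transport.open(new URIish(url));
        try {
            FetchConnection c = tn.openFetch();
            try {
                // Safe: only reads the advertisement. Calling fetch on this
                // connection would still hit the NPEs mentioned above.
                return c.getRefsMap();
            } finally {
                c.close();
            }
        } finally {
            tn.close();
        }
    }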
Allow users to provide their own OutputStream (via
Transport#push(monitor, refUpdates, out)) so that server messages can be
written to it (in SideBandInputStream) while they're coming in.
CQ: 7065
Bug: 398404
Change-Id: I670782784b38702d52bca98203909aca0496d1c0
Signed-off-by: Andre Dietisheim <andre.dietisheim@gmail.com>
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
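A rough usage sketch of the new overload (parameter types as implied by
the message above; treat the exact signature as an assumption):

    import java.util.Collection;
    import org.eclipse.jgit.lib.NullProgressMonitor;
    import org.eclipse.jgit.transport.RemoteRefUpdate;
    import org.eclipse.jgit.transport.Transport;

    static void pushShowingServerMessages(Transport tn,
            Collection<RemoteRefUpdate> updates) throws Exception {
        // Server side band messages are written to System.out as they arrive,
        // instead of only becoming visible after the push has finished.
        tn.push(NullProgressMonitor.INSTANCE, updates, System.out);
    }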
This breaks all existing callers once. Applications are not supposed
to build against the internal storage API unless they can accept API
churn and make necessary updates as versions change.
Change-Id: I2ab1327c202ef2003565e1b0770a583970e432e9
I have unfortunately introduced a few bugs in the native Git client
over the years. Version 1.7.5 is unable to send chunked requests
correctly, resulting in corrupt data at the server. Reject this client
with an error message whenever it uses chunked encoding.
Prior to some more recent versions, git push over HTTP failed to
report status information and error messages due to a race within
the client and its helper process. Check for these bad versions and
send errors as messages before the status report, enabling users
to see the failures on their terminal.
Change-Id: Ic62d6591cbd851d21dbb3e9b023d655eaecb0624
This reverts commit 24a0f47e32 and
updates JGit dependencies to use the latest available Jetty 7.x
release. We can't use Jetty 8.x since it depends on Servlet API 3.0,
which requires Java 6, while JGit still wants to support Java 5.
Use one of the target platforms defined in
Ibf67a6d3539fa0708a3e5dbe44fb899c56fbd8ed to work with that in Eclipse.
Change-Id: I343273d994dc7b6e0287c604e5926ff77d5b585b
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This never should have been in the core library test suite, as that
test suite never should depend upon the HTTP server module.
Change-Id: Ie0528c4d1c755823303d138e327a3a2f4caccc32
Error messages are typically short, below the 32 KiB in-memory buffer
size of the SmartOutputStream. When an error was queued up for sending
to a client and an exception was thrown up into the servlet handler, we
discarded the message and sent nothing to the client, as the message
was stuck inside the SmartOutputStream buffer.
Hoist the creation of the output stream above the try block that
invokes the service, and use close() in the few catch blocks that
assume there are buffered messages ready for transmission. This
ensures errors from unpacking a stream in ReceivePack are sent to the
client correctly; previously no status report arrived at the client
side because the data was stuck in the buffer.
Change-Id: I5534b560697731121f48979ae077aa7c95b8e39c
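In outline the fix looks roughly like this (a sketch only; both helper
methods are hypothetical stand-ins for the real servlet code):

    import java.io.IOException;
    import java.io.OutputStream;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    abstract class ServiceHandler {
        // Hypothetical stand-ins for the real buffered stream and service call.
        abstract OutputStream openSmartResponseStream(HttpServletRequest req,
                HttpServletResponse rsp) throws IOException;

        abstract void runService(HttpServletRequest req, HttpServletResponse rsp,
                OutputStream out) throws IOException;

        void handle(HttpServletRequest req, HttpServletResponse rsp)
                throws IOException {
            // Create the buffered stream before the try block, so error text
            // queued by the service can still reach the client on failure.
            OutputStream out = openSmartResponseStream(req, rsp);
            try {
                runService(req, rsp, out);
            } catch (IOException fail) {
                // The service already queued its error message into 'out';
                // closing flushes the buffered message instead of dropping it.
                out.close();
                return;
            }
            out.close();
        }
    }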
I modified the way errors are returned, and this particular test is
now getting a different access denied response. The new text happens
to be what I intended to have here, so update the test.
Change-Id: I53f8410ca0a52755d80473cd5cbcdb4d8502febf
It's useful to have ReflogEntry refactored out so it can be
used by clients via the JGit API.
Change-Id: I03044df9af9f9547777545b7c9b93bdf5f8b7cb5
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
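For example, clients might read reflog entries through the public API
roughly like this (package locations of ReflogEntry/ReflogReader vary
across JGit versions, so treat the imports as assumptions):

    import java.util.List;
    import org.eclipse.jgit.lib.ReflogEntry;
    import org.eclipse.jgit.lib.ReflogReader;
    import org.eclipse.jgit.lib.Repository;

    static void printHeadReflog(Repository repo) throws Exception {
        ReflogReader reader = repo.getReflogReader("HEAD");
        if (reader == null)
            return; // e.g. bare repositories may not write reflogs
        List<ReflogEntry> entries = reader.getReverseEntries();
        for (ReflogEntry e : entries)
            System.out.println(e.getNewId().name() + " " + e.getComment());
    }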
Smart HTTP clients may request both multi_ack_detailed and no-done in
the same request to prevent the client from needing to send a "done"
line to the server in response to a server's "ACK %s ready".
For smart HTTP, this can save 1 full HTTP RPC in the fetch exchange,
improving overall latency when incrementally updating a client that
has not diverged very far from the remote repository.
Unfortunately this capability cannot be enabled for the traditional
bi-directional connections. multi_ack_detailed has the client sending
more "have" lines at the same time that the server is creating the
"ACK %s ready" and writing out the PACK stream, resulting in some race
conditions and/or deadlock, depending on how the pipe buffers are
implemented. For very small updates, a server might actually be able
to send "ACK %s ready", then the PACK, and disconnect before the
client even finishes sending its first batch of "have" lines. This
may cause the client to fail with a broken pipe exception. To avoid
all of these potential problems, "no-done" is restricted only to the
smart HTTP variant of the protocol.
Change-Id: Ie0d0a39320202bc096fec2e97cb58e9efd061b2d
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When the client is clearly making a smart HTTP request to our smart
HTTP server, return any errors like RepositoryNotFoundException or
ServiceNotEnabledException inside of the payload as a Git level ERR
message, rather than an HTTP error code.
This prevents the C Git command line client from retrying a failed
"$URL/info/refs?service=git-upload-pack" request without the smart
service URL, only to fail again with "403 Forbidden" when the dumb
as-is service has been disabled by the server configuration, or is
unavailable because the repository is not on the local filesystem.
Change-Id: I57e8756d5026e885e0ca615979bfcd729703be6c
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
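On the wire the error is just a pkt-line whose payload starts with
"ERR "; a hedged sketch of producing it:

    import java.io.IOException;
    import java.io.OutputStream;
    import org.eclipse.jgit.transport.PacketLineOut;

    // Report a Git-level error inside the 200 OK smart HTTP payload instead
    // of an HTTP status code, so the C Git client prints it and stops.
    static void sendGitError(OutputStream body, String message)
            throws IOException {
        PacketLineOut pckOut = new PacketLineOut(body);
        pckOut.writeString("ERR " + message + "\n");
        pckOut.flush();
    }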
Some clients coming through proxies may advertise a different
Accept-Encoding, for example "Accept-Encoding: gzip(proxy)".
Matching by substring causes a false positive: we conclude that the
client understands gzip encoding and will inflate the
response before reading it.
In this particular case, however, it doesn't. It's the reverse proxy
server in front of JGit letting us know the proxy<->JGit link can
be gzip compressed, while the client<->proxy part of the link is not:
client <-- no gzip --> proxy <-- gzip --> JGit
Use a more standard method of parsing by splitting the value into
tokens, and only using gzip if one of the tokens is exactly the
string "gzip". Add a unit test to make sure this isn't broken in
the future.
Change-Id: Ib4c40f9db177322c7a2640808a6c10b3c4a73819
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
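A minimal sketch of the token-based check (illustrative; the real code
lives in the smart HTTP servlet glue):

    // Treat "gzip" as accepted only when it appears as a complete token,
    // so values like "gzip(proxy)" from an intermediate proxy do not match.
    static boolean acceptsGzipEncoding(String acceptEncoding) {
        if (acceptEncoding == null)
            return false;
        for (String token : acceptEncoding.split(",")) {
            if ("gzip".equals(token.trim()))
                return true;
        }
        return false;
    }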
Some clients coming through proxies may advertise a different
Accept-Encoding, for example "Accept-Encoding: gzip(proxy)".
Matching by substring causes a false positive: we conclude that the
client understands gzip encoding and will inflate the
response before reading it.
In this particular case, however, it doesn't. It's the reverse proxy
server in front of JGit letting us know the proxy<->JGit link can
be gzip compressed, while the client<->proxy part of the link is not:
client <-- no gzip --> proxy <-- gzip --> JGit
Use a more standard method of parsing by splitting the value into
tokens, and only using gzip if one of the tokens is exactly the
string "gzip". Add a unit test to make sure this isn't broken in
the future.
Change-Id: I30cda8a6d11ad235b56457adf54a2d27095d964e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Using a resolver and factory pattern for the anonymous git:// Daemon
class makes transport.Daemon more useful on non-file storage systems,
or in embedded applications where the caller wants more precise
control over the work tasks constructed within the daemon.
Rather than defining new interfaces, move the existing HTTP ones
into transport.resolver and make them generic on the connection
handle type. For HTTP, continue to use HttpServletRequest, and
for transport.Daemon use DaemonClient.
To remain compatible with transport.Daemon, FileResolver needs to
learn how to use multiple base directories, and how to export any
Repository instance at a fixed name.
Change-Id: I1efa6b2bd7c6567e983fbbf346947238ea2e847e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
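Wiring a custom resolver into the anonymous daemon might then look
roughly like this (a sketch; the storage lookup helper is hypothetical):

    import java.net.InetSocketAddress;
    import org.eclipse.jgit.errors.RepositoryNotFoundException;
    import org.eclipse.jgit.lib.Repository;
    import org.eclipse.jgit.transport.Daemon;
    import org.eclipse.jgit.transport.DaemonClient;
    import org.eclipse.jgit.transport.resolver.RepositoryResolver;

    static Daemon startDaemon() throws Exception {
        Daemon daemon = new Daemon(new InetSocketAddress(9418));
        daemon.setRepositoryResolver(new RepositoryResolver<DaemonClient>() {
            @Override
            public Repository open(DaemonClient client, String name)
                    throws RepositoryNotFoundException {
                // Hypothetical lookup in whatever storage the application uses.
                return lookupInApplicationStorage(name);
            }
        });
        daemon.start();
        return daemon;
    }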
Properly handle return value of java.io.File.createNewFile().
Change-Id: I3a74cc84cd126ca1a0eaccc77b2944d783ff0747
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
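The pattern this enforces, as a tiny sketch:

    import java.io.File;
    import java.io.IOException;

    // createNewFile() returns false instead of throwing when the file already
    // exists, so the result must be checked rather than silently dropped.
    static void createLockFile(File lock) throws IOException {
        if (!lock.createNewFile())
            throw new IOException("Could not create lock file " + lock.getPath());
    }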
Eclipse has some problems re-running single JUnit tests if
the tests are in JUnit 3 format but the JUnit 4 launcher
is used. This was quite unnecessary, and the move was not
completed; we still have no JUnit 4 tests.
This completes the extermination of JUnit 3. Most of the
work was global search/replace using regular expressions,
followed by numerous invocations of quick-fix and organize
imports, and verification that we had the same number of
tests before and after.
- Annotations were introduced.
- All references to JUnit3 classes removed
- Half-good replacement for getting the test name. This was
needed to make the TestRngs work. The initialization of
TestRngs was also made lazy since we can no longer find
out the test name at runtime in the @Before methods.
- Renamed test classes to end with Test, with the exception
of TestTranslateBundle, which fails from Maven
- Moved JGitTestUtil to the junit support bundle
Change-Id: Iddcd3da6ca927a7be773a9c63ebf8bb2147e2d13
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
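For reference, the JUnit 4 shape the tests were moved to, including the
TestName rule that stands in for JUnit 3's getName() (a generic sketch,
not a specific JGit test):

    import static org.junit.Assert.assertTrue;

    import org.junit.Before;
    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.rules.TestName;

    public class ExampleTest {
        @Rule
        public TestName testName = new TestName();

        @Before
        public void setUp() {
            // Replaces JUnit 3's overridden setUp(); the test name is now
            // available via the rule instead of getName().
        }

        @Test
        public void readsTestName() {
            assertTrue(testName.getMethodName().startsWith("reads"));
        }
    }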
When the Config is changed, it should be saved back to its local
file. This ensures that a future call to getConfig() won't wipe
out the edits that were just made.
Change-Id: Id46d3f85d1c9b377f63ef861b72824e1aa060eee
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
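A usage sketch against today's config API (StoredConfig here stands in
for whatever concrete config type getConfig() returns in your JGit
version):

    import org.eclipse.jgit.lib.Repository;
    import org.eclipse.jgit.lib.StoredConfig;

    static void enableReflogs(Repository repo) throws Exception {
        StoredConfig cfg = repo.getConfig();
        cfg.setBoolean("core", null, "logallrefupdates", true);
        // Persist to $GIT_DIR/config so a later getConfig() reload
        // does not wipe out the in-memory edit.
        cfg.save();
    }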
Each time getConfig() is called on FileRepository, it checks the
last modified time of both ~/.gitconfig and $GIT_DIR/config. If
$GIT_DIR/config appears to have been modified, it is read back in
from disk and the current config is wiped out.
When mutating a configuration file, this may cause in-memory edits
to disappear. To avoid that, callers need to avoid calling getConfig()
until after the configuration has been saved to disk.
Unfortunately the API is still horribly broken. Configuration should
be modified only while a lock is held on the configuration file, very
similar to the way a ref is updated via its locking protocol. But our
existing API is really broken for that, so we'll have to defer cleaning
up the edit path to a future change.
Change-Id: I5888dd97bac20ddf60456c81ffc1eb8df04ef410
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Introduce an HTTP test bundle to make this functionality available for
EGit tests. A simple HTTP server class is provided. The Jetty version
was updated to a version that is also available via p2 (needed in EGit
UI tests).
Change-Id: I13bfc4c6c47e27d8f97d3e9752347d6d23e553d4
Signed-off-by: Jens Baumgart <jens.baumgart@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This change is based on http://egit.eclipse.org/r/#change,1652
by David Green. The change adds the concept of a CredentialsProvider
which can be registered for git transports and which is
responsible for returning credential-related data like passwords and
usernames. Whenever the transport detects that an authentication
with certain credentials has to be done, it will ask the
CredentialsProvider for this data. Foreseen implementations of
such a provider may be an EGitCredentialsProvider (caching
credential data entered e.g. in the clone wizard) or a NetRcProvider
(gathering data from the ~/.netrc file).
Bug: 296201
Change-Id: Ibe13e546b45eed3e193c09ecb414bbec2971d362
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Stefan Lay <stefan.lay@sap.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: David Green <dgreen99@gmail.com>
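A usage sketch with the stock username/password provider and today's
porcelain API (the command-level setter is an assumption for older JGit
versions, where the provider was set on the Transport instead):

    import org.eclipse.jgit.api.Git;
    import org.eclipse.jgit.transport.CredentialsProvider;
    import org.eclipse.jgit.transport.UsernamePasswordCredentialsProvider;

    static Git cloneWithCredentials(String uri, String user, String password)
            throws Exception {
        CredentialsProvider cp =
                new UsernamePasswordCredentialsProvider(user, password);
        return Git.cloneRepository()
                .setURI(uri)
                .setCredentialsProvider(cp)
                .call();
    }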
The auth-scheme token (like "Basic" or "Digest") is not specified in a
case-sensitive way. RFC 2617 (http://tools.ietf.org/html/rfc2617) specifies
in section 1.2 the use of a "case-insensitive token to identify the
authentication scheme". Jetty, for example, uses "basic" as token.
Change-Id: I635a94eb0a741abcb3e68195da6913753bdbd889
Signed-off-by: Stefan Lay <stefan.lay@sap.com>
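The required comparison is simply case-insensitive, for example:

    // Accept "Basic", "basic", "BASIC", etc., per RFC 2617 section 1.2.
    static boolean isBasicScheme(String authHeader) {
        return authHeader != null
                && authHeader.regionMatches(true, 0, "Basic ", 0, 6);
    }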
Some strings were not externalized. Also use them in HTTP tests to
ensure that they will also succeed when message bundles are
translated.
Change-Id: Id02717176557e7d57e676e1339cd89f2be88d330
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Since 858b2c92 we have an HTTP authentication implementation, hence
we now get different exception messages when required authentication
headers are not available. This broke the HTTP tests.
Change-Id: Ie08c1ec37e497c2a6f70a75f7c59f0805812a5cc
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
On April 27, 2010 the Logger interface was upgraded with a number of new methods
to make it consistent with the implementations it was meant to support.
This patch makes RecordingLogger consistent with the Logger interface and also allows
using Jetty 7.1.5, released with Helios, which can be installed from the p2 repository
at http://download.eclipse.org/jetty/7.1.5.v20100705/repository
Change-Id: I5645436bbe7492f82d4069e4d9cbebede0bf764e
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This move isolates all of the local file specific implementation code
into a single package, where their package-private methods and support
classes are properly hidden away from the rest of the core library.
Because of the sheer number of files impacted, I have limited this
change to only the renames and the updated imports.
Change-Id: Icca4884e1a418f83f8b617d0c4c78b73d8a4bd17
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When the surrounding code is already heavily based upon the
assumption that we have a FileRepository (e.g. because it
created that type of repository) keep the type around and
use it directly. This permits us to continue to do things
like save the configuration file.
Change-Id: Ib783f0f6a11acd6aa305c16d61ccc368b46beecc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Change the Repository API to use straight-up FileBasedConfig.
This lets us remove the subclass RepositoryConfig and stop having
a specialized configuration type for repository, letting us instead
focus the config type hierarchy on type-of-storage rather than use.
Change-Id: I7236800e8090624453a89cb0c7a9a632702691c6
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Static classes are preferable to keep unwanted dependencies away,
and they have one less member field.
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
The HTTP server side code now uses the same approach that the smart
HTTP client code uses when preparing a request body. The payload
is streamed into a TemporaryBuffer of limited size. If the entire
data fits, it's compressed with gzip if the user agent supports that,
and a Content-Length header is used to transmit the fixed-length
body to the peer. If however the data overflows the limited memory
segment, it's streamed uncompressed to the peer.
One might initially think that larger contents which overflow
the buffer should also be compressed, rather than sent raw, since
they were deemed "large". But usually these larger contents are
actually a pack file which has been already heavily compressed by
Git specific routines. Trying to deflate that with gzip is probably
going to take up more space, not less, so the compression overhead
isn't worthwhile.
This buffer and compress optimization helps repositories with a
large number of references, as their text based advertisements
compress well. For example jgit's own native repository currently
requires 32,628 bytes for its full advertisement of 489 references.
Most repositories have fewer references, and thus could compress
their entire response in one buffer.
Change-Id: I790609c9f763339e0a1db9172aa570e29af96f42
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
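The decision logic, sketched with the payload already collected and with
simplified names (the real code streams through JGit's TemporaryBuffer
rather than a byte array):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.zip.GZIPOutputStream;
    import javax.servlet.http.HttpServletResponse;

    static void send(byte[] payload, boolean acceptsGzip,
            HttpServletResponse rsp) throws IOException {
        if (acceptsGzip && payload.length <= 32 * 1024) {
            // Small response (e.g. a ref advertisement): gzip it and send
            // with a fixed Content-Length.
            ByteArrayOutputStream tmp = new ByteArrayOutputStream();
            GZIPOutputStream gz = new GZIPOutputStream(tmp);
            gz.write(payload);
            gz.finish();
            rsp.setHeader("Content-Encoding", "gzip");
            rsp.setContentLength(tmp.size());
            tmp.writeTo(rsp.getOutputStream());
        } else {
            // Large payloads are usually pack data, already compressed by Git;
            // gzip would likely grow them, so stream the bytes as-is.
            rsp.setContentLength(payload.length);
            rsp.getOutputStream().write(payload);
        }
    }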
If the application wants to, it can use sendError(String) to send one
or more error messages to clients before the advertisements are sent.
These will cause a C Git client to break out of the advertisement
parsing loop, display "remote error: message\n", and terminate.
Servers can optionally use this to send a detailed error to a client
explaining why it cannot use the ReceivePack service on a repository.
Over smart HTTP these errors are sent in a 200 OK response, and
are in the payload, allowing the Git client to give the end-user
the custom message rather than the generic error "403 Forbidden".
Change-Id: I03f4345183765d21002118617174c77f71427b5a
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
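A hedged usage sketch (sendError(String) on ReceivePack, as described
above; the maintenance check itself is illustrative):

    import org.eclipse.jgit.lib.Repository;
    import org.eclipse.jgit.transport.ReceivePack;

    // Illustrative: refuse pushes while still giving the end user a readable
    // reason instead of a bare "403 Forbidden".
    static ReceivePack createReceivePack(Repository db, boolean frozen) {
        ReceivePack rp = new ReceivePack(db);
        if (frozen)
            rp.sendError("This repository is temporarily read-only for maintenance.");
        return rp;
    }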
Any messages received on side band #2 that aren't scraped as a
progress message into our ProgressMonitor are now forwarded to a
buffer which is later included into the OperationResult object.
Application callers can use this buffer to present the additional
messages from the remote peer after the push or fetch operation
has concluded.
The smart push connections using the native send-pack/receive-pack
protocol now request side-band-64k capability if it is available
and forward any messages received through that channel onto this
message buffer. This makes hook messages available over smart HTTP,
or even over SSH.
The SSH transport was modified to redirect the remote command's
stderr stream into the message buffer, interleaved with any data
received over side band #2. Due to buffering between these two
different channels in the SSH channel mux itself the order of any
writes between the two cannot be ensured, but it tries to stay close.
The local fork transport was also modified to redirect the local
receive-pack's stderr into the message buffer, rather than going to
the invoking JVM's System.err. This gives applications a chance
to log the local error messages, rather than needing to redirect
their JVM's stderr before startup.
To keep things simple, the application has to wait for the entire
operation to complete before it can see the messages. This may
be a downside if the user is trying to debug a remote hook that is
blocking indefinitely; the user would need to abort the connection
before they can inspect the message buffer in any sort of UI built
on top of JGit.
Change-Id: Ibc215f4569e63071da5b7e5c6674ce924ae39e11
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
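After the operation completes, an application can read the collected
text roughly like this (using the porcelain push command; getMessages()
is on OperationResult):

    import org.eclipse.jgit.api.Git;
    import org.eclipse.jgit.transport.PushResult;

    static void pushAndShowRemoteMessages(Git git) throws Exception {
        for (PushResult result : git.push().call()) {
            // Hook output and other side band #2 text collected during the push.
            String messages = result.getMessages();
            if (!messages.isEmpty())
                System.out.println(messages);
        }
    }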
Ensure the background Jetty threads have been able to write the
request log record before the JUnit thread tries to read the set
of requests back. This wait is necessary because the JUnit thread
may be able to continue as soon as Jetty has finished writing
the response onto the socket, and hasn't necessarily finished the
post-response logging activity.
By using a semaphore with a fixed number of resources, using one
resource per request but all of them when we want to read the log,
we implement a simple lock that requires there be no active requests
when we want to get the log from the JUnit thread.
Change-Id: I499e1c96418557185d0e19ba8befe892f26ce7e4
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
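The locking idea as a standalone sketch:

    import java.util.concurrent.Semaphore;

    // Simple "no active requests" lock: each in-flight request holds one
    // permit, and reading the log requires all permits, so it waits for
    // quiescence.
    class AccessLogLock {
        private static final int MAX_REQUESTS = 16;
        private final Semaphore active = new Semaphore(MAX_REQUESTS, true);

        void requestStarted() throws InterruptedException {
            active.acquire(); // one permit per request being logged
        }

        void requestFinished() {
            active.release();
        }

        void readLogSafely(Runnable readLog) throws InterruptedException {
            active.acquire(MAX_REQUESTS); // wait until no request holds a permit
            try {
                readLog.run();
            } finally {
                active.release(MAX_REQUESTS);
            }
        }
    }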
No Eclipse support for this project is provided, because the
Jetty project does not publish a complete P2 repository.
Change-Id: Ic5fe2e79bb216e36920fd4a70ec15dd6ccfd1468
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>