Do not open an OBJ_TREE if the caller is expecting an OBJ_BLOB or
OBJ_COMMIT; instead throw IncorrectObjectTypeException. This better
matches the behavior of WindowCursor, the ObjectReader implementation
for the local file-based object store.
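For illustration, the intended check looks roughly like this (the helper
class and method are illustrative, not the actual DfsReader code):

  import org.eclipse.jgit.errors.IncorrectObjectTypeException;
  import org.eclipse.jgit.lib.AnyObjectId;
  import org.eclipse.jgit.lib.ObjectReader;

  // Illustrative only: reject an object whose actual type does not match
  // the caller's type hint instead of returning it anyway.
  final class TypeHintCheck {
      static void verify(AnyObjectId id, int typeHint, int actualType)
              throws IncorrectObjectTypeException {
          if (typeHint != ObjectReader.OBJ_ANY && actualType != typeHint)
              throw new IncorrectObjectTypeException(id.copy(), typeHint);
      }

      private TypeHintCheck() {
          // static helper
      }
  }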
Change-Id: I3fb0e77f54895b123679a405e1b6ba5b95752ff0
DfsRefDatabase#compareAndPut had vague semantics for reference
matching. Because of this, the operation that creates a symbolic
reference was broken for some DFS implementations even though they
followed the contract of compareAndPut. The clarified semantics
require implementations to satisfy the following:
* Matching references should be both symbolic references or both
object ID references.
* If both are symbolic references, both should have the same target
name.
* If both are object ID references, both should have the same object
ID.
This semantics is defined based on
https://git.eclipse.org/r/#/c/77416/. Before this commit,
DfsRefDatabase couldn't see the target of symbolic references.
InMemoryRepository is changed to comply with the new semantics. This
semantics change can affect existing DFS implementations that only
check object IDs. This commit adds two tests that the previous
InMemoryRepository couldn't pass.
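For illustration, the clarified matching rule can be captured by a helper
like the following (the helper itself is illustrative, not the actual
InMemoryRepository code):

  import org.eclipse.jgit.lib.Ref;

  // Illustrative only: two refs match when both are symbolic with the same
  // target name, or both are object ID refs with the same object ID.
  final class RefMatch {
      static boolean matches(Ref expected, Ref actual) {
          if (expected.isSymbolic() != actual.isSymbolic())
              return false;
          if (expected.isSymbolic())
              return expected.getTarget().getName()
                      .equals(actual.getTarget().getName());
          return expected.getObjectId() != null
                  && expected.getObjectId().equals(actual.getObjectId());
      }

      private RefMatch() {
      }
  }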
Change-Id: I6c6b5d3cc8241a81f4a37782381c88e8a59fdf15
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
The RepositoryTestCase hierarchy no longer comes from TestCase, so all
test methods must have @Test.
Fix one test that was broken but never run; fortunately this was just
a typo in the test code.
Change-Id: I3ac8ccdab5e2d5539c63d7b0a88d8bdb0c5ff66e
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
A JUnit TestRule which makes it possible to run the same JUnit test
repeatedly. This may help to identify the root cause of a flaky test
which succeeds most of the time but fails occasionally.
Add the RepeatRule to the test class containing the test to be repeated:
public class MyTest {
    @Rule
    public RepeatRule repeatRule = new RepeatRule();
    ...
}
and annotate the test to be repeated with the @Repeat(n=<repetitions>)
annotation:
@Test
@Repeat(n = 100)
public void test() {
    ...
}
then this test will be repeated 100 times. If any test execution fails,
test repetition is stopped.
Change-Id: I7c49ccebe1cb00bcde6b002b522d95c13fd3a35e
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When doing a detaching operation, JGit fakes a SymbolicRef as an
ObjectIdRef. This is because RefUpdate#updateImpl dereferences the
SymbolicRef when updating it. For example, assume that HEAD is
pointing to refs/heads/master. If I try to make a detached HEAD
pointing to a commit c0ffee, RefUpdate dereferences HEAD as
refs/heads/master first and changes refs/heads/master to c0ffee. The
detach argument of RefDatabase#newUpdate avoids this dereference by
faking HEAD as ObjectIdRef.
This faking is problematic for the linking operation of
DfsRefDatabase. It does a compare-and-swap operation on every
reference change because of its distributed systems nature. If a
SymbolicRef is faked as an ObjectIdRef, the database thinks that there
is a racing change to the reference and rejects the update. Because of
this, DFS based repositories cannot change the link target of symbolic
refs. This has not been a problem for file-based repositories because
they have a file-lock based semantics instead of the CAS based one.
The reference implementation, InMemoryRepository, is not affected
because it only compares ObjectIds.
When [1] introduced this faking code, there was no way for RefUpdate
to distinguish the detaching operation. When [2] fixed the detaching
operation, it introduced a detachingSymbolicRef flag. This commit uses
this flag to control whether it needs to dereference the symbolic refs
by calling Ref#getLeaf. The same flag is used in the reflog update
operation.
This commit does not affect any operation that succeeds currently. In
some DFS repository implementations, this fixes a ref linking
operation, which is currently failing.
[1]: 01b5392cdb
[2]: 3a86868c08
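For illustration, the flag-controlled dereference amounts to something like
this (names are illustrative, not the actual DfsRefUpdate code):

  import org.eclipse.jgit.lib.Ref;

  // Illustrative only: when detaching, keep the ref as handed in (already
  // faked as an ObjectIdRef by newUpdate); otherwise follow the symbolic
  // ref to its leaf so the update applies to the real target.
  final class UpdateTarget {
      static Ref resolve(Ref ref, boolean detachingSymbolicRef) {
          if (detachingSymbolicRef)
              return ref;
          return ref.getLeaf();
      }

      private UpdateTarget() {
      }
  }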
Change-Id: I118f85f0414dbfad02250944e28d74dddd59469b
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Since [1], the git-lfs specification allows the server to return
HTTP 507 if there is insufficient storage for the uploaded object(s).
Add a new exception class, which implementations may throw from the
getRepository() method, causing HTTP 507 to be returned to the client.
[1] https://github.com/github/git-lfs/pull/1473
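For illustration, the mapping could look like this (the exception and
helper names are hypothetical; only the 507 status comes from the
specification):

  import java.io.IOException;
  import javax.servlet.http.HttpServletResponse;

  // Hypothetical exception type thrown by an implementation when it runs
  // out of storage for uploaded objects.
  class InsufficientStorageException extends Exception {
      private static final long serialVersionUID = 1L;

      InsufficientStorageException(String message) {
          super(message);
      }
  }

  class StatusMapping {
      static void send(HttpServletResponse rsp, Exception e)
              throws IOException {
          if (e instanceof InsufficientStorageException)
              rsp.sendError(507, e.getMessage()); // 507 Insufficient Storage
      }
  }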
Change-Id: If5bc0a35fcf870d4216af6ca2f7c8924689ef9c5
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Imitate the packet tracing feature from C Git v1.7.5-rc0~58^2~1 (add
packet tracing debug code, 2011-02-24). Unlike C Git, use the log4j
log level setting instead of the GIT_TRACE_PACKET environment variable
to enable tracing.
Tested as follows:
1. Enable tracing by adding the lines
log4j.logger.org.eclipse.jgit.transport=DEBUG, stderr
log4j.additivity.org.eclipse.jgit.transport=false
to org.eclipse.jgit.pgm/resources/log4j.properties.
2. mvn package
3. org.eclipse.jgit.pgm/target/jgit \
ls-remote git://git.kernel.org/pub/scm/git/git 2>&1 |less
Then the output provides a trace of packets sent and received over
the wire:
2016-08-24 16:36:42 DEBUG PacketLineOut:145 - git> git-upload-pack /pub/scm/git/git^@host=git.kernel.org^@
2016-08-24 16:36:42 DEBUG PacketLineIn:165 - git< 2632c897f74b1cc9b5533f467da459b9ec725538 HEAD^@multi_ack thin-pack side-band side-band-64k ofs-delta shallow no-progress include-tag multi_ack_detailed symref=HEAD:refs/heads/master agent=git/2.8.4
2016-08-24 16:36:42 DEBUG PacketLineIn:165 - git< e0c1ceafc5bece92d35773a75fff59497e1d9bd5 refs/heads/maint
Change-Id: I5028c064f3ac090510386057cb4e6d30d4eae232
Signed-off-by: Dan Wang <dwwang@google.com>
Unless the user passed --push-option, the client does not intend to
pass push options to the server.
Without this change, all pushes to servers without push option support
fail.
Not enabling the feature (instead of enabling it and sending an empty
list of options) in this case is more intuitive and matches the
behavior of C git push's --push-option parameter better.
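For illustration, a hypothetical helper expressing the intended behavior
(not actual PushCommand/Transport code):

  import java.util.List;

  // Illustrative only: request the push-options capability only when the
  // user actually supplied options; a null list means --push-option was
  // not given, which differs from sending an empty list.
  final class PushOptionsCapability {
      static boolean shouldRequest(List<String> pushOptions) {
          return pushOptions != null;
      }

      private PushOptionsCapability() {
      }
  }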
Bug: 500149
Change-Id: Ia4f13840cc54d8ba54e99b1432108f1c43022c53
Signed-off-by: Stefan Beller <sbeller@google.com>
HttpClientConnection uses a TemporaryBufferEntity which uses
TemporaryBuffer.LocalFile to buffer an HttpEntity. It was leaking
temporary files if the buffered entities were larger than 1MB since it
failed to destroy the TemporaryBuffer.LocalFile.
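A minimal sketch of the required lifecycle (standalone, not the actual
HttpClientConnection code):

  import org.eclipse.jgit.util.TemporaryBuffer;

  public class BufferLifecycle {
      public static void main(String[] args) throws Exception {
          // null directory = JVM default temporary directory
          TemporaryBuffer.LocalFile buffer = new TemporaryBuffer.LocalFile(null);
          try {
              buffer.write(new byte[2 * 1024 * 1024]); // > 1MB, spills to disk
              buffer.close();
              // ... hand the buffer to the HTTP entity and send it ...
          } finally {
              buffer.destroy(); // removes the on-disk overflow file
          }
      }
  }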
Bug: 500079
Change-Id: Ib963e04efc252bdd0420a5c69b1a19181e9e6169
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This fixes tests that failed on JDK 8.
FS uses the java.nio API to get file attributes. Since Java 8, the
timestamps obtained from that API are more precise than the ones from
java.io.File#lastModified().
This difference accidentally makes JGit detect newly added files as
smudged. Use the more precise timestamp to avoid this false positive.
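The difference is easy to observe directly (standalone example, not JGit
code):

  import java.io.File;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.nio.file.attribute.FileTime;

  public class TimestampPrecision {
      public static void main(String[] args) throws Exception {
          Path path = Files.createTempFile("jgit", ".tmp");
          File file = path.toFile();
          FileTime nioTime = Files.getLastModifiedTime(path);
          // The legacy API truncates to milliseconds (or coarser), while the
          // NIO attribute keeps the precision the filesystem provides.
          System.out.println("java.io.File#lastModified(): " + file.lastModified());
          System.out.println("NIO lastModifiedTime():      " + nioTime);
          Files.delete(path);
      }
  }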
Bug: 500058
Change-Id: I9e587583c85cb6efa7562ad6c5f26577869a2e7c
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Signed-off-by: Andrey Loskutov <loskutov@gmx.de>
The metadata comparison of submodules is not reliable because of the
last modified timestamp and directory length.
Bug: 498759
Change-Id: If5db69ef3868e475ac477d3e8a7750b268799b0c
The exception can be thrown for various reasons, and sometimes 403
Forbidden is not appropriate. Make the HTTP status code customizable.
Change-Id: If2ef6f454f7479158a4e28a12909837db483521c
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
The git-lfs specification [1] describes the following optional status codes
that may be returned:
429 - The user has hit a rate limit with the server. Though the API does
not specify any rate limits, implementors are encouraged to set some
for availability reasons.
509 - Returned if the bandwidth limit for the user or repository has been
exceeded. The API does not specify any bandwidth limit, but implementors
may track usage.
Add two new exception classes to represent these cases. Implementations may
throw these from #getLargeFileRepository(), causing the corresponding HTTP
status codes to be returned to the client.
[1] https://github.com/github/git-lfs/blob/master/docs/api/v1/http-v1-batch.md
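For illustration, an implementation could throw them like this (all names
below are hypothetical stand-ins, not the actual LFS servlet API):

  // Hypothetical exception types; the servlet maps them to 429 and 509.
  class LfsRateLimitExceededException extends Exception {
      private static final long serialVersionUID = 1L;
  }

  class LfsBandwidthLimitExceededException extends Exception {
      private static final long serialVersionUID = 1L;
  }

  class ThrottledLfsBackend {
      private final boolean tooManyRequests;
      private final boolean bandwidthExhausted;

      ThrottledLfsBackend(boolean tooManyRequests, boolean bandwidthExhausted) {
          this.tooManyRequests = tooManyRequests;
          this.bandwidthExhausted = bandwidthExhausted;
      }

      Object getLargeFileRepository() throws LfsRateLimitExceededException,
              LfsBandwidthLimitExceededException {
          if (tooManyRequests)
              throw new LfsRateLimitExceededException(); // -> HTTP 429
          if (bandwidthExhausted)
              throw new LfsBandwidthLimitExceededException(); // -> HTTP 509
          return new Object(); // stand-in for the real repository object
      }
  }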
Change-Id: I7b93f3cf90f7344c90b1587e07927fdeb167097e
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
If the message is not sent, the client shows:
Unable to parse HTTP response for POST http://admin@localhost:8080/test-project/info/lfs/objects/batch
Change-Id: I8b72d1aded2bcd41b7389676e2373034625a1379
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Instead of hard-coding the message strings, define them in a properties
file. This will allow them to be translated.
Change-Id: I77556881579e66b2c13d187759c7efdddfee87ae
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Instead of using hard-coded HTTP status codes, use the enums, which
makes it a bit easier to see what's expected.
Change-Id: I2da5d25632f374b8625d64da4df70d1c9c406bb1
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
AddCommandTest is flaky because an IOException is sometimes thrown:
Caused by: java.io.IOException: Stream closed
at java.lang.ProcessBuilder$NullOutputStream.write(ProcessBuilder.java:433)
at java.io.OutputStream.write(OutputStream.java:116)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
at org.eclipse.jgit.util.FS.runProcess(FS.java:993)
at org.eclipse.jgit.util.FS.execute(FS.java:1102)
at org.eclipse.jgit.treewalk.WorkingTreeIterator.filterClean(WorkingTreeIterator.java:470)
... 22 more
OpenJDK replaces the underlying OutputStream with NullOutputStream when
the process exits. This throws IOException for all write operations.
When the process exits before JGit has written the input to the pipe
buffer, the remaining input stays in the BufferedOutputStream. The
close method then tries to flush it, and IOException is thrown.
Since we ignore IOException in StreamGobbler, we also ignore it when
we close the stream.
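A minimal sketch of the chosen handling (the helper name is illustrative):

  import java.io.IOException;
  import java.io.OutputStream;

  final class QuietClose {
      static void close(OutputStream processInput) {
          try {
              processInput.close();
          } catch (IOException ignored) {
              // The child process may already have exited; losing the bytes
              // still sitting in the buffer is expected and harmless here.
          }
      }

      private QuietClose() {
      }
  }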
Fixes Bug 499633.
Change-Id: I30c7ac78e05b00bd0152f697848f4d17d53efd17
Signed-off-by: Masaya Suzuki <draftcode@gmail.com>
If a reference was updated more recently than a pack was written
(typical), the PackList was perpetually dirty until the next GC
was completed for the repository.
Detect this condition by observing no changes to the PackList
membership and resetting the dirty bit.
Change-Id: Ie2133aca1f8083307c73b6a26358175864f100ef
This will be used by EGit for implementing commit amend in the staging
view (see Idcd1efeeee8b3065bae36e285bfc0af24ab1e88f).
Change-Id: Ice9ebbb1c0c3314c679f4db40cdd3664f61c27c3
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
If the Content-Type is not set on error responses, the git-lfs client
does not read the body which contains the error message, and instead
just displays a generic error message.
Also set the charset on the Content-Type header.
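For illustration, assuming the git-lfs JSON content type (the exact value
used by the servlet may differ):

  import javax.servlet.http.HttpServletResponse;

  final class LfsErrorResponse {
      static void prepare(HttpServletResponse rsp, int status) {
          rsp.setStatus(status);
          rsp.setContentType("application/vnd.git-lfs+json");
          rsp.setCharacterEncoding("UTF-8");
          // The error body written afterwards is parsed by the client
          // instead of being replaced by a generic message.
      }

      private LfsErrorResponse() {
      }
  }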
Change-Id: I88e6f07f20b622a670e7c5063145dffb8b630aee
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Pretty printing is only used when outputting JSON content, which is
interpreted by the client and does not need to be pretty printed.
Change-Id: I48e0280241b6b0f5706300ae0f4c9bc461a89110
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Trying to open a new writer on the response causes an
IllegalStateException and the response is not sent.
Change-Id: Ic718d23cfb3e74f5691cc2aea7283003af7df207
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Reported-by: David Pursehouse <david.pursehouse@gmail.com>
Change-Id: I9e9b021d335bda4d58b6bcc30f59b81ac5b37724
Signed-off-by: Jonathan Nieder <jrn@google.com>
bb9988c2 changed the signature of getLargeFileRepository(), which only
breaks implementors; this is ok according to OSGi semantic versioning
rules.
Change-Id: I68bda7900b72e217571f74aee53705167f8100a2
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* changes:
Shallow fetch: Pass along "shallow"s in unparsed-wants case, too
Shallow fetch: Pass a DepthWalk to PackWriter
Change-Id: I7d1c3b4d0b7ebc254b53404d1618522b0174ac23
Since 84d2738ff2 (Don't skip want validation when the client sends no
haves, 2013-06-21), this branch is not taken. Process the
"shallow"s anyway as a defensive measure in case the code path gets
revived.
Change-Id: Idfb834825d77f51e17191c1635c9d78c78738cfd
Signed-off-by: Jonathan Nieder <jrn@google.com>
d385a7a5e5 (Shallow fetch: Respect "shallow" lines, 2016-08-03) forgot
that UploadPack wasn't passing a DepthWalk to PackWriter in the first
place. As a result, shallow clones fail:
java.lang.IllegalArgumentException: Shallow packs require a DepthWalk
at org.eclipse.jgit.internal.storage.pack.PackWriter.preparePack(PackWriter.java:756)
at org.eclipse.jgit.transport.UploadPack.sendPack(UploadPack.java:1497)
at org.eclipse.jgit.transport.UploadPack.sendPack(UploadPack.java:1381)
at org.eclipse.jgit.transport.UploadPack.service(UploadPack.java:774)
at org.eclipse.jgit.transport.UploadPack.upload(UploadPack.java:667)
at org.eclipse.jgit.http.server.UploadPackServlet.doPost(UploadPackServlet.java:191)
Change-Id: Ib0d8c2946eebfea910a2b767fb92e23da15d4749
This fixes the warning "src/ is missing from source.."
Change-Id: I166e3a6a3d5230e4110d3283ec4dbc7d1dfe6732
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* changes:
LfsProtocolServlet: Allow access to objects in request
LfsProtocolServlet: Allow getLargeFileRepository to raise exceptions
Remove references to org.eclipse.jgit.java7
cgit changed the --depth parameter to mean the total depth of history
rather than the depth of ancestors to be returned [1]. JGit still uses
the latter meaning, so update it to match cgit.
depth=0 still means a non-shallow clone. depth=1 now means only the
wants rather than the wants and their direct parents.
This is accomplished by changing the semantic meaning of "depth" in
UploadPack and PackWriter to mean the total depth of history desired,
while keeping "depth" in DepthWalk.{RevWalk,ObjectWalk} to mean
the depth of traversal. Thus UploadPack and PackWriter always
initialize their DepthWalks with "depth-1".
[1] upload-pack: fix off-by-one depth calculation in shallow clone
https://code.googlesource.com/git/+/682c7d2f1a2d1a5443777237450505738af2ff1a
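A minimal sketch of the conversion (the helper is illustrative; it assumes
the caller already validated depth >= 1):

  import org.eclipse.jgit.lib.ObjectReader;
  import org.eclipse.jgit.revwalk.DepthWalk;

  final class DepthConversion {
      // A user-facing --depth of N (total history) becomes a traversal
      // depth of N - 1 for DepthWalk.
      static DepthWalk.RevWalk newShallowWalk(ObjectReader reader, int depth) {
          return new DepthWalk.RevWalk(reader, depth - 1);
      }

      private DepthConversion() {
      }
  }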
Change-Id: I87ed3c0f56c37e3491e367a41f5e555c4207ff44
Signed-off-by: Terry Parker <tparker@google.com>
When fetching from a shallow clone, the client sends "have" lines
to tell the server about objects it already has and "shallow" lines
to tell where its local history terminates. In some circumstances,
the server fails to honor the shallow lines and fails to return
objects that the client needs.
UploadPack passes the "have" lines to PackWriter so PackWriter can
omit them from the generated pack. UploadPack processes "shallow"
lines by calling RevWalk.assumeShallow() with the set of shallow
commits. RevWalk creates and caches RevCommits for these shallow
commits, clearing out their parents. That way, walks correctly
terminate at the shallow commits instead of assuming the client has
history going back behind them. UploadPack converts its RevWalk to an
ObjectWalk, maintaining the cached RevCommits, and passes it to
PackWriter.
Unfortunately, to support shallow fetches the PackWriter does the
following:
if (shallowPack && !(walk instanceof DepthWalk.ObjectWalk))
walk = new DepthWalk.ObjectWalk(reader, depth);
That is, when the client sends a "deepen" line (fetch --depth=<n>)
and the caller has not passed in a DepthWalk.ObjectWalk, PackWriter
throws away the RevWalk that was passed in and makes a new one. The
cleared parent lists prepared by RevWalk.assumeShallow() are lost.
Fortunately UploadPack intends to pass in a DepthWalk.ObjectWalk.
It tries to create it by calling toObjectWalkWithSameObjects() on
a DepthWalk.RevWalk. But it doesn't work: because DepthWalk.RevWalk
does not override the standard RevWalk#toObjectWalkWithSameObjects
implementation, the result is a plain ObjectWalk instead of an
instance of DepthWalk.ObjectWalk.
The result is that the "shallow" information is thrown away and
objects reachable from the shallow commits can be omitted from the
pack sent when fetching with --depth from a shallow clone.
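The type requirement can be checked directly (this is a small illustration
of the requirement, not the fix itself):

  import org.eclipse.jgit.lib.ObjectReader;
  import org.eclipse.jgit.revwalk.DepthWalk;
  import org.eclipse.jgit.revwalk.ObjectWalk;

  final class WalkTypeCheck {
      // Without an override of toObjectWalkWithSameObjects() in
      // DepthWalk.RevWalk this returns false, and PackWriter's
      // "walk instanceof DepthWalk.ObjectWalk" test discards the
      // shallow state; with the override it returns true.
      static boolean keepsDepthInfo(ObjectReader reader, int depth) {
          DepthWalk.RevWalk rw = new DepthWalk.RevWalk(reader, depth);
          ObjectWalk ow = rw.toObjectWalkWithSameObjects();
          return ow instanceof DepthWalk.ObjectWalk;
      }

      private WalkTypeCheck() {
      }
  }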
Multiple factors collude to limit the circumstances under which this
bug can be observed:
1. Commits with depth != 0 don't enter DepthGenerator's pending queue.
That means a "have" cannot have any effect on DepthGenerator unless
it is also a "want".
2. DepthGenerator#next() doesn't call carryFlagsImpl(), so the
uninteresting flag is not propagated to ancestors there even if a
"have" is also a "want".
3. JGit treats a depth of 1 as "1 past the wants".
Because of (2), the only place the UNINTERESTING flag can leak to a
shallow commit's parents is in the carryFlags() call from
markUninteresting(). carryFlags() only traverses commits that have
already been parsed: commits yet to be parsed are supposed to inherit
correct flags from their parent in PendingGenerator#next (which
doesn't happen here --- that is (2)). So the list of commits that have
already been parsed becomes relevant.
When we hit the markUninteresting() call, all "want"s, "have"s, and
commits to be unshallowed have been parsed. carryFlags() only
affects the parsed commits. If the "want" is a direct parent of a
"have", then it carryFlags() marks it as uninteresting. If the "have"
was also a "shallow", then its parent pointer should have been null
and the "want" shouldn't have been marked, so we see the bug. If the
"want" is a more distant ancestor then (2) keeps the uninteresting
state from propagating to the "want" and we don't see the bug. If the
"shallow" is not also a "have" then the shallow commit isn't parsed
so (2) keeps the uninteresting state from propagating to the "want"
so we don't see the bug.
Here is a reproduction case (time flowing left to right, arrows
pointing to parents). "C" must be a commit that the client
reports as a "have" during negotiation. That can only happen if the
server reports it as an existing branch or tag in the first round of
negotiation:
A <-- B <-- C <-- D
First do
git clone --depth 1 <repo>
which yields D as a "have" and C as a "shallow" commit. Then try
git fetch --depth 1 <repo> B:refs/heads/B
Negotiation sets up: have D, shallow C, have C, want B.
But due to this bug B is marked as uninteresting and is not sent.
Change-Id: I6e14b57b2f85e52d28cdcf356df647870f475440
Signed-off-by: Terry Parker <tparker@google.com>
Classes implementing the LFS servlet should be able to inspect the
objects given in the request.
Add a getObjects method. Make the LfsObject class public, and add
accessor methods.
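For illustration, with the objects exposed an implementation could, for
example, size-check a batch up front (all types and accessor names below
are hypothetical stand-ins, not the actual LFS classes):

  import java.util.List;

  final class RequestInspection {
      static long totalUploadSize(List<LfsPointer> objects) {
          long total = 0;
          for (LfsPointer o : objects)
              total += o.size;
          return total;
      }

      // Hypothetical stand-in for an object entry taken from the request.
      static final class LfsPointer {
          final String oid;
          final long size;

          LfsPointer(String oid, long size) {
              this.oid = oid;
              this.size = size;
          }
      }

      private RequestInspection() {
      }
  }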
Change-Id: I27961679f620eb3a89dc8521aadd4ea2f936c60e
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>