This fixes the warning "src/ is missing from source.."
Change-Id: I166e3a6a3d5230e4110d3283ec4dbc7d1dfe6732
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* changes:
LfsProtocolServlet: Allow access to objects in request
LfsProtocolServlet: Allow getLargeFileRepository to raise exceptions
Remove references to org.eclipse.jgit.java7
cgit changed the --depth parameter to mean the total depth of history
rather than the depth of ancestors to be returned [1]. JGit still uses
the latter meaning, so update it to match cgit.
depth=0 still means a non-shallow clone. depth=1 now means only the
wants rather than the wants and their direct parents.
This is accomplished by changing the semantic meaning of "depth" in
UploadPack and PackWriter to mean the total depth of history desired,
while keeping "depth" in DepthWalk.{RevWalk,ObjectWalk} to mean
the depth of traversal. Thus UploadPack and PackWriter always
initialize their DepthWalks with "depth-1".
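For illustration, the adjustment amounts to something like the
following when the walk is set up (a minimal sketch; the surrounding
setup and variable names are assumptions, not the actual
UploadPack/PackWriter code):
  // "depth" is the total depth of history requested by the client;
  // DepthWalk still expects the number of ancestor levels to traverse.
  int traversalDepth = depth - 1;
  DepthWalk.ObjectWalk ow = new DepthWalk.ObjectWalk(reader, traversalDepth);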
[1] upload-pack: fix off-by-one depth calculation in shallow clone
https://code.googlesource.com/git/+/682c7d2f1a2d1a5443777237450505738af2ff1a
Change-Id: I87ed3c0f56c37e3491e367a41f5e555c4207ff44
Signed-off-by: Terry Parker <tparker@google.com>
When fetching from a shallow clone, the client sends "have" lines
to tell the server about objects it already has and "shallow" lines
to tell where its local history terminates. In some circumstances,
the server fails to honor the shallow lines and fails to return
objects that the client needs.
UploadPack passes the "have" lines to PackWriter so PackWriter can
omit them from the generated pack. UploadPack processes "shallow"
lines by calling RevWalk.assumeShallow() with the set of shallow
commits. RevWalk creates and caches RevCommits for these shallow
commits, clearing out their parents. That way, walks correctly
terminate at the shallow commits instead of assuming the client has
history going back behind them. UploadPack converts its RevWalk to an
ObjectWalk, maintaining the cached RevCommits, and passes it to
PackWriter.
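The intended hand-off looks roughly like this (a simplified sketch;
variable names are illustrative and error handling is omitted):
  DepthWalk.RevWalk rw = new DepthWalk.RevWalk(reader, depth);
  rw.assumeShallow(clientShallowCommits); // caches RevCommits with cleared parents
  ObjectWalk ow = rw.toObjectWalkWithSameObjects(); // should keep those cached commits
  // ow is then handed to PackWriter to produce the pack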
Unfortunately, to support shallow fetches the PackWriter does the
following:
if (shallowPack && !(walk instanceof DepthWalk.ObjectWalk))
walk = new DepthWalk.ObjectWalk(reader, depth);
That is, when the client sends a "deepen" line (fetch --depth=<n>)
and the caller has not passed in a DepthWalk.ObjectWalk, PackWriter
throws away the RevWalk that was passed in and makes a new one. The
cleared parent lists prepared by RevWalk.assumeShallow() are lost.
Fortunately UploadPack intends to pass in a DepthWalk.ObjectWalk.
It tries to create it by calling toObjectWalkWithSameObjects() on
a DepthWalk.RevWalk. But it doesn't work: because DepthWalk.RevWalk
does not override the standard RevWalk#toObjectWalkWithSameObjects
implementation, the result is a plain ObjectWalk instead of an
instance of DepthWalk.ObjectWalk.
The result is that the "shallow" information is thrown away and
objects reachable from the shallow commits can be omitted from the
pack sent when fetching with --depth from a shallow clone.
Multiple factors collude to limit the circumstances under which this
bug can be observed:
1. Commits with depth != 0 don't enter DepthGenerator's pending queue.
That means a "have" cannot have any effect on DepthGenerator unless
it is also a "want".
2. DepthGenerator#next() doesn't call carryFlagsImpl(), so the
uninteresting flag is not propagated to ancestors there even if a
"have" is also a "want".
3. JGit treats a depth of 1 as "1 past the wants".
Because of (2), the only place the UNINTERESTING flag can leak to a
shallow commit's parents is in the carryFlags() call from
markUninteresting(). carryFlags() only traverses commits that have
already been parsed: commits yet to be parsed are supposed to inherit
correct flags from their parent in PendingGenerator#next (which
doesn't happen here --- that is (2)). So the list of commits that have
already been parsed becomes relevant.
When we hit the markUninteresting() call, all "want"s, "have"s, and
commits to be unshallowed have been parsed. carryFlags() only
affects the parsed commits. If the "want" is a direct parent of a
"have", then carryFlags() marks it as uninteresting. If the "have"
was also a "shallow", then its parent pointer should have been null
and the "want" shouldn't have been marked, so we see the bug. If the
"want" is a more distant ancestor, then (2) keeps the uninteresting
state from propagating to the "want" and we don't see the bug. If the
"shallow" is not also a "have", then the shallow commit isn't parsed,
so (2) keeps the uninteresting state from propagating to the "want",
so we don't see the bug.
Here is a reproduction case (time flowing left to right, arrows
pointing to parents). "C" must be a commit that the client
reports as a "have" during negotiation. That can only happen if the
server reports it as an existing branch or tag in the first round of
negotiation:
A <-- B <-- C <-- D
First do
git clone --depth 1 <repo>
which yields D as a "have" and C as a "shallow" commit. Then try
git fetch --depth 1 <repo> B:refs/heads/B
Negotiation sets up: have D, shallow C, have C, want B.
But due to this bug B is marked as uninteresting and is not sent.
Change-Id: I6e14b57b2f85e52d28cdcf356df647870f475440
Signed-off-by: Terry Parker <tparker@google.com>
Classes implementing the LFS servlet should be able to inspect the
objects given in the request.
Add a getObjects method. Make the LfsObject class public, and add
accessor methods.
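A rough sketch of how a servlet implementation might use the new
accessors (method names other than getObjects are assumptions based
on the description above):
  for (LfsObject obj : request.getObjects()) {
    String oid = obj.getOid();  // SHA-256 id of the LFS object
    long size = obj.getSize();
    // inspect oid/size to decide how to handle the object
  }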
Change-Id: I27961679f620eb3a89dc8521aadd4ea2f936c60e
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
According to the specification [1] the server may return the following
HTTP error responses:
- 403: The user has read, but not write access.
- 404: The repository does not exist for the user.
- 422: Validation error with one or more of the objects in the request.
In the current implementation, however, getLargeFileRepository can only
return null to indicate an error. This results in the error code
503 (Service Unavailable) being returned to the client regardless of
what the actual reason was.
Add exception classes to cover these cases, derived from a common base
exception, and change the specification of getLargeFileRepository to throw
the base exception.
In LfsProtocolServlet#post, handle the new exceptions and send back the
appropriate HTTP responses as mentioned above.
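For example, the handling in post could follow a pattern like this
(a sketch; the concrete exception class names and the sendError
helper are illustrative placeholders):
  try {
    LargeFileRepository repo = getLargeFileRepository();
  } catch (LfsValidationError e) {
    sendError(res, 422, e.getMessage()); // validation error with the objects
  } catch (LfsRepositoryNotFound e) {
    sendError(res, 404, e.getMessage()); // repository does not exist for the user
  } catch (LfsRepositoryReadOnly e) {
    sendError(res, 403, e.getMessage()); // user has read, but not write access
  }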
The specification also mentions several other optional response codes (406,
429, 501, and 509) but these are not implemented in this commit. It should
be trivial to implement them in follow-up commits.
[1] https://github.com/github/git-lfs/blob/master/docs/api/v1/http-v1-batch.md#response-errors
Change-Id: I91be6165bcaf856d0cefc533882330962e2fc9b2
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
The bundle org.eclipse.jgit.java7 was removed in 4.0.
Remove references to it from the README.md.
Remove reference to it from org.eclipse.jgit.test/.project, which
causes an error message when opening the project in Eclipse:
Resource '/org.eclipse.jgit.java7' does not exist.
Change-Id: If0dbd562dcd60550bec3c0f793289474b7624bce
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
DepthWalk needs to override toObjectWalkWithSameObjects() and thus
needs to be able to directly set the objects and freeFlags fields, so
make them package private.
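For context, a sketch of the kind of override this enables in
DepthWalk.RevWalk (not the verbatim implementation; the field access
is as described above):
  @Override
  public ObjectWalk toObjectWalkWithSameObjects() {
    DepthWalk.ObjectWalk ow = new DepthWalk.ObjectWalk(reader, depth);
    ow.objects = objects;     // now possible since the field is package private
    ow.freeFlags = freeFlags;
    return ow;
  }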
Change-Id: I24561b82c54ba3d6522582ca25105b204d777074
Signed-off-by: Terry Parker <tparker@google.com>
When doing an incremental fetch from JGit, "have" commits are marked
as "uninteresting". In a non-shallow fetch, when the RevWalk hits an
"uninteresting" commit it marks the commit's corresponding tree as
uninteresting. That has the effect of dropping those trees and all the
trees and blobs they reference out of the thin pack returned to the
client.
However, shallow fetches use a DepthWalk to limit the RevWalk, which
nearly always causes the RevWalk to terminate before encountering the
"have" commits. As a result the pack created for the incremental fetch
never encounters "uninteresting" tree objects and thus includes
duplicate objects that it knows the client already has.
Change-Id: I7b1f7c3b0d83e04d34cd2fa676f1ad4fec904c05
Signed-off-by: Terry Parker <tparker@google.com>
Previously JGit would attempt to clean git repositories that had not
been committed by calling a non-recursive delete on them, which would
fail because they are directories. This commit addresses that issue in
the following ways.
Repositories are skipped in a default clean, similarly to cgit, and are
only cleaned when the force flag is applied. When the force flag is
applied, repositories are deleted using a recursive delete call. The
force flag and setForce method are added to CleanCommand here to
support this change.
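As a usage sketch (workTreeDir is a placeholder; call() throws
GitAPIException, whose handling is omitted here):
  try (Git git = Git.open(workTreeDir)) {
    Set<String> removed = git.clean()
        .setCleanDirectories(true)
        .setForce(true) // without this, nested repositories are skipped
        .call();
  }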
Bug: 498367
Change-Id: Ib6cfff65a033d0d0f76395060bf76719e13fc467
Signed-off-by: Matthaus Owens <matthaus@puppetlabs.com>
This commit adds some test coverage to cleaning a repository with a
submodule, which did not previously exist.
Bug: 498367
Change-Id: Ia5c4e4cc53488800dd486f8556dc57656783f1c4
Signed-off-by: Matthaus Owens <matthaus@puppetlabs.com>
We have to be able to access the enum from outside the package as part of
the API.
Change-Id: I4bdc6bd53a14237c5f4fb9397ae850f9a24c4cfb
Signed-off-by: Stefan Beller <sbeller@google.com>
Passing the request and path to the method will allow implementations
to have more control over determination of the backend, for example:
- return different backends for different requests
- accept or refuse requests based on request characteristics
- etc
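A possible shape of the resulting method signature (a sketch; the
exact parameter types and the throws clause are assumptions that
depend on the related LFS changes above):
  protected abstract LargeFileRepository getLargeFileRepository(
      LfsRequest request, String path) throws LfsException;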
Change-Id: I1ec6ec54c91a5f0601b620ed18846eb4a3f46783
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
According to the specification [1], the error response status code
should be 422 when there is a validation error with one or more of
the objects in the request.
[1] https://github.com/github/git-lfs/blob/master/docs/api/v1/http-v1-batch.md#response-errors
Change-Id: Id03fe00a2109b896d9a154228a14a33bce5accc3
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
The message "close() called when useCnt is already zero" is logged with
level warning, and then if debug logging is enabled, the stack trace is
logged separately with level debug.
Log the message and the stack trace in the same call, so that they always
appear together in the output rather than potentially interleaved with
other log statements.
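A sketch of the difference, assuming an SLF4J-style logger and
illustrative variable names:
  // before: message and stack trace are logged with separate calls
  log.warn(message);
  if (log.isDebugEnabled())
    log.debug(message, cause);
  // after: a single call keeps the message and stack trace together
  log.warn(message, cause);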
Change-Id: I1b5c1557ddc2d19f3f5b29baec96e62bc467d88a
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
This prevents the warning:
Potential heap pollution via varargs parameter
The method doesn't do any casting of types that would cause heap
pollution, so it should be safe to add @SafeVarargs.
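A minimal illustration of the pattern on a hypothetical method (not
the method changed here):
  @SafeVarargs
  static <T> List<T> listOf(T... items) {
    // only reads the varargs array and does no unsafe casts,
    // so suppressing the warning is safe
    return Arrays.asList(items);
  }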
See [1] for information about this warning.
[1] http://stackoverflow.com/a/12462259/381622
Change-Id: Ic6d252915ea44b4f1c385afecb98906cd2c54382
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
The location of the API v1 documentation has changed. Update the
links accordingly.
Change-Id: If0148a0b573c474bbe157fcb7e6674c0055fe8b4
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
* stable-4.4:
JGit v4.4.1.201607150455-r
RefDirectory: remove ref lock file for following ref dir removal
Change-Id: Ifc8a782efd7f2f991e70ad2a3691a8dba66c7554
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Gerrit's superproject subscription feature uses RefSpecs to formalize
the ACLs that determine when subscription is allowed.
As this is a slightly different use case than describing a local/remote
pair of refs, we need to be more permissive. Specifically we want to allow:
refs/heads/*
refs/heads/*:refs/heads/master
refs/heads/master:refs/heads/*
Introduce a new constructor that allows constructing these RefSpecs.
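For illustration, such specs could then be constructed like this (the
name of the new leniency parameter is an assumption):
  RefSpec heads = new RefSpec("refs/heads/*",
      RefSpec.WildcardMode.ALLOW_MISMATCH);
  RefSpec toMaster = new RefSpec("refs/heads/*:refs/heads/master",
      RefSpec.WildcardMode.ALLOW_MISMATCH);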
Change-Id: I46c0bea9d876e61eb2c8d50f404b905792bc72b3
Signed-off-by: Stefan Beller <sbeller@google.com>
We had a case in Gerrit's superproject subscriptions where
'refs/heads/' was configured with the intention of meaning
'refs/heads/*'. The expression lacks the '*', so it is not considered
a wildcard, but it was still considered valid and the typo was not
caught early.
Refs are not allowed to end with '/' anyway, so add a check for that.
Change-Id: I3ffdd9002146382acafb4fbc310a64af4cc1b7a9
Signed-off-by: Stefan Beller <sbeller@google.com>
Example usage:
$ ./jgit push \
--push-option "Reviewer=j.doe@example.org" \
--push-option "<arbitrary string>" \
origin HEAD:refs/for/master
Stefan Beller has also made an equivalent change to CGit:
http://thread.gmane.org/gmane.comp.version-control.git/299872
Change-Id: I6797e50681054dce3bd179e80b731aef5e200d77
Signed-off-by: Dan Wang <dwwang@google.com>
What's invalidated when an object database is "dirty" is not the whole
database, but rather a specific list of packs. If there is a race
between getting the pack list and setting the volatile dirty flag
where the packs are rescanned, we don't need to mark the new pack list
as dirty.
This is a fine point that only really applies if the decision of
whether or not to mark dirty actually requires introspecting the pack
list (say, its timestamps). The general operation of "take whatever
is the current pack list and mark it dirty" may still be inherently
racy, but the cost is not so high.
Change-Id: I159e9154bd8b2d348b4e383627a503e85462dcc6
This variable has been populated and never used since it was
introduced in commit 5cf53fdacf
(Speed up clone/fetch with large number of refs, 2013-02-18).
Noted by FindBugs:
"BatchRefUpdate.java:359, UC_USELESS_OBJECT, Priority: Normal"
Change-Id: I7aacb49540aaee4a83db3d38b15633bb6c4773d0
Signed-off-by: Dan Wang <dwwang@google.com>
9aa3748 added dummy implementations for loadRoleInfo() and
loadUserInfo() to class MappedLoginService to fix compile errors in
Eclipse when using the 4.6 target platform, which brings Jetty 9.3 and
adds these two methods. Unfortunately this causes errors when using a
non-4.6 target platform that comes with an older Jetty version. Fix
this by extracting the anonymous subclass of MappedLoginService, which
allows suppressing the unused private method errors in Eclipse.
Change-Id: I75baeea7ff4502ce9ef2b541b3c0555da5535d79
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Currently, there is a race where a user of a DfsRepository in a single
thread may get unexpected MissingObjectExceptions trying to look up an
object that appears as the current value of a ref:
1. Thread A scans packs before scanning refs, for example by reading
an object by SHA-1.
2. Thread B flushes an object and updates a ref to point to that
object.
3. Thread A looks up the ref updated in (2). Since it is scanning refs
for the first time, it sees the new object SHA-1.
4. Thread A tries to read the object it found in (3), using the cached
pack list it got from (1). The object appears missing.
Allow implementations to work around this by marking the object
database's current pack list as "dirty." A dirty pack list means that
DfsReader will rescan packs and try again if a requested object is
missing. Implementations should mark the pack list as dirty any time
the ref database reads or scans refs that might be newer than a
previously cached pack list.
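As a hypothetical sketch of how a ref database implementation might
use this (the accessor and method names below are illustrative, not
necessarily the real API):
  // after reading or scanning refs that may be newer than the cached pack list
  // (method names are hypothetical):
  repo.getObjectDatabase().getCurrentPackList().markDirty();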
Change-Id: I06c722b20c859ed1475628ec6a2f6d3d6d580700
* stable-4.4:
Log if Repository.useCnt becomes negative
Time based eviction strategy for repository cache
Add method to read time unit from config
Align include.path max depth with native git
Config load should not fail on unsupported or nonexistent include path
Allow using JDK 7 bootclasspath when compiling JGit using Java 8
Extract work queue to allow reusing it
Change-Id: I6aeedb1cb8b0c3068af344a719c80a03ae68fc23
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
We observe in Gerrit 2.12 that useCnt can become negative in rare cases.
Log this to help finding the bug.
Change-Id: Ie91c7f9d190a5d7cf4733d4bf84124d119ca20f7
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When Repository.close() decrements the use count to 0, the cache
currently evicts the repository from WindowCache and RepositoryCache
immediately. This leads to I/O overhead on busy repositories because
pack files and references are frequently inserted into and deleted
from the cache.
This commit defers the eviction of a repository from the caches until
the last use of the repository is older than the time to live. The
eviction is handled by a background task running periodically.
Add two new configuration parameters (illustrated in the example
below):
* core.repositoryCacheExpireAfter: a cache entry is evicted if it
wasn't accessed for longer than this time in milliseconds
* core.repositoryCacheCleanupDelay: defines the interval in milliseconds
for running a background task evicting expired cache entries. If set to
-1 the delay is set to min(repositoryCacheExpireAfter, 10 minutes). If
set to 0 time-based eviction is switched off and no background task is
started. If time-based eviction is switched off the JVM can still evict
cache entries if heap memory is running low.
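For illustration, the new settings might look like this in a git
config file (values in milliseconds, chosen arbitrarily):
  [core]
    repositoryCacheExpireAfter = 3600000
    repositoryCacheCleanupDelay = 600000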
Change-Id: I4a0214ad8b4a193985dda6a0ade63b70bdb948d7
Also-by: Matthias Sohn <matthias.sohn@sap.com>
Also-by: Hugo Arès <hugo.ares@ericsson.com>
Also-by: Sasa Zivkov <sasa.zivkov@sap.com>
Eclipse Neon comes with Jetty 9.3 which is causing unimplemented
abstract method errors in test class AppServer when using the JGit or
EGit Neon target platform. Fix this by adding dummy implementations.
Change-Id: Ie49107d814a846997de95f149e91fe1ec2fbe4d8
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
These try-with-resources blocks close the underlying output stream
twice: once when closing the CountingOutputStream wrapper, then again
when closing the DfsOutputStream out.
Simplify by only closing the CountingOutputStream.
In practice this shouldn't matter because the close() method of a
Closeable is required to be idempotent, but avoiding the redundant
close makes the code simpler to read and understand.
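A sketch of the pattern (openPackStream() is a hypothetical helper;
this assumes the counting wrapper forwards close() to the wrapped
stream, as the description above implies):
  // before: both resources are declared, so the underlying stream is closed twice
  try (DfsOutputStream out = openPackStream();
       CountingOutputStream cnt = new CountingOutputStream(out)) {
    // write through cnt
  }
  // after: only the wrapper is declared; closing it closes the wrapped stream
  try (CountingOutputStream cnt = new CountingOutputStream(openPackStream())) {
    // write through cnt
  }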
Change-Id: I1778c4fc8ba075a2c6cd2129528bb272cb3a1af7
This assertion only defined a message but didn't assert anything.
Change-Id: I0914642b64b69dc4e3ec24acbf8052f9171613d8
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>