5e8e2179b2 (Update Jetty to 9.4.1.v20170120, 2017-01-26) updated Jetty
in the maven build.
Update the buck build to match so buck builds work again.
The buck build will go away soon, but in the meantime (until the bazel
build gets the same level of support) it is convenient as a faster way
of running tests than using maven.
The bazel build doesn't need this change since it doesn't build or run
http tests yet.
Change-Id: Ibbdaf2880e76b32fc9f6b5605a2ff29e3deffda2
(cherry picked from commit 2470f01d0f)
MappedLoginService is no longer available in Jetty 9.4, therefore base
TestLoginService on AbstractLoginService.
Apparently Jetty now uses slf4j, hence adapt RecordingLogger accordingly
so that we can log error messages containing slf4j-style formatting
anchors "{}".
Change-Id: Ibb36aba8782882936849b6102001a88b699bb65c
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
(cherry picked from commit 5e8e2179b2)
In order to limit the number of directories we check for emptiness, only
consider fanout directories which contained unreferenced loose objects
deleted in the same gc run.
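A rough sketch of the idea (names are illustrative, not JGit's actual GC
code): remember the fanout directories we deleted objects from and
afterwards test only those for emptiness.

  import java.io.File;
  import java.util.HashSet;
  import java.util.Set;

  // Illustrative: collect fanout directories touched by the deletion pass
  // and remove only those that ended up empty.
  class PruneSketch {
    private final Set<File> touchedFanoutDirs = new HashSet<>();

    void deleteLooseObject(File looseObject) {
      if (looseObject.delete()) {
        touchedFanoutDirs.add(looseObject.getParentFile());
      }
    }

    void deleteEmptyFanoutDirs() {
      for (File dir : touchedFanoutDirs) {
        String[] entries = dir.list();
        if (entries != null && entries.length == 0) {
          dir.delete();
        }
      }
    }
  }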
Change-Id: Idf8d512867ee1c8ed40bd55752122ce83a98ffa2
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
An orphan file is either a bitmap or an idx file in the pack folder
whose corresponding pack file is missing.
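For illustration, a simplified scan that flags such orphans; the class
and method names are hypothetical, not the actual change:

  import java.io.File;
  import java.util.ArrayList;
  import java.util.List;

  // Illustrative: an .idx or .bitmap file is an orphan when the pack file
  // it belongs to no longer exists.
  class OrphanScanSketch {
    static List<File> findOrphans(File packDir) {
      List<File> orphans = new ArrayList<>();
      File[] files = packDir.listFiles();
      if (files == null) {
        return orphans;
      }
      for (File f : files) {
        String name = f.getName();
        String base = null;
        if (name.endsWith(".idx")) {
          base = name.substring(0, name.length() - ".idx".length());
        } else if (name.endsWith(".bitmap")) {
          base = name.substring(0, name.length() - ".bitmap".length());
        }
        if (base != null && !new File(packDir, base + ".pack").exists()) {
          orphans.add(f);
        }
      }
      return orphans;
    }
  }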
Change-Id: I3c4cb1f7aa99dd7b398bdb8d513f528d7761edff
Signed-off-by: Hongkai Liu <hongkai.liu@ericsson.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This fixes the error the javadoc plugin raises when generating the maven
site for this bundle.
Change-Id: I72026aa33be86960747a246af4a70f6a91b40102
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* origin/stable-4.5:
Fix one case of missing object
Change-Id: Ia6384f4be71086d5a0a8c42c7521220f57dfd086
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
We only need the tree id to add it to a TreeWalk, so change the tree's
type to AnyObjectId.
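For example, adding a tree to a TreeWalk only needs its id (a sketch
assuming a Repository and the tree id are already at hand):

  import org.eclipse.jgit.lib.AnyObjectId;
  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.treewalk.TreeWalk;

  // Sketch: TreeWalk.addTree accepts any AnyObjectId pointing at a tree.
  class TreeWalkSketch {
    static TreeWalk walk(Repository repo, AnyObjectId treeId) throws Exception {
      TreeWalk tw = new TreeWalk(repo);
      tw.addTree(treeId);
      tw.setRecursive(true);
      return tw;
    }
  }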
Bug: 509385
Change-Id: I98dd5fef15cd173fe1fd84273f0f48e64e12e608
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Infer [1] is a static code checker.
[1] http://fbinfer.com/
Change-Id: I880cefe0a20f6af88ab10f6e862fda44fbe0883d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Recently we have frequently suffered from OutOfMemoryError when creating
source archives in the Maven build. maven-source-plugin 3.0.0 has a bug
[1] causing the OOM, which is fixed in 3.0.1.
[1] https://issues.apache.org/jira/browse/MSOURCES-94
Change-Id: Ie900bd546c42523c5a04e22bd3d3f510d2a81ca2
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When a repository is being GCed and a concurrent push is received, there
is the possibility of having a missing object. This is due to the fact
that after the list of objects to delete is built, there is a window of
time when an unreferenced and ready to delete object can be referenced
by the incoming push. In that case, the object would be deleted because
there is no way to know it is no longer unreferenced. This will leave
the repository in an inconsistent state and most of the operations fail
with a missing tree/object error.
Since the incoming push changes the last modified date of the now
referenced object, verify that the object is still a candidate for
deletion before actually performing the delete operation.
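A minimal sketch of that re-check (method and parameter names are
illustrative, not the actual JGit code): before deleting, compare the
loose object's current mtime against the time the candidate list was
built.

  import java.io.File;

  // Illustrative: skip deletion when the object file was touched after the
  // candidate list was built, e.g. because a concurrent push referenced it.
  class SafeDeleteSketch {
    static void deleteIfStillStale(File looseObject, long candidateListBuiltAt) {
      if (looseObject.lastModified() >= candidateListBuiltAt) {
        return; // touched since the scan; may now be referenced again
      }
      looseObject.delete();
    }
  }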
Change-Id: Iadcb29b8eb24b0cb4bb9335b670443c138a60787
Signed-off-by: Hector Oswaldo Caballero <hector.caballero@ericsson.com>
FileSnapshot.isModified may have reported a file to be clean although it
was actually dirty.
Imagine you have a FileSnapshot on file f. lastModified and lastRead are
both t0. Now time is t1 and you
1) modify the file
2) update the FileSnapshot of the file (lastModified=t1, lastRead=t1)
3) modify the file again
4) wait 3 seconds
5) ask the FileSnapshot whether the file is dirty or not. It erroneously
answers that it's clean.
Any file which was last modified more than 2.5 seconds ago was reported
to be clean. As the test shows, that's not always correct.
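The flawed check can be pictured roughly like this (a deliberately
simplified sketch of the described behavior, not the actual FileSnapshot
code): the cached state is only distrusted when the file's mtime is
recent relative to the current time, so a second modification within the
same mtime tick goes unnoticed once 2.5 seconds have passed.

  // Simplified illustration of the race described above.
  class SnapshotSketch {
    final long lastRead;     // when we last read the file
    final long lastModified; // the file's mtime at that point

    SnapshotSketch(long lastRead, long lastModified) {
      this.lastRead = lastRead;
      this.lastModified = lastModified;
    }

    // Buggy variant: with currentMtime == lastModified == t1 and
    // now == t1 + 3s this returns false (clean) although the file was
    // modified again after the snapshot was taken.
    boolean isModifiedBuggy(long currentMtime, long now) {
      if (currentMtime != lastModified) {
        return true;
      }
      return (now - currentMtime) <= 2500;
    }
  }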
The real-world problem fixed by this change is the following:
* A gerrit server using JGit to serve git repositories is processing
fetch requests while simultaneously a native git garbage collection
runs on the repo.
* At time t1 native git writes temporary files in the pack folder
setting the mtime of the pack folder to t1.
* A fetch request causes JGit to search for new packfiles and JGit
remembers this scan in a FileSnapshot on the pack folder. Since the gc
is not finished JGit doesn't see any new packfiles.
* The fetch is processed and the gc ends while the filesystem timer is
still t1. GC writes a new packfile and deletes the old packfile.
* 3 seconds later another request arrives. JGit does not yet know about
the new packfile but is also not rescanning the pack folder because it
cached that the last scan happened at time t1 and the pack folder's
mtime is also t1. Now JGit will not be able to resolve any object
contained in this new pack. This behavior may persist if objects
referenced by the refs/meta/config branch are affected: Gerrit can't
read the permissions stored in the refs/meta/config branch anymore and
will not allow any pushes. The pack folder will not change its mtime and
therefore no rescan will take place.
Change-Id: I3efd0ccffeb97b01207dc3e7a6b85c6b06928fad
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When the reply is already compressed (e.g. a packfile fetched using dumb
HTTP), "Content-Encoding: gzip" wastes bandwidth relative to sending the
content raw. So don't send "Accept-Encoding: gzip" for such requests.
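Sketch of the idea with a plain HttpURLConnection; the alreadyCompressed
decision is hypothetical, JGit's actual logic lives in TransportHttp:

  import java.io.IOException;
  import java.net.HttpURLConnection;
  import java.net.URL;

  // Illustrative: only advertise gzip when the expected payload is not
  // already compressed (e.g. skip it for packfiles fetched over dumb HTTP).
  class AcceptEncodingSketch {
    static HttpURLConnection open(URL url, boolean alreadyCompressed)
        throws IOException {
      HttpURLConnection c = (HttpURLConnection) url.openConnection();
      if (!alreadyCompressed) {
        c.setRequestProperty("Accept-Encoding", "gzip");
      }
      return c;
    }
  }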
Change-Id: Id25702c0b0ed2895df8e9790052c3417d713572c
Signed-off-by: Zhen Chen <czhen@google.com>
We faced many OOM errors on Hudson recently, so increase the maximum heap size.
Change-Id: I81d4d562a06afcd8b4ff7d1f69c4d7f12099afad
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
The add(long) method was deprecated in favor of addWord(long) in
the 0.8.3 release of JavaEWAH [1].
[1] https://github.com/lemire/javaewah/commit/e443cf5e
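The replacement is a simple rename at each call site (sketch using
JavaEWAH's EWAHCompressedBitmap; the word value is arbitrary):

  import com.googlecode.javaewah.EWAHCompressedBitmap;

  class EwahSketch {
    static EWAHCompressedBitmap example() {
      EWAHCompressedBitmap bitmap = new EWAHCompressedBitmap();
      // was: bitmap.add(0xFFFFL); -- deprecated since JavaEWAH 0.8.3
      bitmap.addWord(0xFFFFL);
      return bitmap;
    }
  }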
Change-Id: I89c397ed02e040f57663d04504399dfdc0889626
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Providing an own implementation of the doGet/doPut methods is
troublesome when these methods are private.
Change-Id: I098cdc5cb90410eaaebc56c88c2d9e168584dd6d
Signed-off-by: Jacek Centkowski <geminica.programs@gmail.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
* stable-4.5:
Use the same variable to check and extract LFS object id
Change-Id: I314bd4373f40843c68853b3999f60d85e08628d9
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
The latest version removes the dependency on `realpath`, which is not
available by default on OSX.
This upgrades buck to the same version used on Gerrit master.
Change-Id: I584211986be4e64f68d4eb905c09d3c5d60133e7
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Fix JGit's merge-base calculation in case of inconsistent commit times.
JGit was potentially failing to compute correct merge-bases when the
commit times were inconsistent (a parent commit was younger than a
child commit). The code in MergeBaseGenerator was aware of the fact that
sometimes the discovery of a merge base x can occur after the parents of
x have been seen (see comment in #carryOntoOne()). But in the light of
inconsistent commit times it was possible that these parents of a
merge-base had already been returned as a merge-base.
This commit fixes the bug by buffering all commits generated by
MergeBaseGenerator. It is expected that this buffer will be small
because the number of merge-bases will be small. Additionally a new
flag is used to mark the ancestors of merge-bases. This allows
filtering out the unwanted commits.
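Conceptually it works along these lines (a toy sketch with a made-up
Commit type, using a set where the real code uses a flag; this is not
the actual MergeBaseGenerator): buffer every candidate merge base, mark
the ancestors of all candidates, and finally keep only the unmarked
candidates.

  import java.util.ArrayDeque;
  import java.util.ArrayList;
  import java.util.Deque;
  import java.util.HashSet;
  import java.util.List;
  import java.util.Set;

  // Toy model: a candidate that is an ancestor of another candidate is not
  // a merge base and gets filtered out at the end.
  class MergeBaseFilterSketch {
    static class Commit {
      final List<Commit> parents = new ArrayList<>();
    }

    static List<Commit> filter(List<Commit> candidates) {
      Set<Commit> ancestors = new HashSet<>();
      for (Commit c : candidates) {
        markAncestors(c, ancestors);
      }
      List<Commit> result = new ArrayList<>();
      for (Commit c : candidates) {
        if (!ancestors.contains(c)) {
          result.add(c);
        }
      }
      return result;
    }

    private static void markAncestors(Commit start, Set<Commit> marked) {
      Deque<Commit> work = new ArrayDeque<>(start.parents);
      while (!work.isEmpty()) {
        Commit c = work.pop();
        if (marked.add(c)) {
          work.addAll(c.parents);
        }
      }
    }
  }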
Bug: 507584
Change-Id: I9cc140b784c3231b972bd2c3de61a789365237ab
Maven 3.3.1 or later supports sharing project-specific command line
options:
https://maven.apache.org/docs/3.3.1/release-notes.html
Change-Id: I61ddf08ff8c0ebb76fc2e1d0a0cd5902c65f9469
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
LFS pointer files have to be UTF-8 with \n as the line ending character,
as described in [1]. Fix JGit to follow these rules.
[1] https://github.com/github/git-lfs/blob/master/docs/spec.md
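For example, a pointer written explicitly as UTF-8 with "\n" line
endings could look like this (a sketch; the method and its callers are
hypothetical):

  import java.io.IOException;
  import java.io.OutputStream;
  import java.nio.charset.StandardCharsets;

  // Illustrative: build the pointer text with "\n" endings and encode it as
  // UTF-8, independent of the platform default charset and line separator.
  class LfsPointerWriteSketch {
    static void write(OutputStream out, String oid, long size) throws IOException {
      String pointer = "version https://git-lfs.github.com/spec/v1\n"
          + "oid sha256:" + oid + "\n"
          + "size " + size + "\n";
      out.write(pointer.getBytes(StandardCharsets.UTF_8));
    }
  }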
Bug: 507120
Change-Id: Ib6bd13f1cc17f1a3de125249b4f250b7b0692396
It is easier to maintain when the same variable is used for both the
check and the extraction of the LFS object id.
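Sketch of the pattern with a plain regex Matcher (identifiers are
illustrative): the same matcher both validates the line and yields the
object id.

  import java.util.regex.Matcher;
  import java.util.regex.Pattern;

  // Illustrative: one Matcher serves for both the validity check and the
  // extraction of the object id.
  class OidParseSketch {
    private static final Pattern OID_LINE =
        Pattern.compile("oid sha256:([0-9a-f]{64})");

    static String parse(String line) {
      Matcher m = OID_LINE.matcher(line);
      return m.matches() ? m.group(1) : null;
    }
  }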
Change-Id: I5406f9bc4a085aa164c4565a9667ad2925105190
Signed-off-by: Jacek Centkowski <geminica.programs@gmail.com>
This does not address all cases where no message is specified, only
cases where Repository#isValidRefName returns false.
Change-Id: Ib88cdabfdcdf37be0053e06949b0e21ad87a9575
Signed-off-by: Grace Wang <gracewang92@gmail.com>
TransportHttp sets 'Accept-Encoding: gzip' to allow the server to
compress HTTP responses. When fetching a loose object over HTTP, it
uses the following code to read the response:
  InputStream in = openInputStream(c);
  int len = c.getContentLength();
  return new FileStream(in, len);
If the content is gzipped, openInputStream decompresses it and produces
the correct content for the object. Unfortunately the Content-Length
header contains the length of the compressed stream instead of the
actual content length. Use a length of -1 instead since we don't know
the actual length.
Loose objects are already compressed, so the gzip encoding typically
produces a longer compressed payload. The value from the Content-Length
header is too high, producing "EOFException: Short read of block".
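The fix therefore amounts to something along these lines (a sketch of
the corrected call site; openInputStream and FileStream are the
surrounding code quoted above):

  InputStream in = openInputStream(c);
  return new FileStream(in, -1 /* length unknown after gzip decompression */);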
Change-Id: I8d5284dad608e3abd8217823da2b365e8cd998b0
Signed-off-by: Zhen Chen <czhen@google.com>
Helped-by: Jonathan Nieder <jrn@google.com>
Per the interface specification, the getContentLength method should
return -1 if content length is unknown or greater than
Integer.MAX_VALUE.
For chunked transfer encoding, the content length is not included in the
header, hence trying to parse the content length header will cause a
NullPointerException.
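A defensive variant might look roughly like this (the header accessor is
standard java.net, the rest is illustrative):

  // Illustrative: a missing header (e.g. chunked encoding) or an oversized
  // value maps to -1 instead of failing on a null or out-of-range parse.
  static int contentLength(java.net.HttpURLConnection c) {
    String v = c.getHeaderField("Content-Length");
    if (v == null) {
      return -1;
    }
    try {
      long n = Long.parseLong(v);
      return (n < 0 || n > Integer.MAX_VALUE) ? -1 : (int) n;
    } catch (NumberFormatException e) {
      return -1;
    }
  }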
Change-Id: Iaa36b5c146a8d654e364635fa0bd2d14129baf17
Signed-off-by: Zhen Chen <czhen@google.com>
The InputStream in FileStream in downloadPack is never closed.
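One way to guarantee the stream gets closed is try-with-resources (a
sketch; openInputStream and parsePack stand in for the surrounding code
and are not the actual change):

  // Illustrative: the stream is closed once the pack data has been consumed.
  try (InputStream in = openInputStream(c)) {
    parsePack(in);
  }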
Change-Id: I59975d0b8d51f4b3e3ba9d4496b254d508cb936d
Signed-off-by: Zhen Chen <czhen@google.com>