The following refspec, which can be used to fetch GitHub pull requests,
is supported by C Git but was not yet supported by JGit:
+refs/pull/*/head:refs/remotes/origin/pr/*
The reason is that the wildcard in the source is in the middle.
This change also includes more validation (e.g. "refs//heads" is not
valid) and test cases.
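Once supported, fetching pull request heads through the JGit API could
look like this (a sketch; the repository path and remote name are
assumptions):

Git git = Git.open(new File("/path/to/repo"));
git.fetch()
  .setRemote("origin")
  .setRefSpecs(new RefSpec("+refs/pull/*/head:refs/remotes/origin/pr/*"))
  .call();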
Bug: 405099
Change-Id: I9bcef7785a0762ed0a98ca95a0bdf8879d5702aa
Allow use of ArchiveCommand without depending on the jgit command-line
tools.
To avoid complicating the process of installing and upgrading JGit,
this does not add a dependency by the org.eclipse.jgit bundle on
commons-compress. Instead, the caller is responsible for registering
any formats they want to use by calling ArchiveCommand.registerFormat.
This patch puts functionality that requires an archiver into a
separate org.eclipse.jgit.archive bundle for people who want it. One
can use it by calling ArchiveCommand.registerFormat directly to
register its formats or by relying on OSGi class loading to load
org.eclipse.jgit.archive.FormatActivator, which takes care of
registration automatically.
Once the appropriate formats are registered, you can make a tar or zip
from a git tree object as follows:
ArchiveCommand cmd = git.archive();
try {
  cmd.setTree(tree).setFormat(fmt).setOutputStream(out).call();
} finally {
  cmd.release();
}
Change-Id: I418e7e7d76422dc6f010d0b3b624d7bec3b20c6e
The most important difference is that in Java7 we have symbolic links
and for most operations in the work tree we want to operate on the link
itself rather than on the link target, which is what the old File
methods generally operate on.
We also add support for the hidden attribute, which only makes sense
on Windows, and for exists(), since there are claims that Files.exists
is faster than File.exists.
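A hedged sketch of the difference with java.nio.file (the path is an
assumption):

Path p = Paths.get("/path/to/link");
boolean isLink = Files.isSymbolicLink(p);                        // the link itself
Path target = Files.readSymbolicLink(p);                         // the target path (throws IOException)
boolean linkExists = Files.exists(p, LinkOption.NOFOLLOW_LINKS); // does not follow the link
boolean hidden = Files.isHidden(p);                              // hidden attribute (throws IOException)
boolean targetExists = new File("/path/to/link").exists();       // old API: always follows the link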
A new bundle is only activated when run with a Java7 execution
environment. It is implemented as a fragment.
Tycho currently has no way to conditionally include optional features
based on the Java version used to run the build, so with this change
the jgit packaging build always needs to be run using Java 7.
Change-Id: I3d6580d6fa7b22f60d7e54ab236898ed44954ffd
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When running the garbage collection for a repository it is often
interesting to compare the repository statistics from before and after
the garbage collection to understand the effect of the garbage
collection. This is why it makes sense for the
GarbageCollectCommand to provide a method to retrieve the repository
statistics before running the garbage collection.
So far the repository statistics could only be retrieved without
running the garbage collection by using JGit internal classes. This is
what EGit and Gerrit do at the moment, but it would be better to have
an API for this.
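Usage could then look like this (a sketch, assuming the new method is
named getStatistics() on the GarbageCollectCommand):

Git git = Git.open(new File("/path/to/repo"));
Properties before = git.gc().getStatistics(); // statistics without running GC
Properties after = git.gc().call();           // runs GC, returns statistics afterwards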
Change-Id: Id7e579157e9fbef5cfd1fc9f97ada45f0ca8c379
Signed-off-by: Edwin Kempin <edwin.kempin@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Update the PackWriter to support writing out pack bitmap indexes,
a parallel ".bitmap" file to the ".pack" file.
Bitmaps are selected at commits every 1 to 5,000 commits for
each unique path from the start. The most recent 100 commits are
all bitmapped. The next 19,000 commits have a bitmap every 100
commits. The remaining commits have a bitmap every 5,000 commits.
Commits with more than one parent are preferred over ones
with one or fewer. Furthermore, previously computed bitmaps are reused
if the previous entry had the reuse flag set, which is set when the
bitmap was placed at the max allowed distance.
Bitmaps are used to speed up the counting phase when packing, for
requests that are not shallow. The PackWriterBitmapWalker uses
a RevFilter to proactively mark commits with RevFlag.SEEN, when
they appear in a bitmap. The walker produces the full closure
of reachable ObjectIds, given the collection of starting ObjectIds.
For fetch requests, two ObjectWalks are executed to compute the
ObjectIds reachable from the haves and from the wants. The
ObjectIds that need to be written are determined by taking all the
resulting wants AND NOT the haves.
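This wants-AND-NOT-haves step maps directly onto operations of the
underlying compressed bitmaps; a sketch with javaewah (the bit
positions are illustrative):

EWAHCompressedBitmap wants = EWAHCompressedBitmap.bitmapOf(1, 2, 3, 4); // reachable from wants
EWAHCompressedBitmap haves = EWAHCompressedBitmap.bitmapOf(1, 2);       // reachable from haves
EWAHCompressedBitmap toWrite = wants.andNot(haves);                     // bits 3 and 4 remain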
For clone requests, we get cached pack support for "free" since
it is possible to determine if all of the ObjectIds in a pack file
are included in the resulting list of ObjectIds to write.
On my machine, the best times for clones and fetches of the linux
kernel repository (with about 2.6M objects and 300K commits) are
tabulated below:
Operation                      Index V2               Index VE003
Clone                          37530ms (524.06 MiB)      82ms (524.06 MiB)
Fetch (1 commit back)             75ms                   107ms
Fetch (10 commits back)          456ms (269.51 KiB)     341ms (265.19 KiB)
Fetch (100 commits back)         449ms (269.91 KiB)     337ms (267.28 KiB)
Fetch (1000 commits back)       2229ms ( 14.75 MiB)     189ms ( 14.42 MiB)
Fetch (10000 commits back)      2177ms ( 16.30 MiB)     254ms ( 15.88 MiB)
Fetch (100000 commits back)    14340ms (185.83 MiB)    1655ms (189.39 MiB)
Change-Id: Icdb0cdd66ff168917fb9ef17b96093990cc6a98d
A pack bitmap index is an additional index of compressed
bitmaps of the object graph. Furthermore, a logical API of the index
functionality is included, as it is expected to be used by the
PackWriter.
Compressed bitmaps are created using the javaewah library, which
provides a word-aligned compressed variant of the Java BitSet class
based on run-length encoding. The library only works with positive integer
values. Thus, the maximum number of ObjectIds in a pack file that
this index can currently support is limited to Integer.MAX_VALUE.
Every ObjectId is given an integer mapping. The integer is the
position of the ObjectId in the complete ObjectId list, sorted
by offset, for the pack file. That integer is what the bitmaps
use to reference the ObjectId. Currently, the new index format can
only be used with pack files that contain a complete closure of the
object graph e.g. the result of a garbage collection.
The index file includes four bitmaps for the Git object types i.e.
commits, trees, blobs, and tags. In addition, a collection of
bitmaps keyed by an ObjectId is also included. The bitmap for each entry
in the collection represents the full closure of ObjectIds reachable
from the keyed ObjectId (including the keyed ObjectId itself). The
bitmaps are further compressed by XORing the current bitmaps against
prior bitmaps in the index, and selecting the smallest representation.
The XOR'd bitmap, together with the offset from the current entry to
the position of the bitmap it was XORed against, forms the actual
representation of the entry in the index file. Each entry contains one
byte, which is currently used to note whether the bitmap should be
blindly reused.
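A sketch of the XOR selection with javaewah (bit positions are
illustrative):

EWAHCompressedBitmap current = EWAHCompressedBitmap.bitmapOf(0, 1, 2, 5);
EWAHCompressedBitmap prior = EWAHCompressedBitmap.bitmapOf(0, 1, 2, 4);
EWAHCompressedBitmap xored = current.xor(prior); // only bits 4 and 5 set
// store whichever representation is smaller, plus the offset to `prior`
boolean useXor = xored.sizeInBytes() < current.sizeInBytes();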
Change-Id: Id328724bf6b4c8366a088233098c18643edcf40f
Extend ResolveMerger with RecursiveMerger to merge two tips
that have up to 200 bases.
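Through the API, selecting the recursive strategy could look like this
(a sketch; the ref name is an assumption):

MergeResult result = git.merge()
  .setStrategy(MergeStrategy.RECURSIVE)
  .include(git.getRepository().resolve("refs/heads/side"))
  .call();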
Bug: 380314
CQ: 6854
Change-Id: I6292bb7bda55c0242a448a94956f2d6a94fddbaa
Also-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Instead of the previous complicated implementation, implement stash
apply as a cherry-pick.
Provided there are no conflicts and it is requested that
the index should be applied, perform yet another cherry-pick,
but discard the results thereof if that would result in conflicts.
Bug: 376035
Change-Id: I553f3a753e0124b102a51f8edbb53ddeff2912e2
This adds a new optional TreeFilter[] argument to DiffEntry.scan. All
filters will be checked during the scan to determine if an entry should
be "marked" with regard to that filter.
After having called scan, the user can then call isMarked(int) on the
entries to find out whether they matched the TreeFilter with the passed
index.
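A sketch of the new calls (the filters and the caller-side highlight()
handling are illustrative):

TreeFilter[] markFilters = { PathFilter.create("src"), PathSuffixFilter.create(".java") };
List<DiffEntry> entries = DiffEntry.scan(treeWalk, false, markFilters);
for (DiffEntry entry : entries)
  if (entry.isMarked(1)) // matched the ".java" suffix filter
    highlight(entry);    // hypothetical handling in the caller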
An example use case for this is in the file diff viewer of EGit's
History view, where we'd like to highlight entries that are matching the
current filter.
See EGit change I03da4b38d1591495cb290909f0e4c6e52270e97f.
Bug: 393610
Change-Id: Icf911fe6fca131b2567514f54d66636a44561af1
Signed-off-by: Robin Stocker <robin@nibor.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Previously, the Dfs GC excluded objects from packs by passing a
previously written index to the PackWriter. Reading back a file on
Dfs is slow. Instead, allow the PackWriter to expose the objects
included in a pack and forward that to invocations of excludeObjects().
Change-Id: I377cb4ab07f62cf790505e1eeb0b2efe81897c79
Cherry-pick has been fixed, but even though revert does
basically the same thing, the fixes were not carried over here.
- Recognize the revert-states, analogous to the cherry picking states
- Make reset handle a revert-in-progress
- Update REVERT_HEAD and MERGE_MSG when revert fails due to conflicts
- Clear revert state on commit and reset
- Format the message similarly to how cherry-pick does it. This is
not exactly how C Git does it.
The interface is still not the same as for cherry-picking.
Change-Id: I8ea956fcbc9526d62a2365360feea23a9280eba3
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
StartGenerator now processes .git/shallow to have the
RevWalk stop for shallow commits.
See RevWalkShallowTest for tests.
Bug: 394543
CQ: 6908
Change-Id: Ia5af1dab3fe9c7888f44eeecab1e1bcf2e8e48fe
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
The test checks that an error is thrown when trying to create the same
tag a second time.
Change-Id: I4ed2f6c997587f0ea23bd26a32fb64a2d48a980e
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
If a client attempts to create a branch that already exists on the
remote side, tell them "already exists" rather than repeat lots of
information about the reference. Previously the error looked like:
! [remote rejected] tags/1.3.1 -> 1.3.1 (Ref Ref[refs/tags/1.3.1=e3857ee05...] already exists)
Now it will simply say:
! [remote rejected] tags/1.3.1 -> 1.3.1 (already exists)
Change-Id: I96fc67ca8b650052de6e662449a3c5bc8bbc010b
Instead of just returning null when something was not parseable we
should throw a real ParseException. This allows us to distinguish
between specifications which are unparseable and those which represent
no date (e.g. "never").
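The resulting behavior, sketched against
org.eclipse.jgit.util.GitDateParser (assuming a null calendar is
acceptable when no relative date is involved):

try {
  Date date = GitDateParser.parse("2007-05-21", null); // parses to a date
  Date never = GitDateParser.parse("never", null);     // parses, but represents no date
  GitDateParser.parse("not a date", null);             // now throws ParseException
} catch (ParseException e) {
  // unparseable specification, no longer silently null
}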
Change-Id: Ib3c1aa64b65ed0e0270791a365f2fa72ab78a3f4
Implements a garbage collector for FileRepositories. Main ideas are
copied from the garbage collector for DFS based repos
(DfsGarbageCollector). Added functionalities are
- pruning loose objects
- handling of the index
- packing refs
- handling of reflogs (objects referenced from the reflog will not be
pruned)
These are features of a GC which are not handled in this change and
which should come with subsequent changes:
- unpacking packed objects into loose objects (so that pruning packed
objects doesn't delete them until they are older than two weeks)
- expiration of reflogs
- support for configuration parameters (e.g. gc.pruneExpire)
Change-Id: I14ea5cb7e0fd1b5c50b994fd77f4e05bfbb9d911
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
When receiving a pack, data buffered after the pack can be restored
to the InputStream if the stream supports mark and reset.
Change-Id: If04915c32c91be28db8df7e8491ed3e9fe0e1608
It is not always appropriate to use the .gitmodules file from the
working tree, for example if reading the modules at a specific commit.
And sometimes it is impossible, as in a bare repository.
When using the static factory methods, automatically set up the
appropriate root tree so lazy loading of the config file reads from
the appropriate place. Leave the current behavior of looking in the
working tree as a fallback for the case of walking the index.
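A sketch of pinning the walk's configuration to a commit's tree (the
revision string is an assumption):

SubmoduleWalk walk = SubmoduleWalk.forIndex(repo);
walk.setRootTree(repo.resolve("HEAD^{tree}")); // read .gitmodules from HEAD, not the work tree
while (walk.next()) {
  String path = walk.getPath();
  String url = walk.getModulesUrl();
}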
Change-Id: I71b7ed3ba16c80b0adb8c5fd85b5c37fd4aef8eb
Let a Transport instance be opened with only a URI, for use in the
upcoming publish-subscribe feature.
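For example (a sketch; the URI is an assumption):

Transport tn = Transport.open(new URIish("https://example.com/repo.git"));
try {
  // open fetch/push connections against the remote without a local repository
} finally {
  tn.close();
}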
Change-Id: I391c60c10d034b5c1c0ef19b1f24a9ba76b17bb5
This extracts the logic for writing to the reflog from
RefDirectory into a new ReflogWriter class. This class
creates a public API for writing reflog entries similar
to ReflogReader for reading reflog entries.
The new command supports rewriting the stash's log to remove
a configured entry and then updating the stash ref to
the value at the bottom of the newly written log.
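A sketch of the new command (assuming the Git factory method is named
stashDrop):

ObjectId newTip = git.stashDrop()
  .setStashRef(0) // drop stash@{0}
  .call();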
Change-Id: Icfcbc70e838666769a742a94196eb8dc9c7efcc7
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
Applies the changes in a stashed commit to the local working
directory and index
Bug: 309355
Change-Id: I9fd5ede8affc7f0060ffa7c5cec34573b6fa2b1b
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
Adds a new command to stash the index and working directory
changes in a commit stored in refs/stash
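A sketch of using it together with the apply command above (assuming
the factory methods stashCreate and stashApply):

RevCommit stash = git.stashCreate().call(); // records index + work tree changes in refs/stash
git.stashApply().call();                    // re-applies the most recent stash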
Bug: 309355
Change-Id: I2ce85b1601b74b07e286a3f99feb358dfbdfe29c
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
A '.git' file in a repository's working tree root is now parsed
as a ref to a folder located elsewhere. This supports submodules
having their repository location outside of the parent repository's
working directory such as in the parent repository's '.git/modules'
directory.
This adds support to BaseRepositoryBuilder for repositories created
with the '--separate-git-dir' option specified to 'git init'.
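For example, a submodule working tree at 'sub' could contain a '.git'
file with the following content (the path is illustrative):

gitdir: ../.git/modules/sub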
Change-Id: I73c538f6d845bdbc0c4e2bce5a77f900cf36e1a9
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Interpret submodule URLs that start with './' or '../' as
relative to either the configured remote for the HEAD branch,
or 'origin', or the parent repository working directory if no
remote URL is configured.
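For example, with the HEAD branch's remote URL configured as
https://host/org/parent.git, a submodule URL of '../shared.git' would
resolve to https://host/org/shared.git (names here are illustrative).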
Bug: 368536
Change-Id: Id4985824023b75cd45cd64a4dd9d421166391e10
Adds the following commands:
- Add
- Init
- Status
- Sync
- Update
This also updates AddCommand so that added file patterns that are
submodules can be staged in the index.
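A sketch of the new porcelain calls (path and URI are assumptions):

git.submoduleAdd()
  .setPath("lib/example")
  .setURI("https://example.com/example.git")
  .call();
git.submoduleInit().call();
git.submoduleUpdate().call();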
Change-Id: Ie5112aa26430e5a2a3acd65a7b0e1d76067dc545
Signed-off-by: Kevin Sawicki <kevin@github.com>
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
This allows calling classes to handle lock failures
without checking against the message and also provides
access to the file that could not be locked.
Change-Id: I95bc59e1330a7af71ae3b0485c4516299193f504
Revision strings such as 'master@{0}' can now be resolved
by Repository.resolve by reading the reflog for the ref and
returning the commit for the entry number specified.
This still throws an exception for cases not supported
such as 'master@{yesterday}'.
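For example (assuming 'master' has at least two reflog entries):

ObjectId current = repo.resolve("master@{0}");  // most recent reflog entry
ObjectId previous = repo.resolve("master@{1}"); // the entry before that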
Change-Id: I6162777d6510e083565a77cac4545cda5a9aefb3
Throw a NoHeadException when Repository.getFullBranch
returns null
Bug: 351543
Change-Id: I666cd5b67781508a293ae553c6fe5c080c8f4d99
Signed-off-by: Kevin Sawicki <kevin@github.com>
ReceivePack (and PackParser) can be configured with the
maxObjectSizeLimit in order to prevent users from pushing too large
objects to Git. The limit check is applied to all object types
although it is most likely that a BLOB will exceed the limit. In all
cases the size of the object header is excluded from the object size
which is checked against the limit, as this is the size a BLOB
object would take in the working tree when checked out as a file.
When an object exceeds the maxObjectSizeLimit the receive-pack will
abort immediately.
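A sketch of configuring the limit on the server side (the 10 MiB value
is an assumption):

ReceivePack rp = new ReceivePack(repo);
rp.setMaxObjectSizeLimit(10 * 1024 * 1024); // reject any object larger than 10 MiB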
Delta objects (both offset and ref delta) are also checked against the
limit. However, for delta objects we will first check the size of the
inflated delta block against the maxObjectSizeLimit and abort
immediately if it exceeds the limit. In this case we do not even know
the exact size of the resolved delta object but we assume it will be
larger than the given maxObjectSizeLimit as delta is generally only
chosen if the delta can copy more data from the base object than the
delta needs to insert or needs to represent the copy ranges. Aborting
early, in this case, avoids unnecessary inflating of the (huge) delta
block.
Unfortunately, it is too expensive (especially for a large delta) to
compute SHA-1 of an object that causes the receive-pack to abort.
This would decrease the value of this feature whose main purpose is to
protect server resources from users pushing huge objects. Therefore
we don't report the SHA-1 in the error message.
Change-Id: I177ef24553faacda444ed5895e40ac8925ca0d1e
Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Double ' characters are needed for variables to appear in
single quotes. Variables surrounded with a single ' will
not be replaced when formatted.
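For example, with java.text.MessageFormat:

// doubled quotes yield a literal quote around the substituted variable
MessageFormat.format("Cannot lock ''{0}''", "f.lock"); // -> Cannot lock 'f.lock'
// a single quote suppresses formatting of everything it encloses
MessageFormat.format("Cannot lock '{0}'", "f.lock");   // -> Cannot lock {0}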
Change-Id: I0182c1f679ba879ca19dd81bf46924f415dc6003
Signed-off-by: Kevin Sawicki <kevin@github.com>
Exposes essentially the same state machine to the programmer as is
exposed to the client via a ProgressMonitor, using a wrapper around
beginTask()/endTask().
Change-Id: Ic3622b4acea65d2b9b3551c668806981fa7293e3
In practice the DHT storage layer has not been performing as well as
large scale server environments want to see from a Git server.
The performance of the DHT schema degrades rapidly as small changes
are pushed into the repository due to the chunk size being less than
1/3 of the pushed pack size. Small chunks cause poor prefetch
performance during reading, and require significantly longer prefetch
lists inside of the chunk meta field to work around the small size.
The DHT code is very complex (>17,000 lines of code) and is very
sensitive to the underlying database round-trip time, as well as the
way objects were written into the pack stream that was chunked and
stored on the database. A poor pack layout (from any version of C Git
prior to Junio reworking it) can cause the DHT code to be unable to
enumerate the objects of the linux-2.6 repository in a reasonable
amount of time.
Performing a clone from a DHT stored repository of 2 million objects
takes 2 million row lookups in the DHT to locate the OBJECT_INDEX row
for each object being cloned. This is very difficult for some DHTs to
sustain: even at 5000 rows/second the lookup stage alone takes 6 minutes
(on a local filesystem, this is almost too fast to bother measuring).
Some servers like Apache Cassandra just fall over and cannot complete
the 2 million lookups in rapid fire.
On a ~400 MiB repository, the DHT schema has an extra 25 MiB of
redundant data that gets downloaded to the JGit process, and that is
before you consider the cost of the OBJECT_INDEX table also being
fully loaded, which is at least 223 MiB of data for the linux kernel
repository. In the DHT schema answering a `git clone` of the ~400 MiB
linux kernel needs to load 248 MiB of "index" data from the DHT, in
addition to the ~400 MiB of pack data that gets sent to the client.
This is 193 MiB more data to be accessed than the native filesystem
format, but it needs to come over a much smaller pipe (local Ethernet
typically) than the local SATA disk drive.
I also never got around to writing the "repack" support for the DHT
schema, as it turns out to be fairly complex to safely repack data in
the repository while also trying to minimize the amount of changes
made to the database, due to very common limitations on database
mutation rates.
This new DFS storage layer fixes a lot of those issues by taking the
simple approach for storing relatively standard Git pack and index
files on an abstract filesystem. Packs are accessed by an in-process
buffer cache, similar to the WindowCache used by the local filesystem
storage layer. Unlike the local file IO, there are some assumptions
that the storage system has relatively high latency and no concept of
"file handles". Instead it looks at the file more like HTTP byte range
requests, where a read channel is a simply a thunk to trigger a read
request over the network.
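The smallest concrete backend is an in-memory one; a sketch (assuming
an InMemoryRepository-style implementation that keeps pack data in
byte arrays):

DfsRepositoryDescription desc = new DfsRepositoryDescription("test");
InMemoryRepository repo = new InMemoryRepository(desc);
// repo behaves like any other Repository; its "files" live on the DFS abstraction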
The DFS code in this change is still abstract; it does not store on
any particular filesystem, but is fairly well suited to Amazon S3
or Apache Hadoop HDFS. Storing packs directly on HDFS rather than
HBase removes a layer of abstraction, as most HBase row reads turn
into an HDFS read.
Most of the DFS code in this change was blatantly copied from the
local filesystem code. Most parts should be refactored to be shared
between the two storage systems, but right now I am hesitant to do
this due to how well tuned the local filesystem code currently is.
Change-Id: Iec524abdf172e9ec5485d6c88ca6512cd8a6eafb