Update the PackWriter to support writing out pack bitmap indexes,
a parallel ".bitmap" file to the ".pack" file.
Bitmaps are selected for commits at intervals of 1 to 5,000 commits
for each unique path from the start. The most recent 100 commits are
all bitmapped. The next 19,000 commits have a bitmap every 100
commits. The remaining commits have a bitmap every 5,000 commits.
Commits with more than 1 parent are preferred over ones
with 1 or fewer. Furthermore, previously computed bitmaps are reused,
if the previous entry had the reuse flag set, which is set when the
bitmap was placed at the max allowed distance.
Bitmaps are used to speed up the counting phase when packing, for
requests that are not shallow. The PackWriterBitmapWalker uses
a RevFilter to proactively mark commits with RevFlag.SEEN, when
they appear in a bitmap. The walker produces the full closure
of reachable ObjectIds, given the collection of starting ObjectIds.
For fetch requests, two ObjectWalks are executed to compute the
ObjectIds reachable from the haves and from the wants. The
ObjectIds that need to be written are determined by taking all the
resulting wants AND NOT the haves.
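As a rough sketch of that set algebra with the javaewah API (the bit
positions are illustrative only):

  import com.googlecode.javaewah.EWAHCompressedBitmap;

  // Bit positions index the pack's offset-sorted ObjectId list.
  EWAHCompressedBitmap haves = EWAHCompressedBitmap.bitmapOf(0, 1, 2);
  EWAHCompressedBitmap wants = EWAHCompressedBitmap.bitmapOf(1, 2, 3, 4);
  // Write everything wanted that the client does not already have.
  EWAHCompressedBitmap toWrite = wants.andNot(haves); // bits {3, 4}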
For clone requests, we get cached pack support for "free" since
it is possible to determine if all of the ObjectIds in a pack file
are included in the resulting list of ObjectIds to write.
On my machine, the best times for clones and fetches of the linux
kernel repository (with about 2.6M objects and 300K commits) are
tabulated below:
Operation                    Index V2               Index VE003
Clone                        37530ms (524.06 MiB)      82ms (524.06 MiB)
Fetch (1 commit back)           75ms                  107ms
Fetch (10 commits back)        456ms (269.51 KiB)     341ms (265.19 KiB)
Fetch (100 commits back)       449ms (269.91 KiB)     337ms (267.28 KiB)
Fetch (1000 commits back)     2229ms ( 14.75 MiB)     189ms ( 14.42 MiB)
Fetch (10000 commits back)    2177ms ( 16.30 MiB)     254ms ( 15.88 MiB)
Fetch (100000 commits back)  14340ms (185.83 MiB)    1655ms (189.39 MiB)
Change-Id: Icdb0cdd66ff168917fb9ef17b96093990cc6a98d
A pack bitmap index is an additional index of compressed
bitmaps of the object graph. Furthermore, a logical API for the index
functionality is included, as it is expected to be used by the
PackWriter.
Compressed bitmaps are created using the javaewah library, which is a
word-aligned compressed variant of the Java bitset class based on
run-length encoding. The library only works with positive integer
values. Thus, the maximum number of ObjectIds in a pack file that
this index can currently support is limited to Integer.MAX_VALUE.
Every ObjectId is given an integer mapping. The integer is the
position of the ObjectId in the complete ObjectId list, sorted
by offset, for the pack file. That integer is what the bitmaps
use to reference the ObjectId. Currently, the new index format can
only be used with pack files that contain a complete closure of the
object graph, e.g. the result of a garbage collection.
The index file includes four bitmaps for the Git object types, i.e.
commits, trees, blobs, and tags. In addition, a collection of
bitmaps keyed by an ObjectId is also included. The bitmap for each entry
in the collection represents the full closure of ObjectIds reachable
from the keyed ObjectId (including the keyed ObjectId itself). The
bitmaps are further compressed by XORing the current bitmaps against
prior bitmaps in the index, and selecting the smallest representation.
The XOR'd bitmap and offset from the current entry to the position
of the bitmap to XOR against is the actual representation of the entry
in the index file. Each entry contains one byte, which is currently
used to note whether the bitmap should be blindly reused.
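A minimal sketch of the XOR idea with javaewah (the values are
illustrative only):

  import com.googlecode.javaewah.EWAHCompressedBitmap;

  EWAHCompressedBitmap current = EWAHCompressedBitmap.bitmapOf(0, 1, 2, 5);
  EWAHCompressedBitmap prior = EWAHCompressedBitmap.bitmapOf(0, 1, 2, 4);
  // Nearby commits reach mostly the same objects, so the XOR is mostly
  // zero bits and compresses very well under run-length encoding.
  EWAHCompressedBitmap xored = current.xor(prior);
  EWAHCompressedBitmap smallest =
      xored.sizeInBytes() < current.sizeInBytes() ? xored : current;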
Change-Id: Id328724bf6b4c8366a088233098c18643edcf40f
Extend ResolveMerger with RecursiveMerger to merge two tips
that have up to 200 bases.
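A usage sketch, assuming the strategy is registered as
MergeStrategy.RECURSIVE and that db, tipA, and tipB are in scope:

  import org.eclipse.jgit.merge.MergeStrategy;
  import org.eclipse.jgit.merge.Merger;

  Merger merger = MergeStrategy.RECURSIVE.newMerger(db);
  boolean clean = merger.merge(tipA, tipB); // false when conflicts remain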
Bug: 380314
CQ: 6854
Change-Id: I6292bb7bda55c0242a448a94956f2d6a94fddbaa
Also-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Instead of the complicated strange stuff, implement stash
apply as cherry-pick.
Provided there are no conflicts and it is requested that
the index should be applied, perform yet another cherry-pick,
but discard the results if that would result in conflicts.
Bug: 376035
Change-Id: I553f3a753e0124b102a51f8edbb53ddeff2912e2
This adds a new optional TreeFilter[] argument to DiffEntry.scan. All
filters will be checked during the scan to determine if an entry should
be "marked" with regard to that filter.
After having called scan, the user can then call isMarked(int) on the
entries to find out whether they matched the TreeFilter with the passed
index.
An example use case for this is in the file diff viewer of EGit's
History view, where we'd like to highlight entries that are matching the
current filter.
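A sketch of the intended usage (the suffix filter and the highlight()
call are just an illustration):

  import java.util.List;
  import org.eclipse.jgit.diff.DiffEntry;
  import org.eclipse.jgit.treewalk.filter.PathSuffixFilter;
  import org.eclipse.jgit.treewalk.filter.TreeFilter;

  TreeFilter[] markers = { PathSuffixFilter.create(".java") };
  List<DiffEntry> entries = DiffEntry.scan(treeWalk, false, markers);
  for (DiffEntry entry : entries)
    if (entry.isMarked(0)) // matched the filter at index 0
      highlight(entry);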
See EGit change I03da4b38d1591495cb290909f0e4c6e52270e97f.
Bug: 393610
Change-Id: Icf911fe6fca131b2567514f54d66636a44561af1
Signed-off-by: Robin Stocker <robin@nibor.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Previously, the Dfs GC excluded objects from packs by passing a
previously written index to the PackWriter. Reading back a file on
Dfs is slow. Instead, allow the PackWriter to expose the objects
included in a pack and forward that to invocations of excludeObjects().
Change-Id: I377cb4ab07f62cf790505e1eeb0b2efe81897c79
Cherry-pick has been fixed, but even though revert does
basically the same thing, the fixes were not carried over here.
- Recognize the revert-states, analogous to the cherry picking states
- Make reset handle a revert-in-progress
- Update REVERT_HEAD and MERGE_MSG when revert fails due to conflicts
- Clear revert state on commit and reset
- Format the message similarly to how cherry-pick does. This is
not exactly how C Git does it.
The interface is still not the same as for cherry-picking.
Change-Id: I8ea956fcbc9526d62a2365360feea23a9280eba3
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
StartGenerator now processes .git/shallow to have the
RevWalk stop for shallow commits.
See RevWalkShallowTest for tests.
Bug: 394543
CQ: 6908
Change-Id: Ia5af1dab3fe9c7888f44eeecab1e1bcf2e8e48fe
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
The test checks that an error is thrown when trying to create the same
tag a second time.
Change-Id: I4ed2f6c997587f0ea23bd26a32fb64a2d48a980e
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
If a client attempts to create a branch that already exists on the
remote side, tell them "already exists" rather than repeat lots of
information about the reference. Previously the error looked like:
! [remote rejected] tags/1.3.1 -> 1.3.1 (Ref Ref[refs/tags/1.3.1=e3857ee05...] already exists)
Now it will simply say:
! [remote rejected] tags/1.3.1 -> 1.3.1 (already exists)
Change-Id: I96fc67ca8b650052de6e662449a3c5bc8bbc010b
Instead of just returning null when something was not parseable, we
should throw a real ParseException. This allows us to distinguish
between specifications which are unparseable and those which represent
no date (e.g. "never").
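Assuming the parser in question is GitDateParser, calling code would
look roughly like:

  import java.text.ParseException;
  import java.util.Date;
  import org.eclipse.jgit.util.GitDateParser;

  try {
    Date expiry = GitDateParser.parse("2.weeks.ago", null);
  } catch (ParseException e) {
    // the specification could not be parsed at all
  }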
Change-Id: Ib3c1aa64b65ed0e0270791a365f2fa72ab78a3f4
Implements a garbage collector for FileRepositories. Main ideas are
copied from the garbage collector for DFS-based repos
(DfsGarbageCollector). The added functionality includes:
- pruning loose objects
- handling of the index
- packing refs
- handling of reflogs (objects referenced from the reflog will not be
pruned)
These are features of a GC which are not handled in this change and
which should come with subsequent changes:
- unpacking packed objects into loose objects (so that objects pruned
from packs are not deleted until they are older than two weeks)
- expiration of reflogs
- support for configuration parameters (e.g. gc.pruneExpire)
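A rough usage sketch, assuming the collector is exposed as a GC class
constructed from a FileRepository:

  import org.eclipse.jgit.storage.file.FileRepository;
  import org.eclipse.jgit.storage.file.GC;

  GC gc = new GC(fileRepository);
  gc.packRefs(); // pack loose refs into packed-refs
  gc.gc();       // repack objects and prune unreferenced loose objects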
Change-Id: I14ea5cb7e0fd1b5c50b994fd77f4e05bfbb9d911
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
When receiving a pack, data buffered after the pack can be restored
to the InputStream if the stream supports mark and reset.
Change-Id: If04915c32c91be28db8df7e8491ed3e9fe0e1608
It is not always appropriate to use the .gitmodules file from the
working tree, for example if reading the modules at a specific commit.
And sometimes it is impossible, as in a bare repository.
When using the static factory methods, automatically set up the
appropriate root tree so lazy loading of the config file reads from
the appropriate place. Leave the current behavior of looking in the
working tree as a fallback for the case of walking the index.
Change-Id: I71b7ed3ba16c80b0adb8c5fd85b5c37fd4aef8eb
Let a Transport instance be opened with only a URI, for use in the
upcoming publish-subscribe feature.
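For example (the URL is a placeholder):

  import org.eclipse.jgit.transport.Transport;
  import org.eclipse.jgit.transport.URIish;

  Transport transport =
      Transport.open(new URIish("https://example.com/repo.git"));
  try {
    // use the transport without a local repository
  } finally {
    transport.close();
  }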
Change-Id: I391c60c10d034b5c1c0ef19b1f24a9ba76b17bb5
This extracts the logic for writing to the reflog from
RefDirectory into a new ReflogWriter class. This class
creates a public API for writing reflog entries similar
to ReflogReader for reading reflog entries.
The new command supports rewriting the stash's log to remove
a configured entry, followed by updating the stash ref to
the value at the bottom of the newly written log.
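A brief sketch of the porcelain call, assuming the command is surfaced
as stashDrop (repoDir is a placeholder):

  import org.eclipse.jgit.api.Git;

  Git git = Git.open(repoDir);
  git.stashDrop().setStashRef(0).call(); // drop the newest stash entry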
Change-Id: Icfcbc70e838666769a742a94196eb8dc9c7efcc7
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
Applies the changes in a stashed commit to the local working
directory and index
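A usage sketch (repoDir is a placeholder):

  import org.eclipse.jgit.api.Git;
  import org.eclipse.jgit.lib.ObjectId;

  Git git = Git.open(repoDir);
  ObjectId applied = git.stashApply().call(); // applies refs/stash by default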
Bug: 309355
Change-Id: I9fd5ede8affc7f0060ffa7c5cec34573b6fa2b1b
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
Adds a new command to stash the index and working directory
changes in a commit stored in refs/stash
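A usage sketch (repoDir is a placeholder):

  import org.eclipse.jgit.api.Git;
  import org.eclipse.jgit.revwalk.RevCommit;

  Git git = Git.open(repoDir);
  RevCommit stash = git.stashCreate().call(); // null when nothing to stash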
Bug: 309355
Change-Id: I2ce85b1601b74b07e286a3f99feb358dfbdfe29c
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
A '.git' file in a repository's working tree root is now parsed
as a ref to a folder located elsewhere. This supports submodules
having their repository location outside of the parent repository's
working directory such as in the parent repository's '.git/modules'
directory.
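For example, a submodule checked out at 'sub' might have a '.git' file
containing the single line:

  gitdir: ../.git/modules/sub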
This adds support to BaseRepositoryBuilder for repositories created
with the '--separate-git-dir' option specified to 'git init'.
Change-Id: I73c538f6d845bdbc0c4e2bce5a77f900cf36e1a9
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Interpret submodule URLs that start with './' or '../' as
relative to either the configured remote for the HEAD branch,
or 'origin', or the parent repository working directory if no
remote URL is configured.
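For example (an illustration only): if remote.origin.url is
https://host/group/parent.git, a submodule URL of '../shared.git'
resolves to https://host/group/shared.git.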
Bug: 368536
Change-Id: Id4985824023b75cd45cd64a4dd9d421166391e10
Adds the following commands:
- Add
- Init
- Status
- Sync
- Update
This also updates AddCommand so that file patterns matching
submodules can be staged in the index.
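A quick sketch of the resulting porcelain calls (path and URL are
placeholders):

  import org.eclipse.jgit.api.Git;

  Git git = Git.open(repoDir);
  git.submoduleAdd()
      .setPath("lib/shared")
      .setURI("https://example.com/shared.git")
      .call();
  git.submoduleInit().call();
  git.submoduleUpdate().call();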
Change-Id: Ie5112aa26430e5a2a3acd65a7b0e1d76067dc545
Signed-off-by: Kevin Sawicki <kevin@github.com>
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
This allows calling classes to handle lock failures
without checking against the message and also provides
access to the file that could not be locked.
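A sketch of the intended handling, assuming the exception type is
LockFailedException with a getFile() accessor:

  import java.io.File;
  import org.eclipse.jgit.errors.LockFailedException;

  try {
    updateRef(); // hypothetical operation that takes a file lock
  } catch (LockFailedException e) {
    File locked = e.getFile(); // the file that could not be locked
  }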
Change-Id: I95bc59e1330a7af71ae3b0485c4516299193f504
Revision strings such as 'master@{0}' can now be resolved
by Repository.resolve by reading the reflog for the ref and
returning the commit for the entry number specified.
This still throws an exception for cases not supported
such as 'master@{yesterday}'.
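For example:

  import org.eclipse.jgit.lib.ObjectId;

  ObjectId newest = repository.resolve("master@{0}");   // newest reflog entry
  ObjectId previous = repository.resolve("master@{1}"); // one entry back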
Change-Id: I6162777d6510e083565a77cac4545cda5a9aefb3
Throw a NoHeadException when Repository.getFullBranch
returns null
Bug: 351543
Change-Id: I666cd5b67781508a293ae553c6fe5c080c8f4d99
Signed-off-by: Kevin Sawicki <kevin@github.com>
ReceivePack (and PackParser) can be configured with the
maxObjectSizeLimit in order to prevent users from pushing too large
objects to Git. The limit check is applied to all object types
although it is most likely that a BLOB will exceed the limit. In all
cases the size of the object header is excluded from the object size
which is checked against the limit, as this is the size that a BLOB
object would take in the working tree when checked out as a file.
When an object exceeds the maxObjectSizeLimit the receive-pack will
abort immediately.
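For example (the limit value is illustrative):

  import org.eclipse.jgit.transport.ReceivePack;

  ReceivePack rp = new ReceivePack(repository);
  rp.setMaxObjectSizeLimit(10 * 1024 * 1024); // reject objects over 10 MiB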
Delta objects (both offset and ref delta) are also checked against the
limit. However, for delta objects we will first check the size of the
inflated delta block against the maxObjectSizeLimit and abort
immediately if it exceeds the limit. In this case we do not even know
the exact size of the resolved delta object, but we assume it will be
larger than the given maxObjectSizeLimit, as a delta is generally only
chosen if the delta can copy more data from the base object than the
delta needs to insert or needs to represent the copy ranges. Aborting
early, in this case, avoids unnecessary inflating of the (huge) delta
block.
Unfortunately, it is too expensive (especially for a large delta) to
compute SHA-1 of an object that causes the receive-pack to abort.
This would decrease the value of this feature whose main purpose is to
protect server resources from users pushing huge objects. Therefore
we don't report the SHA-1 in the error message.
Change-Id: I177ef24553faacda444ed5895e40ac8925ca0d1e
Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Double ' characters are needed for variables to appear in
single quotes. Variables surrounded with a single ' will
not be replaced when formatted.
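For example, with java.text.MessageFormat:

  import java.text.MessageFormat;

  // Doubled quotes survive formatting as one literal quote:
  MessageFormat.format("Cannot lock ''{0}''", "f"); // => Cannot lock 'f'
  // A lone single quote suppresses substitution of what follows:
  MessageFormat.format("Cannot lock '{0}'", "f");   // => Cannot lock {0}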
Change-Id: I0182c1f679ba879ca19dd81bf46924f415dc6003
Signed-off-by: Kevin Sawicki <kevin@github.com>
Exposes essentially the same state machine to the programmer as is
exposed to the client via a ProgressMonitor, using a wrapper around
beginTask()/endTask().
Change-Id: Ic3622b4acea65d2b9b3551c668806981fa7293e3
In practice the DHT storage layer has not been performing as well as
large scale server environments want to see from a Git server.
The performance of the DHT schema degrades rapidly as small changes
are pushed into the repository due to the chunk size being less than
1/3 of the pushed pack size. Small chunks cause poor prefetch
performance during reading, and require significantly longer prefetch
lists inside of the chunk meta field to work around the small size.
The DHT code is very complex (>17,000 lines of code) and is very
sensitive to the underlying database round-trip time, as well as the
way objects were written into the pack stream that was chunked and
stored on the database. A poor pack layout (from any version of C Git
prior to Junio reworking it) can cause the DHT code to be unable to
enumerate the objects of the linux-2.6 repository in a reasonable
amount of time.
Performing a clone from a DHT stored repository of 2 million objects
takes 2 million row lookups in the DHT to locate the OBJECT_INDEX row
for each object being cloned. This is very difficult for some DHTs to
scale to; even at 5000 rows/second the lookup stage alone takes 6 minutes
(on a local filesystem, this is almost too fast to bother measuring).
Some servers like Apache Cassandra just fall over and cannot complete
the 2 million lookups in rapid fire.
On a ~400 MiB repository, the DHT schema has an extra 25 MiB of
redundant data that gets downloaded to the JGit process, and that is
before you consider the cost of the OBJECT_INDEX table also being
fully loaded, which is at least 223 MiB of data for the linux kernel
repository. In the DHT schema answering a `git clone` of the ~400 MiB
linux kernel needs to load 248 MiB of "index" data from the DHT, in
addition to the ~400 MiB of pack data that gets sent to the client.
This is 193 MiB more data to be accessed than the native filesystem
format, but it needs to come over a much smaller pipe (local Ethernet
typically) than the local SATA disk drive.
I also never got around to writing the "repack" support for the DHT
schema, as it turns out to be fairly complex to safely repack data in
the repository while also trying to minimize the number of changes
made to the database, due to very common limitations on database
mutation rates.
This new DFS storage layer fixes a lot of those issues by taking the
simple approach for storing relatively standard Git pack and index
files on an abstract filesystem. Packs are accessed by an in-process
buffer cache, similar to the WindowCache used by the local filesystem
storage layer. Unlike the local file IO, there are some assumptions
that the storage system has relatively high latency and no concept of
"file handles". Instead it looks at the file more like HTTP byte range
requests, where a read channel is a simply a thunk to trigger a read
request over the network.
The DFS code in this change is still abstract, it does not store on
any particular filesystem, but is fairly well suited to Amazon S3
or Apache Hadoop HDFS. Storing packs directly on HDFS rather than
HBase removes a layer of abstraction, as most HBase row reads turn
into an HDFS read.
Most of the DFS code in this change was blatantly copied from the
local filesystem code. Most parts should be refactored to be shared
between the two storage systems, but right now I am hesitant to do
this due to how well tuned the local filesystem code currently is.
Change-Id: Iec524abdf172e9ec5485d6c88ca6512cd8a6eafb
Though it may seem less precise, "0 months" looks bad and the reference
Git implementation also does not display "0 months".
Change-Id: I488e9c97656f9941788ae88d7c5c1562ab6c26f0
Adds a method to the DiffEntry class that allows specifying whether
changed trees are included in the scanning result list. By default
changed trees aren't added, but in some cases having the changed trees
would be useful. Also adds a check for the tree count in TreeWalk;
when it is different from two, an IllegalArgumentException is thrown.
This change is required by EGit change
I7ddb21e7ff54333dd6d7ace3209bbcf83da2b219
Change-Id: I5a680a73e1cffa18ade3402cc86008f46c1da1f1
Signed-off-by: Dariusz Luksza <dariusz@luksza.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When trying to clone into a folder that already contains a cloned
repository native git will fail with a message "fatal: destination path
'folder' already exists and is not an empty directory.". Now JGit will
also fail in this situation, throwing a JGitInternalException.
The test case was provided by Tomasz Zarna.
Bug: 347852
Change-Id: If9e9919a5f92d13cf038dc470c21ee5967322dac
Also-by: Tomasz Zarna <Tomasz.Zarna@pl.ibm.com>
Signed-off-by: Adrian Goerler <adrian.goerler@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
If an internal exception occurs while packing and the request
needs to abort, the HTTP response might already be committed due
to progress messages having already been delivered to the client.
This prevents UploadPackServlet from resetting the response and
sending back an HTTP 500 response.
Try to catch all exceptions and report internal errors over the
sideband stream or as an ERR command during the initial ACK/NAK
negotiation phase. This allows JGit to transmit an error message
that the user will receive on their console without needing to
worry about resetting the (already gone) HTTP response.
Change-Id: Ie393fb8bb55d2b79ab1276adf71c781c1807f9fe
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If a fetch or push needs to apply more than a few references
to the local repository it may take more than 0.25 seconds to
process all of the updates. This is especially true in the DHT
storage system during an initial push of a project with many tags.
The backend database may need to use a transaction to ensure each
tag reference creation is unique, and there may be large delays
caused by these transactions.
Change-Id: Ib11a077adfbd525253e425d327f2e2c2380804c7
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
BlameGenerator digs through history and discovers the origin of each
line of some result file. BlameResult consumes the stream of regions
created by the generator and lays them out in a table for applications
to display alongside source lines.
Applications may optionally push in the working tree copy of a file
using the push(String, byte[]) method, allowing the application to
receive accurate line annotations for the working tree version. Lines
that are uncommitted (difference between HEAD and working tree) will
show up with the description given by the application as the author,
or "Not Committed Yet" as a default string.
Applications may also run the BlameGenerator in reverse mode using the
reverse(AnyObjectId, AnyObjectId) method instead of push(). When
running in the reverse mode the generator annotates lines by the
commit they are removed in, rather than the commit they were added in.
This allows a user to discover where a line disappeared from when they
are looking at an older revision in the repository. For example:
blame --reverse 16e810b2..master -L 1080, org.eclipse.jgit.test/tst/org/eclipse/jgit/storage/file/RefDirectoryTest.java
( 1080) }
2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1081)
2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1082) /**
2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1083) * Kick the timestamp of a local file.
Above we learn that line 1080 (a closing curly brace of the prior
method) still exists in branch master, but the Javadoc comment below
it has been removed by Christian Halstrick on May 20th as part of
commit 2302a6d3. This result differs considerably from that of C
Git's blame --reverse feature. JGit tells the reader which commit
performed the delete, while C Git tells the reader the last commit
that still contained the line, leaving it an exercise to the reader
to discover the descendant that performed the removal.
This is still only a basic implementation. Quite notably it is
missing support for the smart block copy/move detection that the C
implementation of `git blame` is well known for. Despite being
incremental, the BlameGenerator can only be run once. After the
generator runs it cannot be reused. A better implementation would
support applications browsing through history efficiently.
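A minimal usage sketch (path and revision are placeholders):

  import org.eclipse.jgit.blame.BlameGenerator;
  import org.eclipse.jgit.blame.BlameResult;

  BlameGenerator gen = new BlameGenerator(repository, "README");
  gen.push(null, repository.resolve("HEAD"));
  BlameResult blame = BlameResult.create(gen); // wraps the generator
  blame.computeAll();
  String author = blame.getSourceAuthor(0).getName(); // first result line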
In regards to CQ 5110, only a little of the original code survives.
CQ: 5110
Bug: 306161
Change-Id: I84b8ea4838bb7d25f4fcdd540547884704661b8f
Signed-off-by: Kevin Sawicki <kevin@github.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>