ReceivePack (and PackParser) can be configured with the
maxObjectSizeLimit in order to prevent users from pushing overly
large objects to Git. The limit check is applied to all object types
although it is most likely that a BLOB will exceed the limit. In all
cases the size of the object header is excluded from the size that
is checked against the limit, as this is the size a BLOB object would
take in the working tree when checked out as a file.
When an object exceeds the maxObjectSizeLimit the receive-pack will
abort immediately.
Delta objects (both offset and ref delta) are also checked against the
limit. However, for delta objects we will first check the size of the
inflated delta block against the maxObjectSizeLimit and abort
immediately if it exceeds the limit. In this case we do not even
know the exact size of the resolved delta object, but we assume it will be
larger than the given maxObjectSizeLimit as delta is generally only
chosen if the delta can copy more data from the base object than the
delta needs to insert or needs to represent the copy ranges. Aborting
early, in this case, avoids unnecessarily inflating the (huge) delta
block.
Unfortunately, it is too expensive (especially for a large delta) to
compute the SHA-1 of an object that causes the receive-pack to abort.
This would decrease the value of this feature whose main purpose is to
protect server resources from users pushing huge objects. Therefore
we don't report the SHA-1 in the error message.
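A minimal server-side sketch of applying the limit (the repository
path below is only a placeholder):

  import java.io.File;

  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.storage.file.FileRepositoryBuilder;
  import org.eclipse.jgit.transport.ReceivePack;

  public class MaxObjectSizeExample {
    public static void main(String[] args) throws Exception {
      Repository repo = new FileRepositoryBuilder()
          .setGitDir(new File("/path/to/server.git")) // placeholder
          .build();
      ReceivePack rp = new ReceivePack(repo);
      // Any object whose inflated size (header excluded) exceeds
      // 10 MiB causes the push to abort immediately.
      rp.setMaxObjectSizeLimit(10 * 1024 * 1024);
      // rp.receive(in, out, messages) would then serve the push.
      repo.close();
    }
  }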
Change-Id: I177ef24553faacda444ed5895e40ac8925ca0d1e
Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This allows callers to determine why the revert
did not complete successfully
Change-Id: Ie44bb8523cac388b63748bc69ebdd3c3a3665d06
Signed-off-by: Kevin Sawicki <kevin@github.com>
The DfsReader must offer every representation of an object that
exists on the local repository when PackWriter asks for them. This
is necessary to identify objects in the thin pack part that are also
in the cached pack that will be appended onto the end of the stream.
Without looking at all alternatives, PackWriter may pack the same
object twice (once in the thin section, again in the cached base
pack). This may cause the command line C version to go into an
infinite loop when repacking the resulting repository, as it may see
a delta chain cycle with one of those duplicate copies of the object.
Previously the DfsReader tried to avoid looking at packs that it
might not care about, but this is insufficient, as all versions
must be considered during pack generation.
Change-Id: Ibf4a3e8ea5c42aef16404ffc42a5781edd97b18e
Consider two objects A->B where A uses B as a delta base, and these
are in the same source pack file ordered as "A B".
If cached packs are enabled and B is also in the cached pack that
will be appended onto the end of the thin pack, and both A, B are
supposed to be in the thin pack, PackWriter must consider the fact
that A's base B is an edge object that claims to be part of the
new pack, but is actually "external" and cannot be written first.
If the object reuse system considers B candidates first, this bug
does not arise, as B will be marked as edge due to it existing in
the cached pack. When the A candidates are later examined, A sees a
valid delta base is available as an edge, and will not later try to
"write base first" during the writing phase.
However, when the reuse system considers A candidates first, it sees
that B will be in the outgoing pack, as it is still part of the thin
pack, and arranges for A to be written first. Later when B switches
from being in-pack to being an edge object (as it is part of the
cached pack) the pointer in A does not get its type changed from
ObjectToPack to ObjectId, so A thinks B is non-edge.
We work around this case by also checking that the delta base B
is non-edge before writing the object to the pack. Later when A
writes its object header, delta base B's ObjectToPack will have
an offset == 0, which makes isWritten() = false, and the OBJ_REF
delta format will be used for A's header. This will be resolved by
the client to the copy of B that appears in the later cached pack.
Change-Id: Ifab6bfdf3c0aa93649468f49bcf91d67f90362ca
Packs can contain up to 2^32-1 objects, which exceeds the range of a
Java int. Try harder to accept higher object counts in some cases by
using long more often when we are working with the object count value.
This is a trivial refactoring; we may have to make even more changes
to the object handling code to support more than 2^31-1 objects.
Change-Id: I8cd8146e97cd1c738ad5b48fa9e33804982167e7
Annotated tags are relatively rare and currently are scheduled in a
pack file near the commits, decreasing the time it takes to resolve
client requests reading tags as part of a history traversal.
Putting them first before the commits allows the storage system to
page in the tag area, and have it relatively hot in the LRU when
the nearby commit area gets examined too. Later looking at the
tree and blob data will pollute the cache, making it more likely
the tags are not loaded and would require file IO.
Change-Id: I425f1f63ef937b8447c396939222ea20fdda290f
This counter was always running 1 higher, because it incremented
after the queue was exhausted (and every object was processed). Move
increments to be after the queue has provided a result, to ensure
we do not show a higher in-progress count than total count.
Change-Id: I97f815a0492c0957300475af409b6c6260008463
Make the code more clear with a simple refactoring of the boolean
logic into a method that describes the condition we are looking
for on each pack file. A cached pack is possible if there exists
a tips collection, and the collection is non-empty.
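A sketch of what the extracted predicate might look like (class and
method names here are illustrative, not the actual code):

  import java.util.Set;

  class CachedPackCheck {
    // A pack is usable as a cached pack only when it carries a
    // non-empty collection of tip ids.
    static boolean canBeCachedPack(Set<String> tips) {
      return tips != null && !tips.isEmpty();
    }
  }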
Change-Id: I4ac42b0622b39d159a0f4f223e291c35c71f672c
Double ' characters are needed for variables to appear in
single quotes. Variables surrounded with a single ' will not be
replaced when formatted.
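For example, with java.text.MessageFormat:

  import java.text.MessageFormat;

  public class QuoteExample {
    public static void main(String[] args) {
      // A single ' quotes the text, so the variable is not replaced:
      System.out.println(MessageFormat.format("'{0}'", "master"));   // {0}
      // Doubled '' yields literal quotes around the replaced value:
      System.out.println(MessageFormat.format("''{0}''", "master")); // 'master'
    }
  }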
Change-Id: I0182c1f679ba879ca19dd81bf46924f415dc6003
Signed-off-by: Kevin Sawicki <kevin@github.com>
As the method name and its javadoc clearly state that this method can
return null, we can ignore this FindBugs warning.
Change-Id: I366435e26eda5d910f5d1a907db51f08efd4bb8c
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Stored in a weak concurrent hash map, which we clean up while iterating.
Usually the weak reference behavior should not be necessary because
PackWriters should be released with release(), but we still want to
avoid leaks when dealing with broken client code.
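A rough sketch of the pattern with simplified types (the registry
class below is illustrative only):

  import java.lang.ref.WeakReference;
  import java.util.Iterator;
  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;
  import java.util.function.Consumer;

  class WriterRegistry<T> {
    private final Map<WeakReference<T>, Boolean> instances =
        new ConcurrentHashMap<>();

    void register(T writer) {
      instances.put(new WeakReference<>(writer), Boolean.TRUE);
    }

    // Entries whose referent was garbage collected are pruned while
    // iterating, so writers leaked by broken callers eventually go away.
    void forEachLive(Consumer<T> action) {
      Iterator<WeakReference<T>> it = instances.keySet().iterator();
      while (it.hasNext()) {
        T writer = it.next().get();
        if (writer == null)
          it.remove();
        else
          action.accept(writer);
      }
    }
  }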
Change-Id: I337abb952ac6524f7f920fedf04065edf84d01d2
Memory usage is dominated by three terms:
- The maximum memory allocated to each delta window.
- The maximum size of a single file held in memory during delta search.
- ObjectToPack instances owned by the writer.
For the first two terms, rather than doing complex instrumentation of
the DeltaWindows, we just overestimate based on the config parameters
(though we may underestimate if the maximum size is not set).
For the ObjectToPack instances, we do some rough byte accounting of the
underlying Java object representation.
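A rough sketch of that estimate, using hypothetical parameter names
for the configured values:

  class PackMemoryEstimate {
    // All names are illustrative; the real accounting lives in the writer.
    static long estimate(int threads, long windowMemoryPerThread,
        long bigFileThreshold, long objectCount, long bytesPerObjectToPack) {
      long windowTerm = threads * windowMemoryPerThread;
      long fileTerm = bigFileThreshold; // underestimates if no limit is set
      long objectTerm = objectCount * bytesPerObjectToPack;
      return windowTerm + fileTerm + objectTerm;
    }
  }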
Change-Id: I23fe3cf9d260a91f1aeb6ea22d75af8ddb9b1939
Exposes essentially the same state machine to the programmer as is
exposed to the client via a ProgressMonitor, using a wrapper around
beginTask()/endTask().
Change-Id: Ic3622b4acea65d2b9b3551c668806981fa7293e3
Decorators need to know whether folders in the working tree contain only
untracked files. This change enhances IndexDiffFilter to report such
folders. This works only together with treewalks which operate in
default traversal mode. For treewalks which process entries in
postorder mode (files are walked before their parent folder is walked)
this detection doesn't work.
Bug: 359264
Change-Id: I9298d1e3ccac0aec8bbd4e8ac867bc06a5c89c9f
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Jens Baumgart <jens.baumgart@sap.com>
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
Mark the treeWalk as recursive; otherwise following renames only works
for toplevel files.
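The relevant call, in a minimal sketch (surrounding setup omitted):

  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.treewalk.TreeWalk;

  class RecursiveWalk {
    static TreeWalk newWalk(Repository repo) {
      TreeWalk tw = new TreeWalk(repo);
      // Without this only top-level entries are visited, so renames of
      // files inside subdirectories are never seen.
      tw.setRecursive(true);
      return tw;
    }
  }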
Bug: 302549
Change-Id: I70867928eadf332b0942f8bf6877a3acb3828c87
Signed-off-by: Carsten Pfeiffer <carsten.pfeiffer@gebit.de>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
Refactored the three common transport configuration options:
credentials provider, timeout, and transport config callback
into a new TransportCommand base class which is now extended
by all commands that use a Transport object during execution.
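For example, a command derived from TransportCommand can now be
configured roughly like this (URL, directory, and credentials are
placeholders):

  import java.io.File;

  import org.eclipse.jgit.api.Git;
  import org.eclipse.jgit.transport.UsernamePasswordCredentialsProvider;

  public class CloneWithTransportOptions {
    public static void main(String[] args) throws Exception {
      Git git = Git.cloneRepository()
          .setURI("https://example.com/repo.git")
          .setDirectory(new File("/tmp/repo"))
          .setCredentialsProvider(
              new UsernamePasswordCredentialsProvider("user", "secret"))
          .setTimeout(30) // seconds
          .setTransportConfigCallback(transport -> {
            // adjust the Transport before it is used, e.g. SSH settings
          })
          .call();
      git.close();
    }
  }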
Bug: 349188
Change-Id: I90c2c14fb4e3cc4712905158f9047153a0c235c2
Signed-off-by: Kevin Sawicki <kevin@github.com>
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
A few places were still using GitIndex. Replacing it was fairly
simple, but there is a difference in test outcome in
ReadTreeTest.testUntrackedConflicts. I believe the new behavior
is good, since we update neither the index nor the worktree.
Change-Id: I4be5357b7b3139dded17f77e07a140addb213ea7
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
This includes merging ReadTreeTest into DirCacheCheckoutTest and
converting IndexDiffTest to use DirCache only. The GitIndex specific
T0007GitIndex test remains.
GitIndex is deprecated. Let us speed up its demise by focusing the
DirCacheCheckout tests on using DirCache instead.
This also adds explicit deprecation comments to methods that depend
on GitIndex in Repository and TreeEntry. The latter is deprecated in
itself.
Change-Id: Id89262f7fbfee07871f444378f196ded444f2783
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
Any RuntimeException or Error in this block will leave the lock
held by the caller thread, which can later result in deadlock or
just cache requests hanging forever because they cannot get to
the lock object.
Wrap everything in try/finally to prevent the lock from hanging,
even though a RuntimeException or Error should never happen in
any of these code paths.
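The change boils down to the standard lock/try/finally idiom (class
and method names here are illustrative):

  import java.util.concurrent.locks.ReentrantLock;

  class ClockListGuard {
    private final ReentrantLock clockLock = new ReentrantLock();

    void reconfigure() {
      clockLock.lock();
      try {
        // ... mutate the cache's clock list ...
      } finally {
        // Runs even if a RuntimeException or Error escapes above, so
        // the lock can never be left held by the calling thread.
        clockLock.unlock();
      }
    }
  }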
Change-Id: Ibb3467f7ee4c06f617b737858b4be17b10d936e0
The cache starts with a single empty Ref that has no data, as the
clock list does not support being empty. When this Ref is removed,
the size has to be decremented from the associated DfsPackKey,
which was previously null. Make it always be non-null.
Change-Id: I2af99903e8039405ea6d67f383576ffa43839cff
Most unexpected exceptions are completely useless, yielding messages
like "null" or "3", or in the best case something reasonable but
still out of context.
Just declare the test as throwing an exception. That will retain
the full stack trace leading to the point of failure without using
a debugger or changing the code.
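For example, in a JUnit test (the test name is only illustrative):

  import org.junit.Test;

  public class ExampleTest {
    // Declaring "throws Exception" preserves the full stack trace of an
    // unexpected failure instead of a caught message like "null" or "3"
    // that has lost its context.
    @Test
    public void readsConfiguration() throws Exception {
      // ... exercise the code under test; no try/catch needed ...
    }
  }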
Change-Id: Id2454d328d1aa665606ae002de2c3805fe7baa8e
* changes:
DfsBlockCache: Update hits to not include contains()
Add a listener for changes to a DfsObjDatabase's pack files
Expose the reverse index size in the DfsPackDescription
Add a DfsPackFile method to get the number of cached bytes
Expose the list of pack files in the DfsBlockCache
Add a DFS repository description and reference it in each pack
Clarify the docstring of DfsBlockCache.reconfigure()
DFS: A storage layer for JGit
Intended for cross-request use, so only refers to
DfsRepositoryDescriptions rather than DfsRepositorys.
Change-Id: I2633e472c9264d91d632069f608d53d4bdd0fc09
Callers may want to inspect the contents of the cache, which this allows
them to do in a read-only fashion without any locking.
Change-Id: Ifd78e8ce34e26e5cc33e9dd61d70c593ce479ee0
Just as DfsPackDescription describes a pack but does not imply it is
open in memory, a DfsRepositoryDescription describes a repository at a
basic level without it necessarily being open.
Change-Id: I890b5fccdda12c1090cfabf4083b5c0e98d717f6
The docstring was copied from the local filesystem cache code, which
actually attempted to reconfigure the cache on the fly. The DFS cache is
designed to be "reconfigured" exactly once.
Change-Id: Ia0b01f5d6b6b3d3a68d65a5c229ff67c1cede5bc
In practice the DHT storage layer has not been performing as well as
large scale server environments want to see from a Git server.
The performance of the DHT schema degrades rapidly as small changes
are pushed into the repository due to the chunk size being less than
1/3 of the pushed pack size. Small chunks cause poor prefetch
performance during reading, and require significantly longer prefetch
lists inside of the chunk meta field to work around the small size.
The DHT code is very complex (>17,000 lines of code) and is very
sensitive to the underlying database round-trip time, as well as the
way objects were written into the pack stream that was chunked and
stored on the database. A poor pack layout (from any version of C Git
prior to Junio reworking it) can cause the DHT code to be unable to
enumerate the objects of the linux-2.6 repository in a completable
time scale.
Performing a clone from a DHT stored repository of 2 million objects
takes 2 million row lookups in the DHT to locate the OBJECT_INDEX row
for each object being cloned. This is very difficult for some DHTs to
scale; even at 5000 rows/second the lookup stage alone takes 6 minutes
(on local filesystem, this is almost too fast to bother measuring).
Some servers like Apache Cassandra just fall over and cannot complete
the 2 million lookups in rapid fire.
On a ~400 MiB repository, the DHT schema has an extra 25 MiB of
redundant data that gets downloaded to the JGit process, and that is
before you consider the cost of the OBJECT_INDEX table also being
fully loaded, which is at least 223 MiB of data for the linux kernel
repository. In the DHT schema answering a `git clone` of the ~400 MiB
linux kernel needs to load 248 MiB of "index" data from the DHT, in
addition to the ~400 MiB of pack data that gets sent to the client.
This is 193 MiB more data to be accessed than the native filesystem
format, but it needs to come over a much smaller pipe (local Ethernet
typically) than the local SATA disk drive.
I also never got around to writing the "repack" support for the DHT
schema, as it turns out to be fairly complex to safely repack data in
the repository while also trying to minimize the amount of changes
made to the database, due to very common limitations on database
mutation rates.
This new DFS storage layer fixes a lot of those issues by taking the
simple approach for storing relatively standard Git pack and index
files on an abstract filesystem. Packs are accessed by an in-process
buffer cache, similar to the WindowCache used by the local filesystem
storage layer. Unlike local file IO, the storage system is assumed
to have relatively high latency and no concept of "file handles".
Instead it treats the file more like HTTP byte range requests, where
a read channel is simply a thunk to trigger a read request over the
network.
The DFS code in this change is still abstract; it does not store on
any particular filesystem, but is fairly well suited to Amazon S3 or
Apache Hadoop HDFS. Storing packs directly on HDFS rather than
HBase removes a layer of abstraction, as most HBase row reads turn
into an HDFS read.
Most of the DFS code in this change was blatantly copied from the
local filesystem code. Most parts should be refactored to be shared
between the two storage systems, but right now I am hesitant to do
this due to how well tuned the local filesystem code currently is.
Change-Id: Iec524abdf172e9ec5485d6c88ca6512cd8a6eafb
Actually this is not OK according to the RFC, but this implementation
is meant to be Git compatible. A '\' is needed when authentication
against a Windows domain is required or allowed, where the user name
can be specified as DOMAIN\user.
Change-Id: If02f258c032486f1afd2e09592a3c7069942eb8b
Open a repository for submodule entries that have a child .git
directory and use the resolved HEAD commit as the entry's id.
Change-Id: I68d6e127f018b24ee865865a2dd3011a0e21453c
Signed-off-by: Kevin Sawicki <kevin@github.com>