The test checks that an error is thrown when trying to create the same
tag a second time.
Change-Id: I4ed2f6c997587f0ea23bd26a32fb64a2d48a980e
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
If a client attempts to create a branch that already exists on the
remote side, tell them "already exists" rather than repeat lots of
information about the reference. Previously the error looked like:
! [remote rejected] tags/1.3.1 -> 1.3.1 (Ref Ref[refs/tags/1.3.1=e3857ee05...] already exists)
Now it will simply say:
! [remote rejected] tags/1.3.1 -> 1.3.1 (already exists)
Change-Id: I96fc67ca8b650052de6e662449a3c5bc8bbc010b
Instead of just returning null when something was not parseable, we
should throw a real ParseException. This allows us to distinguish
between specifications which are unparseable and those which represent
no date (e.g. "never").
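A minimal sketch of the resulting call pattern, assuming the parser
involved is GitDateParser.parse(String, Calendar) (the exact entry
point is an assumption here):

  try {
    // "never" parses to a sentinel value rather than null
    Date d = GitDateParser.parse("never", null);
  } catch (ParseException e) {
    // the specification was genuinely unparseable
  }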
Change-Id: Ib3c1aa64b65ed0e0270791a365f2fa72ab78a3f4
Implements a garbage collector for FileRepositories. Main ideas are
copied from the garbage collector for DFS based repos
(DfsGarbageCollector). Added functionalities are
- pruning loose objects
- handling of the index
- packing refs
- handling of reflogs (objects referenced from the reflog will not be
pruned)
These are features of a GC which are not handled in this change and
which should come with subsequent changes:
- unpacking packed objects into loose objects (so that pruning packed
objects doesn't delete them until they are older than two weeks)
- expiration of reflogs
- support for configuration parameters (e.g. gc.pruneExpire)
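As a usage sketch, assuming the collector is reachable through the
porcelain API as Git.gc() (an assumption; the entry point may differ):

  Git git = Git.open(new File("/path/to/repo"));
  // assumed to return statistics about the repository after GC
  Properties stats = git.gc().call();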
Change-Id: I14ea5cb7e0fd1b5c50b994fd77f4e05bfbb9d911
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
When receiving a pack, data buffered after the pack can be restored
to the InputStream if the stream supports mark and reset.
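A minimal illustration of the mark/reset contract this relies on
(plain java.io, nothing JGit-specific):

  InputStream in = new BufferedInputStream(rawIn);
  if (in.markSupported()) {
    in.mark(bufferSize); // remember the position before reading ahead
    // ... read past the end of the pack ...
    in.reset();          // hand the buffered bytes back to the stream
  }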
Change-Id: If04915c32c91be28db8df7e8491ed3e9fe0e1608
It is not always appropriate to use the .gitmodules file from the
working tree, for example if reading the modules at a specific commit.
And sometimes it is impossible, as in a bare repository.
When using the static factory methods, automatically set up the
appropriate root tree so lazy loading of the config file reads from
the appropriate place. Leave the current behavior of looking in the
working tree as a fallback for the case of walking the index.
Change-Id: I71b7ed3ba16c80b0adb8c5fd85b5c37fd4aef8eb
Let a Transport instance be opened with only a URI, for use in the
upcoming publish-subscribe feature.
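A sketch of the call this enables, assuming a Transport.open overload
that takes only a URIish (the exact signature is an assumption):

  Transport tn = Transport.open(new URIish("git://example.com/repo.git"));
  try {
    // use the transport without a local Repository
  } finally {
    tn.close();
  }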
Change-Id: I391c60c10d034b5c1c0ef19b1f24a9ba76b17bb5
This extracts the logic for writing to the reflog from
RefDirectory into a new ReflogWriter class. This class
creates a public API for writing reflog entries similar
to ReflogReader for reading reflog entries.
The new command supports rewriting the stash's log to remove
a configured entry followed by updating the stash ref to
the value at the bottom of the newly written log.
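Usage might look like the following, assuming the command is exposed
as a porcelain StashDropCommand (names are assumptions):

  // drop the most recent stash entry and rewrite refs/stash
  git.stashDrop().setStashRef(0).call();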
Change-Id: Icfcbc70e838666769a742a94196eb8dc9c7efcc7
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
Applies the changes in a stashed commit to the local working
directory and index.
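A usage sketch, assuming the command is exposed as Git.stashApply()
(an assumption):

  // apply the latest entry of refs/stash to the working tree and index
  git.stashApply().call();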
Bug: 309355
Change-Id: I9fd5ede8affc7f0060ffa7c5cec34573b6fa2b1b
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
Adds a new command to stash the index and working directory
changes in a commit stored in refs/stash.
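A usage sketch, assuming the command is exposed as Git.stashCreate()
(an assumption):

  // stash local modifications; returns the commit stored in refs/stash
  RevCommit stash = git.stashCreate().call();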
Bug: 309355
Change-Id: I2ce85b1601b74b07e286a3f99feb358dfbdfe29c
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
A '.git' file in a repository's working tree root is now parsed
as a ref to a folder located elsewhere. This supports submodules
having their repository location outside of the parent repository's
working directory such as in the parent repository's '.git/modules'
directory.
This adds support to BaseRepositoryBuilder for repositories created
with the '--separate-git-dir' option specified to 'git init'.
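For reference, such a '.git' file holds a single 'gitdir:' line that
points at the real repository directory, for example:

  gitdir: ../.git/modules/mymodule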
Change-Id: I73c538f6d845bdbc0c4e2bce5a77f900cf36e1a9
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Interpret submodule URLs that start with './' or '../' as
relative to either the configured remote for the HEAD branch,
or 'origin', or the parent repository working directory if no
remote URL is configured.
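For example, with remote.origin.url set to
https://example.com/group/parent.git, a submodule URL of './lib.git'
would resolve to https://example.com/group/lib.git.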
Bug: 368536
Change-Id: Id4985824023b75cd45cd64a4dd9d421166391e10
Adds the following commands:
- Add
- Init
- Status
- Sync
- Update
This also updates AddCommand so that added file patterns that refer
to submodules can be staged in the index.
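A usage sketch of the new porcelain commands (method names assumed to
follow the existing Git command naming):

  Git git = Git.open(new File("/path/to/repo"));
  git.submoduleInit().call();
  git.submoduleUpdate().call();
  Map<String, SubmoduleStatus> status = git.submoduleStatus().call();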
Change-Id: Ie5112aa26430e5a2a3acd65a7b0e1d76067dc545
Signed-off-by: Kevin Sawicki <kevin@github.com>
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
This allows calling classes to handle lock failures
without checking against the message and also provides
access to the file that could not be locked.
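Callers could then handle the failure by type; a sketch, assuming the
new exception exposes the contested file via getFile():

  try {
    // some operation that takes a file lock
  } catch (LockFailedException e) {
    System.err.println("Could not lock " + e.getFile());
  }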
Change-Id: I95bc59e1330a7af71ae3b0485c4516299193f504
Revision strings such as 'master@{0}' can now be resolved
by Repository.resolve by reading the reflog for the ref and
returning the commit for the entry number specified.
This still throws an exception for cases not supported
such as 'master@{yesterday}'.
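For example, resolving the most recent reflog entry of master:

  ObjectId id = repository.resolve("master@{0}");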
Change-Id: I6162777d6510e083565a77cac4545cda5a9aefb3
Throw a NoHeadException when Repository.getFullBranch
returns null
Bug: 351543
Change-Id: I666cd5b67781508a293ae553c6fe5c080c8f4d99
Signed-off-by: Kevin Sawicki <kevin@github.com>
ReceivePack (and PackParser) can be configured with the
maxObjectSizeLimit in order to prevent users from pushing too large
objects to Git. The limit check is applied to all object types
although it is most likely that a BLOB will exceed the limit. In all
cases the size of the object header is excluded from the object size
that is checked against the limit, as this is the size a BLOB object
would take in the working tree when checked out as a file.
When an object exceeds the maxObjectSizeLimit the receive-pack will
abort immediately.
Delta objects (both offset and ref delta) are also checked against the
limit. However, for delta objects we will first check the size of the
inflated delta block against the maxObjectSizeLimit and abort
immediately if it exceeds the limit. In this case we do not even know
the exact size of the resolved delta object but we assume it will be
larger than the given maxObjectSizeLimit, as a delta is generally only
chosen if it can copy more data from the base object than it needs to
insert or to represent the copy ranges. Aborting
early, in this case, avoids unnecessary inflating of the (huge) delta
block.
Unfortunately, it is too expensive (especially for a large delta) to
compute SHA-1 of an object that causes the receive-pack to abort.
This would decrease the value of this feature whose main purpose is to
protect server resources from users pushing huge objects. Therefore
we don't report the SHA-1 in the error message.
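A configuration sketch, assuming the limit is exposed as a setter on
ReceivePack (the setter name is an assumption):

  ReceivePack rp = new ReceivePack(repository);
  // reject any object larger than 10 MiB (header size excluded)
  rp.setMaxObjectSizeLimit(10 * 1024 * 1024);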
Change-Id: I177ef24553faacda444ed5895e40ac8925ca0d1e
Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Doubled ' characters are needed for variables to appear in
single quotes. Variables surrounded with a single ' will
not be replaced when formatted.
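This is standard java.text.MessageFormat behavior, for example:

  MessageFormat.format("Tag ''{0}'' already exists", "v1.0");
  // -> Tag 'v1.0' already exists
  MessageFormat.format("Tag '{0}' already exists", "v1.0");
  // -> Tag {0} already exists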
Change-Id: I0182c1f679ba879ca19dd81bf46924f415dc6003
Signed-off-by: Kevin Sawicki <kevin@github.com>
Exposes essentially the same state machine to the programmer as is
exposed to the client via a ProgressMonitor, using a wrapper around
beginTask()/endTask().
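For orientation, the ProgressMonitor state machine being wrapped looks
roughly like this:

  monitor.start(2); // expected number of tasks
  monitor.beginTask("Finding sources", 100);
  monitor.update(50);
  monitor.endTask();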
Change-Id: Ic3622b4acea65d2b9b3551c668806981fa7293e3
In practice the DHT storage layer has not been performing as well as
large scale server environments want to see from a Git server.
The performance of the DHT schema degrades rapidly as small changes
are pushed into the repository due to the chunk size being less than
1/3 of the pushed pack size. Small chunks cause poor prefetch
performance during reading, and require significantly longer prefetch
lists inside of the chunk meta field to work around the small size.
The DHT code is very complex (>17,000 lines of code) and is very
sensitive to the underlying database round-trip time, as well as the
way objects were written into the pack stream that was chunked and
stored on the database. A poor pack layout (from any version of C Git
prior to Junio reworking it) can cause the DHT code to be unable to
enumerate the objects of the linux-2.6 repository in a completable
time scale.
Performing a clone from a DHT stored repository of 2 million objects
takes 2 million row lookups in the DHT to locate the OBJECT_INDEX row
for each object being cloned. This is very difficult for some DHTs to
scale, even at 5000 rows/second the lookup stage alone takes 6 minutes
(on local filesystem, this is almost too fast to bother measuring).
Some servers like Apache Cassandra just fall over and cannot complete
the 2 million lookups in rapid fire.
On a ~400 MiB repository, the DHT schema has an extra 25 MiB of
redundant data that gets downloaded to the JGit process, and that is
before you consider the cost of the OBJECT_INDEX table also being
fully loaded, which is at least 223 MiB of data for the linux kernel
repository. In the DHT schema answering a `git clone` of the ~400 MiB
linux kernel needs to load 248 MiB of "index" data from the DHT, in
addition to the ~400 MiB of pack data that gets sent to the client.
This is 193 MiB more data to be accessed than the native filesystem
format, but it needs to come over a much smaller pipe (local Ethernet
typically) than the local SATA disk drive.
I also never got around to writing the "repack" support for the DHT
schema, as it turns out to be fairly complex to safely repack data in
the repository while also trying to minimize the amount of changes
made to the database, due to very common limitations on database
mutation rates.
This new DFS storage layer fixes a lot of those issues by taking the
simple approach for storing relatively standard Git pack and index
files on an abstract filesystem. Packs are accessed by an in-process
buffer cache, similar to the WindowCache used by the local filesystem
storage layer. Unlike the local file IO, there are some assumptions
that the storage system has relatively high latency and no concept of
"file handles". Instead it looks at the file more like HTTP byte range
requests, where a read channel is simply a thunk to trigger a read
request over the network.
The DFS code in this change is still abstract, it does not store on
any particular filesystem, but is fairly well suited to Amazon S3
or Apache Hadoop HDFS. Storing packs directly on HDFS rather than
HBase removes a layer of abstraction, as most HBase row reads turn
into an HDFS read.
Most of the DFS code in this change was blatantly copied from the
local filesystem code. Most parts should be refactored to be shared
between the two storage systems, but right now I am hesitant to do
this due to how well tuned the local filesystem code currently is.
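A minimal sketch of building on the abstract layer, assuming the
in-memory test implementation that accompanies it (class names are
assumptions):

  DfsRepositoryDescription desc = new DfsRepositoryDescription("test");
  Repository repo = new InMemoryRepository(desc);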
Change-Id: Iec524abdf172e9ec5485d6c88ca6512cd8a6eafb
Though it may seem less precise, "0 months" looks bad and the reference
Git implementation also does not display "0 months".
Change-Id: I488e9c97656f9941788ae88d7c5c1562ab6c26f0
Adds a method to the DiffEntry class that allows specifying whether
changed trees are included in the scan result list. By default changed
trees aren't added, but in some cases having the changed tree would be
useful.
Also adds a check of the tree count in TreeWalk; when it differs from
two, an IllegalArgumentException is thrown.
This change is required by EGit change
I7ddb21e7ff54333dd6d7ace3209bbcf83da2b219.
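A call sketch, assuming the toggle is an overload of DiffEntry.scan
(an assumption):

  TreeWalk walk = new TreeWalk(repository);
  walk.addTree(oldTreeId);
  walk.addTree(newTreeId);
  // the second argument requests that changed trees be included
  List<DiffEntry> entries = DiffEntry.scan(walk, true);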
Change-Id: I5a680a73e1cffa18ade3402cc86008f46c1da1f1
Signed-off-by: Dariusz Luksza <dariusz@luksza.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When trying to clone into a folder that already contains a cloned
repository native git will fail with a message "fatal: destination path
'folder' already exists and is not an empty directory.". Now JGit will
also fail in this situation, throwing a JGitInternalException.
The test case was provided by Tomasz Zarna.
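Callers can now expect the failure as an exception; a sketch:

  try {
    Git.cloneRepository()
        .setURI("https://example.com/repo.git")
        .setDirectory(new File("folder")) // already contains a clone
        .call();
  } catch (JGitInternalException e) {
    // destination path already exists and is not empty
  }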
Bug: 347852
Change-Id: If9e9919a5f92d13cf038dc470c21ee5967322dac
Also-by: Tomasz Zarna <Tomasz.Zarna@pl.ibm.com>
Signed-off-by: Adrian Goerler <adrian.goerler@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
If an internal exception occurs while packing and the request
needs to abort, the HTTP response might already be committed due
to progress messages having already been delivered to the client.
This prevents UploadPackServlet from resetting the response and
sending back an HTTP 500 response.
Try to catch all exceptions and report internal errors over the
sideband stream or as an ERR command during the initial ACK/NAK
negotiation phase. This allows JGit to transmit an error message
that the user will receive on their console without needing to
worry about resetting the (already gone) HTTP response.
Change-Id: Ie393fb8bb55d2b79ab1276adf71c781c1807f9fe
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If a fetch or push needs to apply more than a few references
to the local repository it may take more than 0.25 seconds to
process all of the updates. This is especially true in the DHT
storage system during an initial push of a project with many tags.
The backend database may need to use a transaction to ensure each
tag reference creation is unique, and there may be large delays
caused by these transactions.
Change-Id: Ib11a077adfbd525253e425d327f2e2c2380804c7
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
BlameGenerator digs through history and discovers the origin of each
line of some result file. BlameResult consumes the stream of regions
created by the generator and lays them out in a table for applications
to display alongside source lines.
Applications may optionally push in the working tree copy of a file
using the push(String, byte[]) method, allowing the application to
receive accurate line annotations for the working tree version. Lines
that are uncommitted (difference between HEAD and working tree) will
show up with the description given by the application as the author,
or "Not Committed Yet" as a default string.
Applications may also run the BlameGenerator in reverse mode using the
reverse(AnyObjectId, AnyObjectId) method instead of push(). When
running in the reverse mode the generator annotates lines by the
commit they are removed in, rather than the commit they were added in.
This allows a user to discover where a line disappeared from when they
are looking at an older revision in the repository. For example:
blame --reverse 16e810b2..master -L 1080, org.eclipse.jgit.test/tst/org/eclipse/jgit/storage/file/RefDirectoryTest.java
( 1080) }
2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1081)
2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1082) /**
2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1083) * Kick the timestamp of a local file.
Above we learn that line 1080 (a closing curly brace of the prior
method) still exists in branch master, but the Javadoc comment below
it has been removed by Christian Halstrick on May 20th as part of
commit 2302a6d3. This result differs considerably from that of C
Git's blame --reverse feature. JGit tells the reader which commit
performed the delete, while C Git tells the reader the last commit
that still contained the line, leaving it an exercise to the reader
to discover the descendant that performed the removal.
This is still only a basic implementation. Quite notably it is
missing support for the smart block copy/move detection that the C
implementation of `git blame` is well known for. Despite being
incremental, the BlameGenerator can only be run once. After the
generator runs it cannot be reused. A better implementation would
support applications browsing through history efficiently.
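A usage sketch of the generator API described above, assuming
computeBlameResult() drives the generator to completion (argument
choices are illustrative):

  BlameGenerator gen = new BlameGenerator(repository, "src/Main.java");
  gen.push(null, repository.resolve("HEAD"));
  BlameResult blame = gen.computeBlameResult();
  RevCommit firstLine = blame.getSourceCommit(0);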
In regards to CQ 5110, only a little of the original code survives.
CQ: 5110
Bug: 306161
Change-Id: I84b8ea4838bb7d25f4fcdd540547884704661b8f
Signed-off-by: Kevin Sawicki <kevin@github.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
Instead of looping over the objectsLists array, always set slot 0 to
null and explicitly work on the 4 indexes that matter. This kills
some loops and increases the length of the code slightly, but I've
always really disliked that dummy 0 slot.
Change-Id: I5ad938501c1c61f637ffdaff0d0d88e3962d8942
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Instead of aborting hard with a server-side exception, report an error
to the client with "ERR %s" in a context where the client is expecting
ACK/NAK. Older clients will report this text to the user, but newer
ones know how to format this message in a more user-friendly way.
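On the wire the error travels as a pkt-line whose payload starts with
"ERR "; for example (the length prefix counts its own four bytes):

  0015ERR access denied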
Change-Id: I1879b38988ba66f648c069c10dbfa14c3f34adb2
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The new TransportProtocol type describes what a particular Transport
implementation wants in order to support a connection. 3rd parties
can now plug into the Transport.open() logic by implementing their
own TransportProtocol and Transport classes, and registering with
Transport.register().
GUI applications can help the user configure a connection by looking
at the supported fields of a particular TransportProtocol type, which
makes the GUI more dynamic and may better support new Transports.
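A registration sketch, hedged on the exact abstract methods of
TransportProtocol:

  Transport.register(new TransportProtocol() {
    public String getName() {
      return "demo";
    }
    public Transport open(URIish uri, Repository local, String name)
        throws NotSupportedException, TransportException {
      return new DemoTransport(local, uri); // hypothetical Transport
    }
  });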
Change-Id: Iafd8e3a6285261412aac6cba8e2c333f8b7b76a5
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This change adds the --only/-o option to the commit command.
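In the porcelain API this is assumed to surface as
CommitCommand.setOnly (the mapping is an assumption):

  // commit only the named path, regardless of what else is staged
  git.commit().setOnly("src/Main.java").setMessage("fix").call();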
Change-Id: I44352d56877f8204d985cb7a35a2e0faffb7d341
Signed-off-by: Philipp Thun <philipp.thun@sap.com>
Using a resolver and factory pattern for the anonymous git:// Daemon
class makes transport.Daemon more useful on non-file storage systems,
or in embedded applications where the caller wants more precise
control over the work tasks constructed within the daemon.
Rather than defining new interfaces, move the existing HTTP ones
into transport.resolver and make them generic on the connection
handle type. For HTTP, continue to use HttpServletRequest, and
for transport.Daemon use DaemonClient.
To remain compatible with transport.Daemon, FileResolver needs to
learn how to use multiple base directories, and how to export any
Repository instance at a fixed name.
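A wiring sketch, assuming the resolver hook is exposed as a setter on
transport.Daemon (names are assumptions):

  Daemon daemon = new Daemon(new InetSocketAddress(9418));
  daemon.setRepositoryResolver(new RepositoryResolver<DaemonClient>() {
    public Repository open(DaemonClient client, String name)
        throws RepositoryNotFoundException {
      return lookup(name); // hypothetical repository lookup
    }
  });
  daemon.start();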
Change-Id: I1efa6b2bd7c6567e983fbbf346947238ea2e847e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The most expensive part of packing a repository for transport to
another system is enumerating all of the objects in the repository.
Once this gets to the size of the linux-2.6 repository (1.8 million
objects), enumeration can take several CPU minutes and costs a lot
of temporary working set memory.
Teach PackWriter to efficiently reuse an existing "cached pack"
by answering a clone request with a thin pack followed by a larger
cached pack appended to the end. This requires the repository
owner to first construct the cached pack by hand, and record the
tip commits inside of $GIT_DIR/objects/info/cached-packs:
  cd $GIT_DIR
  root=$(git rev-parse master)
  tmp=objects/.tmp-$$
  names=$(echo $root | git pack-objects --keep-true-parents --revs $tmp)
  for n in $names; do
    chmod a-w $tmp-$n.pack $tmp-$n.idx
    touch objects/pack/pack-$n.keep
    mv $tmp-$n.pack objects/pack/pack-$n.pack
    mv $tmp-$n.idx objects/pack/pack-$n.idx
  done
  (echo "+ $root";
   for n in $names; do echo "P $n"; done;
   echo) >>objects/info/cached-packs
  git repack -a -d
When a clone request needs to include $root, the corresponding
cached pack will be copied as-is, rather than enumerating all of
the objects that are reachable from $root.
For a linux-2.6 kernel repository that should be about 376 MiB,
the above process creates two packs of 368 MiB and 38 MiB[1].
This is a local disk usage increase of ~26 MiB, due to reduced
delta compression between the large cached pack and the smaller
recent activity pack. The overhead is similar to 1 full copy of
the compressed project sources.
With this cached pack in hand, JGit daemon completes a clone request
in 1m17s less time, but a slightly larger data transfer (+2.39 MiB):
Before:
remote: Counting objects: 1861830, done
remote: Finding sources: 100% (1861830/1861830)
remote: Getting sizes: 100% (88243/88243)
remote: Compressing objects: 100% (88184/88184)
Receiving objects: 100% (1861830/1861830), 376.01 MiB | 19.01 MiB/s, done.
remote: Total 1861830 (delta 4706), reused 1851053 (delta 1553844)
Resolving deltas: 100% (1564621/1564621), done.
real 3m19.005s
After:
remote: Counting objects: 1601, done
remote: Counting objects: 1828460, done
remote: Finding sources: 100% (50475/50475)
remote: Getting sizes: 100% (18843/18843)
remote: Compressing objects: 100% (7585/7585)
remote: Total 1861830 (delta 2407), reused 1856197 (delta 37510)
Receiving objects: 100% (1861830/1861830), 378.40 MiB | 31.31 MiB/s, done.
Resolving deltas: 100% (1559477/1559477), done.
real 2m2.938s
Repository owners can periodically refresh their cached packs by
repacking their repository, folding all newer objects into a larger
cached pack. Since repacking is already considered to be a normal
Git maintenance activity, this isn't a very big burden.
[1] In this test $root was set back about two weeks.
Change-Id: Ib87131d5c4b5e8c5cacb0f4fe16ff4ece554734b
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>