A checked Exception is thrown instead.
The reason for throwing an Exception is that the state of the
repository is inconsistent in this case: there is a merge
configuration referring to a non-existent local branch. Ideally,
deleting a local branch should also delete the corresponding
merge configuration.
Bug: 337315
Change-Id: I8ed57d5aaed60aaab685fc11a8695e474e60215f
Signed-off-by: Stefan Lay <stefan.lay@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Some clients coming through proxies may advertise a different
Accept-Encoding, for example "Accept-Encoding: gzip(proxy)".
Matching by substring yields a false positive: we conclude that
the client understands gzip encoding and will inflate the
response before reading it.
In this particular case, however, it doesn't. It's the reverse
proxy server in front of JGit letting us know the proxy<->JGit
link can be gzip compressed, while the client<->proxy part of the
link is not:
client <-- no gzip --> proxy <-- gzip --> JGit
Use a more standard method of parsing: split the value into
tokens, and use gzip only if one of the tokens is exactly the
string "gzip". Add a unit test to make sure this isn't broken in
the future.
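A minimal sketch of the token-based check described above (method
name hypothetical, not the actual JGit code):

  static boolean acceptsGzip(String acceptEncoding) {
      if (acceptEncoding == null)
          return false;
      // Compare whole tokens so "gzip(proxy)" no longer matches.
      for (String token : acceptEncoding.split(","))
          if ("gzip".equals(token.trim()))
              return true;
      return false;
  }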
Change-Id: Ib4c40f9db177322c7a2640808a6c10b3c4a73819
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
isOutdated returns true iff the memory state differs from the index
file.
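A hypothetical usage sketch, assuming the method lives on DirCache:

  // repository is an org.eclipse.jgit.lib.Repository; re-read the
  // index only when the on-disk file changed since it was read.
  DirCache dc = repository.readDirCache();
  // ... another process may modify .git/index here ...
  if (dc.isOutdated())
      dc = repository.readDirCache();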
Change-Id: If35db06743f5f588ab19d360fd2a18a07c918edb
Signed-off-by: Jens Baumgart <jens.baumgart@sap.com>
When pulling into a local branch that has no upstream configuration,
pull should try to use the default remote ("origin") instead of
throwing an Exception.
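For illustration, a fully configured upstream looks like this in
.git/config (names hypothetical); when the branch section is
missing, pull now falls back to the default remote:

  [remote "origin"]
      url = git://example.com/repo.git
      fetch = +refs/heads/*:refs/remotes/origin/*
  [branch "topic"]
      remote = origin
      merge = refs/heads/topic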
Bug: 336504
Change-Id: Ife75858e89ea79c0d6d88ba73877fe8400448e34
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
If the command contains spaces, it needs to be evaluated by the remote
shell. Quoting the command breaks this, making it impossible to run a
remote command that needs additional options.
Bug: 336301
Change-Id: Ib5d88f0b2151df2d1d2b4e08d51ee979f6da67b5
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
JGit did not use sh -c to run the receive-pack or upload-pack programs
locally, which caused errors if these strings contained spaces and
needed the local shell to evaluate them.
Win32 support using cmd.exe /c is completely untested, but seems like
it should work based on the limited information I could get through
Google search results.
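A minimal sketch of the approach (helper name hypothetical; the
actual JGit code differs):

  // Hand the whole command string to a shell for evaluation so that
  // embedded spaces and options survive; quoting it as a single
  // argument would not.
  static ProcessBuilder evalInShell(String command) {
      String os = System.getProperty("os.name");
      if (os != null && os.startsWith("Windows"))
          return new ProcessBuilder("cmd.exe", "/c", command); // untested
      return new ProcessBuilder("sh", "-c", command);
  }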
Bug: 336301
Change-Id: I22e5e3492fdebbae092d1ce6b47ad411e57cc1ba
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The most expensive part of packing a repository for transport to
another system is enumerating all of the objects in the repository.
Once this gets to the size of the linux-2.6 repository (1.8 million
objects), enumeration can take several CPU minutes and costs a lot
of temporary working set memory.
Teach PackWriter to efficiently reuse an existing "cached pack"
by answering a clone request with a thin pack followed by a larger
cached pack appended to the end. This requires the repository
owner to first construct the cached pack by hand, and record the
tip commits inside of $GIT_DIR/objects/info/cached-packs:
  cd $GIT_DIR
  root=$(git rev-parse master)
  tmp=objects/.tmp-$$
  names=$(echo $root | git pack-objects --keep-true-parents --revs $tmp)
  for n in $names; do
      chmod a-w $tmp-$n.pack $tmp-$n.idx
      touch objects/pack/pack-$n.keep
      mv $tmp-$n.pack objects/pack/pack-$n.pack
      mv $tmp-$n.idx objects/pack/pack-$n.idx
  done
  (echo "+ $root";
   for n in $names; do echo "P $n"; done;
   echo) >>objects/info/cached-packs
  git repack -a -d
When a clone request needs to include $root, the corresponding
cached pack will be copied as-is, rather than enumerating all of
the objects that are reachable from $root.
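For illustration only (not JGit's implementation), a sketch that
reads these records; "+ <sha1>" lines name tip commits, "P <name>"
lines name pack files, and a blank line ends each record:

  import java.io.BufferedReader;
  import java.io.File;
  import java.io.FileReader;
  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.List;

  static void readCachedPacks(File gitDir) throws IOException {
      File f = new File(gitDir, "objects/info/cached-packs");
      BufferedReader br = new BufferedReader(new FileReader(f));
      try {
          List<String> tips = new ArrayList<String>();
          List<String> packs = new ArrayList<String>();
          String line;
          while ((line = br.readLine()) != null) {
              if (line.startsWith("+ "))
                  tips.add(line.substring(2)); // tip commit SHA-1
              else if (line.startsWith("P "))
                  packs.add(line.substring(2)); // pack name
              else if (line.length() == 0 && !tips.isEmpty()) {
                  // blank line: one cached-pack record is complete
                  System.out.println(packs + " covers tips " + tips);
                  tips = new ArrayList<String>();
                  packs = new ArrayList<String>();
              }
          }
      } finally {
          br.close();
      }
  }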
For a linux-2.6 kernel repository that should be about 376 MiB,
the above process creates two packs of 368 MiB and 38 MiB[1].
This is a local disk usage increase of ~26 MiB, due to reduced
delta compression between the large cached pack and the smaller
recent activity pack. The overhead is similar to 1 full copy of
the compressed project sources.
With this cached pack in hand, JGit daemon completes a clone request
in 1m17s less time, but with a slightly larger data transfer
(+2.39 MiB):
Before:
remote: Counting objects: 1861830, done
remote: Finding sources: 100% (1861830/1861830)
remote: Getting sizes: 100% (88243/88243)
remote: Compressing objects: 100% (88184/88184)
Receiving objects: 100% (1861830/1861830), 376.01 MiB | 19.01 MiB/s, done.
remote: Total 1861830 (delta 4706), reused 1851053 (delta 1553844)
Resolving deltas: 100% (1564621/1564621), done.
real 3m19.005s
After:
remote: Counting objects: 1601, done
remote: Counting objects: 1828460, done
remote: Finding sources: 100% (50475/50475)
remote: Getting sizes: 100% (18843/18843)
remote: Compressing objects: 100% (7585/7585)
remote: Total 1861830 (delta 2407), reused 1856197 (delta 37510)
Receiving objects: 100% (1861830/1861830), 378.40 MiB | 31.31 MiB/s, done.
Resolving deltas: 100% (1559477/1559477), done.
real 2m2.938s
Repository owners can periodically refresh their cached packs by
repacking their repository, folding all newer objects into a larger
cached pack. Since repacking is already considered to be a normal
Git maintenance activity, this isn't a very big burden.
[1] In this test $root was set back about two weeks.
Change-Id: Ib87131d5c4b5e8c5cacb0f4fe16ff4ece554734b
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CGit pack-objects displays a totals line after the pack data
is fully written. This can be useful for understanding some of
the decisions made by the packer, and has been a great tool
for helping to debug some of that code.
Track some of the basic values, and send it to the client when
packing is done:
remote: Counting objects: 1826776, done
remote: Finding sources: 100% (55121/55121)
remote: Getting sizes: 100% (25654/25654)
remote: Compressing objects: 100% (11434/11434)
remote: Total 1861830 (delta 3926), reused 1854705 (delta 38306)
Receiving objects: 100% (1861830/1861830), 386.03 MiB | 30.32 MiB/s, done.
Change-Id: If3b039017a984ed5d5ae80940ce32bda93652df5
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
It isn't strictly necessary to validate that every reference's
target object is reachable in the repository before advertising it
to a client. This is an expensive operation when there are thousands
of references, and it's very unlikely that a reference uses a
missing object, because garbage collection proceeds from the
references and walks down through the graph. So trying to hide a
dangling reference from clients is relatively pointless.
Even if we are trying to avoid giving a client a corrupt repository,
this simple check isn't sufficient. It is possible for a reference to
point to a valid commit, but that commit to have a missing blob in its
root tree. This can be caused by staging a file into the index,
waiting several weeks, then committing that file while also racing
against a prune. The prune may delete the blob, since its
modification time is more than 2 weeks ago, but retain the commit,
since its modification time is right now.
Such graph corruption is already caught by PackWriter as it
enumerates the graph from the client's want list and digs back
to the roots or common base. Leave reference validation to that
same phase, where we know we have to parse the object to
support the enumeration.
Change-Id: Iee70ead0d3ed2d2fcc980417d09d7a69b05f5c2f
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Because of change I28ae5713, the commit message lost the "into HEAD"
suffix, causing MergeCommandTest to fail. This change fixes it.
Bug: 336059
Change-Id: Ifac0138c6c6d66c40d7295b5e11ff3cd98bc9e0c
PushCommand no longer sets a null credentials provider on
Transport: doing so replaced the default provider with null, so
the default mechanism for providing credentials stopped working.
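A sketch of the resulting guard (field names illustrative): only
override Transport's default when the caller supplied a provider.

  if (credentialsProvider != null)
      transport.setCredentialsProvider(credentialsProvider);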
Bug: 336023
Change-Id: I7a7a9221afcfebe2e1595a5e59641e6c1ae4a207
Signed-off-by: Jens Baumgart <jens.baumgart@sap.com>
When MergeMessageFormatter was given a symbolic ref HEAD which points to
refs/heads/master (which is the case when merging a branch in EGit), it
would result in a merge message like the following:
Merge branch 'a' into HEAD
But it should print the following (as C Git does):
Merge branch 'a'
The solution is to use the leaf ref when checking for refs/heads/master.
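A sketch of that check, with ref being the org.eclipse.jgit.lib.Ref
passed in:

  // getLeaf() follows the symbolic ref HEAD to refs/heads/master,
  // so the name comparison sees the real branch, not "HEAD".
  Ref leaf = ref.getLeaf();
  boolean intoMaster = "refs/heads/master".equals(leaf.getName());
  // when intoMaster is true, the "into <branch>" suffix is omitted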
Change-Id: I28ae5713b7e8123a0176fc6d7356e469900e7e97
There is no point in pushing all of the files within the edge
commits into the delta search when making a thin pack. This floods
the delta search window with objects that are unlikely to be useful
bases for the objects that will be written out, resulting in lower
data compression and higher transfer sizes.
Instead observe the path of a tree or blob that is being pushed
into the outgoing set, and use that path to locate up to WINDOW
ancestor versions from the edge commits. Push only those objects
into the edgeObjects set, reducing the number of objects seen by the
search window. This allows PackWriter to only look at ancestors
for the modified files, rather than all files in the project.
Limiting the search to WINDOW size makes sense, because any edge
objects beyond WINDOW will simply skip through the window search,
as none of them need to be delta compressed.
To further improve compression, sort edge objects into the front
of the window list, rather than randomly throughout. This puts
non-edges later in the window and gives them a better chance at
finding their base, since they search backwards through the window.
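Illustrative only (the window list and isEdge flag are assumptions
about PackWriter's internals, not its actual code):

  // Stable sort: edges move to the front while non-edges keep their
  // relative order and search backwards toward the edges for bases.
  Collections.sort(window, new Comparator<ObjectToPack>() {
      public int compare(ObjectToPack a, ObjectToPack b) {
          return (a.isEdge() ? 0 : 1) - (b.isEdge() ? 0 : 1);
      }
  });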
These changes make a significant difference in the thin-pack:
Before:
remote: Counting objects: 144190, done
remote: Finding sources: 100% (50275/50275)
remote: Getting sizes: 100% (101405/101405)
remote: Compressing objects: 100% (7587/7587)
Receiving objects: 100% (50275/50275), 24.67 MiB | 9.90 MiB/s, done.
Resolving deltas: 100% (40339/40339), completed with 2218 local objects.
real 0m30.267s
After:
remote: Counting objects: 61549, done
remote: Finding sources: 100% (50275/50275)
remote: Getting sizes: 100% (18862/18862)
remote: Compressing objects: 100% (7588/7588)
Receiving objects: 100% (50275/50275), 11.04 MiB | 3.51 MiB/s, done.
Resolving deltas: 100% (43160/43160), completed with 5014 local objects.
real 0m22.170s
The resulting pack is 13.63 MiB smaller, even though it contains the
same exact objects. 82,543 fewer objects had to have their sizes
looked up, which saved about 8s of server CPU time. 2,796 more
objects from the client were used as part of the base object set,
which contributed to the smaller transfer size.
Change-Id: Id01271950432c6960897495b09deab70e33993a9
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
Some of this code predates making ObjectId.equals() final
and fixing RevObject.equals() to match ObjectId.equals().
It was therefore more complex than it needed to be, because
it tried to work around RevObject's broken equals() rules
by converting to ObjectId in a different collection.
Also combine the setUpWalker() and findObjectsToPack() methods;
these can be one method and the code is actually cleaner.
Change-Id: I0f4cf9997cd66d8b6e7f80873979ef1439e507fe
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
The first 'Compressing objects' progress message is wrong; it's
actually PackWriter looking up the sizes of each object in the
ObjectDatabase so that objects can be sorted correctly in the later
type-size sort, which tries to take advantage of "Linus' Law" to
improve delta compression.
Rename the progress message to say 'Getting sizes', which is an
accurate description of what it is doing.
Change-Id: Ida0a052ad2f6e994996189ca12959caab9e556a3
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
When compressing objects, don't include the edges in the progress
meter. These cost almost no CPU time as they are simply pushed into
and popped out of the delta search window.
Change-Id: I7ea19f0263e463c65da34a7e92718c6db1d4a131
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
Enhance the Git API to support cloning repositories.
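A usage sketch of the new API (URI and target directory are
examples):

  Git git = Git.cloneRepository()
      .setURI("git://egit.eclipse.org/jgit.git")
      .setDirectory(new File("/tmp/jgit"))
      .call();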
Bug: 334763
Change-Id: Ibe1191498dceb9cbd1325aed85b4c403db19f41e
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
CGit push clients 1.6.6 and later support progress messages on the
side-band-64k channel during push; this was introduced to handle
server-side hook errors reported over smart HTTP.
Since JGit's delta resolution isn't always as fast as CGit's,
a user may think the server has crashed and failed to report
status if the user pushed a lot of content and sees no feedback.
Exposing the progress monitor during the resolving deltas phase
will let the user know the server is still making forward progress.
This also helps BasePackPushConnection, which has a bounded timeout
on how long it will wait before assuming the remote server is dead.
Progress messages pushed down the side-band channel will reset the
read timer, helping the connection to stay alive and avoid timing
out before the remote side's work is complete.
Change-Id: I429c825e5a724d2f21c66f95526d9c49edcc6ca9
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Non-commits are added to a pending queue, but duplicates are
removed by checking a flag. During a reset that flag must be
stripped off the old roots; otherwise the caller cannot reuse
the old roots after the reset.
RevWalk already does this correctly for commits, but ObjectWalk
failed to handle the non-commit case itself.
Change-Id: I99e1832bf204eac5a424fdb04f327792e8cded4a
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This was a mistake that was missed due to historical reasons.
"The first /r/ tells our Apache to redirect the request to Gerrit.
The second /r/ tells Gerrit that the thing following is a Git SHA-1
and it should try to locate the changes that use that commit object.
Nothing I can easily do about it now. The second /r/ is historical
and comes from Gerrit 1.x days."
Change-Id: Iec2dbf5e077f29c0e0686cab11ef197ffc705012
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
Consultation with Christian Halstrick showed that the handling
of rebase during pull was implemented incorrectly.
Change-Id: I40f03409e080cdfeceb21460150f5e02a016e7f4
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
The new addIfAbsent() method combines get() with add(), but does
it in a single step, so that in the common case where get() would
return null for a new object, the object is inserted immediately.
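A usage sketch, assuming the method was added to
ObjectIdSubclassMap:

  ObjectIdSubclassMap<RevObject> map =
      new ObjectIdSubclassMap<RevObject>();
  // One probe: returns the existing entry when obj's ObjectId is
  // already mapped, otherwise inserts obj and returns it.
  RevObject unique = map.addIfAbsent(obj);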
Change-Id: Ib599ab4de13ad67665ccfccf3ece52ba3222bcba
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This reverts commit f5fe2dca3c.
I regret adding this feature to the public API. Caches aren't always
the best idea, as they require work to maintain. Here the cache is
redundant information that must be computed, and when it grows stale
must be removed. The redundant information takes up more disk space,
about the same size as the pack-*.idx files. For the linux-2.6
repository, that's more than 40 MB for a 400 MB repository. So the
cache is a 10% increase in disk usage.
The entire point of this cache is to improve PackWriter performance,
and only PackWriter performance, and only when sending an initial
clone to a new client. There may be better ways to optimize this, and
until we have a solid solution, we shouldn't be using a separate cache
in JGit.
Rebase must honor the upstream configuration
branch.<branchname>.rebase
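For reference, the relevant entry in .git/config looks like this
(branch name hypothetical); when rebase is true, pull rebases the
branch onto the fetched upstream instead of merging:

  [branch "topic"]
      remote = origin
      merge = refs/heads/topic
      rebase = true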
Change-Id: Ic94f263d3f47b630ad75bd5412cb4741bb1109ca
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
This bug was hidden by an incomplete test: the current Rebase
implementation using the "git rebase -i" pattern does not work
correctly if fast-forwarding is involved. The reason for this is that
the log command does not return any commits in this case.
In addition, a check for already merged commits was introduced to
avoid spurious conflicts.
Change-Id: Ib9898fe0f982fa08e41f1dca9452c43de715fdb6
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
The IOException constructor taking an Exception as a parameter is
new in JDK 6.
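A JDK 5-compatible sketch of the replacement: chain the cause via
initCause(), which has been available since Java 1.4, instead of
using the JDK 6 constructor.

  IOException io = new IOException(e.getMessage());
  io.initCause(e);
  throw io;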
Change-Id: Iec349fc7be9e9fbaeb53841894883c47a98a7b29
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>