Delta search was discarding discovered deltas if an object appeared
near a type boundary in the delta search window. This has caused JGit
to produce larger pack files than other implementations of the packing
algorithm.
Delta search works by pushing prior objects into a search window, an
ordered list of objects to attempt to delta compress the next object
against. (The window size is bounded, avoiding O(N^2) behavior.)
For implementation reasons multiple object types can appear in the
input list, and in the window. PackWriter commonly passes both trees and
blobs in the input list handed to the DeltaWindow algorithm. The pack
file format requires an object to only delta compress against the same
type, so the DeltaWindow algorithm must stop doing comparisons if a
blob would be compared to a tree.
Because the input list is sorted by object type and the window holds
recently considered prior objects, once a wrong type is discovered in
the window the search algorithm stops and uses the current result.
Unfortunately the termination condition was discarding any found
delta by setting deltaBase and deltaBuf to null when it was trying
to break the window search.
When this bug occurs, the state of the DeltaWindow looks like this:
                                current
                                   |
                                  \ /
  input list:  tree0 tree1 blob1 blob2
  window:      blob1 tree1 tree0
                / \
                 |
             res.prev
As the loop iterates to the right across the window, it first finds
that blob1 is a suitable delta base for blob2, and temporarily holds
this in the bestDelta/deltaBuf fields. It then considers tree1, but
tree1 has the wrong type (blob != tree), so the window loop must give
up and fall through the remaining code.
Moving the condition up, and discarding the window contents instead,
allows the bestDelta/deltaBuf to be kept, letting the final file delta
compress blob2 against blob1.
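As a rough illustration of the corrected control flow, here is a toy
model with made-up types (not the actual DeltaWindow code): the scan
stops at a type boundary while keeping the best candidate found so far.

  import java.util.ArrayDeque;
  import java.util.Deque;

  class WindowScanSketch {
    static class Obj {
      final int type;
      final String name;
      Obj(int type, String name) { this.type = type; this.name = name; }
    }

    // Scan prior objects (most recent first) for a delta base of the same
    // type as current. Hitting another type ends the search but, unlike
    // the buggy code, keeps the best candidate found so far.
    static Obj findBase(Deque<Obj> window, Obj current) {
      Obj best = null;
      for (Obj prior : window) {
        if (prior.type != current.type) {
          window.clear(); // drop wrong-typed entries for later objects
          break;          // the bug also reset best to null here
        }
        best = prior;     // stand-in for "try delta, keep the smallest"
      }
      return best;
    }

    public static void main(String[] args) {
      Deque<Obj> window = new ArrayDeque<>();
      window.add(new Obj(3, "blob1")); // most recently pushed prior object
      window.add(new Obj(2, "tree1"));
      window.add(new Obj(2, "tree0"));
      System.out.println(findBase(window, new Obj(3, "blob2")).name); // blob1
    }
  }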
The impact of this bug (and its fix) on real world repositories is
likely minimal. The boundary from blob to tree happens approximately
once in the search, as the input list is sorted by type. Only the
first window size worth of blobs (e.g. 10 or 250) were failing to
produce a delta in the final file.
This bug fix does produce significantly different results for small
test repositories created in the unit test suite, such as when a pack
may contain 6 objects (2 commits, 2 trees, 2 blobs). Packing test
cases can now better sample different output pack file sizes depending
on delta compression and object reuse flags in PackConfig.
Change-Id: Ibec09398d0305d4dbc0c66fce1daaf38eb71148f
Disabling garbage pack coalescing when garbageTtl > 0 can result in a
lot of garbage packs if they are created within the garbageTtl time.
To avoid a large number of garbage packs, re-introduce garbage pack
coalescing for packs that are created within a single calendar day
(when the garbageTtl is more than one day) or within one third of the
garbageTtl.
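A rough sketch of that rule as described (names and boundaries are
assumptions for illustration, not the actual DFS garbage collector
code):

  import java.time.Instant;
  import java.time.ZoneOffset;

  class GarbageCoalesceSketch {
    static final long DAY_MILLIS = 24L * 60 * 60 * 1000;

    // Two garbage packs may be coalesced when they were created on the
    // same calendar day (if garbageTtl exceeds one day) or within one
    // third of garbageTtl of each other.
    static boolean canCoalesce(Instant a, Instant b, long garbageTtlMillis) {
      if (garbageTtlMillis > DAY_MILLIS) {
        return a.atZone(ZoneOffset.UTC).toLocalDate()
            .equals(b.atZone(ZoneOffset.UTC).toLocalDate());
      }
      long gap = Math.abs(a.toEpochMilli() - b.toEpochMilli());
      return gap <= garbageTtlMillis / 3;
    }
  }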
Change-Id: If969716aeb55fb4fd0ff71d75f41a07638cd5a69
Signed-off-by: Thirumala Reddy Mutchukota <thirumala@google.com>
In order to limit the number of directories we check for emptiness, only
consider fanout directories which contained unreferenced loose objects
we deleted in the same gc run.
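The idea, as a small sketch with assumed names (not the actual GC
code): remember which fanout directories lost loose objects, and check
only those for emptiness afterwards.

  import java.io.File;
  import java.util.HashSet;
  import java.util.Set;

  class PruneEmptyFanoutSketch {
    private final Set<File> touched = new HashSet<>();

    // Called for each unreferenced loose object deleted in this gc run.
    void onLooseObjectDeleted(File looseObject) {
      touched.add(looseObject.getParentFile()); // its fanout directory
    }

    // Only the fanout directories we actually deleted from are examined.
    void deleteEmptyFanoutDirs() {
      for (File dir : touched) {
        String[] entries = dir.list();
        if (entries != null && entries.length == 0)
          dir.delete();
      }
    }
  }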
Change-Id: Idf8d512867ee1c8ed40bd55752122ce83a98ffa2
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Cover the case where the exception is wrapped up as a
cause, e.g., PackIndex#open(File).
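For illustration, a generic helper of the kind this change implies (an
assumption, not the actual JGit code), which inspects the cause chain
as well as the exception itself:

  class CauseChainSketch {
    static boolean isCausedBy(Throwable t, Class<? extends Throwable> type) {
      for (Throwable c = t; c != null; c = c.getCause()) {
        if (type.isInstance(c))
          return true;
      }
      return false;
    }
  }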
Change-Id: I0df5b1e9c2ff886bdd84dee3658b6a50866699d1
Signed-off-by: Hongkai Liu <hongkai.liu@ericsson.com>
Sometimes, it is necessary to cancel a garbage collection operation.
When GC is called using the standalone executable, i.e., from a command
line, Control-Cing the process does the trick. When calling GC
programmatically, though, there is no mechanism to do it.
Add checks in the GC process so that a custom cancellable progress
monitor can be passed in order to cancel the operation at specific
points. In this case, the calling process sets the cancel flag in the
progress monitor and the GC process throws an exception that can
be caught and handled by the caller accordingly.
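For example, using the porcelain API (a sketch; the exact cancellation
points and the exception thrown are up to the GC implementation):

  import java.util.concurrent.atomic.AtomicBoolean;
  import org.eclipse.jgit.api.Git;
  import org.eclipse.jgit.lib.EmptyProgressMonitor;

  class CancellableGcSketch {
    static void gcWithCancel(Git git, AtomicBoolean cancelFlag) throws Exception {
      git.gc()
          .setProgressMonitor(new EmptyProgressMonitor() {
            @Override
            public boolean isCancelled() {
              return cancelFlag.get(); // GC polls this at its checkpoints
            }
          })
          .call(); // aborts with an exception once cancellation is seen
    }
  }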
Change-Id: Ieaecf3dbdf244539ec734939c065735f6785aacf
Signed-off-by: Hector Caballero <hector.caballero@ericsson.com>
An orphan file is either a bitmap or an idx file in the pack folder
whose corresponding pack file is missing.
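For example (a toy check, with the naming convention assumed from
standard pack files):

  import java.io.File;

  class OrphanFileSketch {
    // pack-<id>.idx or pack-<id>.bitmap without a pack-<id>.pack companion.
    static boolean isOrphan(File packDir, String fileName) {
      int dot = fileName.lastIndexOf('.');
      if (dot < 0)
        return false;
      String base = fileName.substring(0, dot);
      String ext = fileName.substring(dot + 1);
      if (!ext.equals("idx") && !ext.equals("bitmap"))
        return false;
      return !new File(packDir, base + ".pack").exists();
    }
  }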
Change-Id: I3c4cb1f7aa99dd7b398bdb8d513f528d7761edff
Signed-off-by: Hongkai Liu <hongkai.liu@ericsson.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Using try-with-resources means that close() will automatically be
called on the Repository object. However, according to the javadoc
of Git#close():
If the repository was opened by a static factory method in this class,
then this method calls Repository#close() on the underlying repository
instance.
This means that Repository#close() is called twice, by Git.close()
and in the outer try-with-resources, leading to a corrupt use count.
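A minimal illustration of the safe pattern (the repository path is made
up): let the Git instance own the Repository it opened, and close only
once.

  import java.io.File;
  import org.eclipse.jgit.api.Git;

  class SingleCloseSketch {
    public static void main(String[] args) throws Exception {
      try (Git git = Git.open(new File("/tmp/example-repo"))) {
        System.out.println(git.status().call().isClean());
      } // Git#close() closes the underlying Repository exactly once
    }
  }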
Change-Id: I37ba517eb2cc67d1cd36813598772c70208d0bc9
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Otherwise these methods may produce unexpected results if used for
strings that are intended to be interpreted locale independently.
Examples are programming language identifiers, protocol keys, and HTML
tags. For instance, "TITLE".toLowerCase() in a Turkish locale returns
"t\u0131tle", where '\u0131' is the LATIN SMALL LETTER DOTLESS I
character.
See
https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#toLowerCase--
http://blog.thetaphi.de/2012/07/default-locales-default-charsets-and.html
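A quick demonstration of the pitfall and the locale-independent
alternative:

  import java.util.Locale;

  class LocaleLowerCaseDemo {
    public static void main(String[] args) {
      Locale.setDefault(new Locale("tr", "TR"));           // Turkish
      System.out.println("TITLE".toLowerCase());            // "t\u0131tle"
      System.out.println("TITLE".toLowerCase(Locale.ROOT)); // "title"
    }
  }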
Bug: 511238
Change-Id: Id8d8f37d84d62239c918b81f8d883ed798d87656
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Logging the repository name makes it easier to track down what is
incorrectly closing a repository.
Change-Id: I42a8bdf766c0e67f100adbf76d9616584e367ac2
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
The Compacter and Garbage Collector will record the estimated size of
the compact, gc, or garbage packs they are about to create. Clients can
use this information to make a better call on how to actually store the
pack, based on the approximate expected size.
Added a new protected method DfsObjDatabase.newPack(PackSource
packSource, long estimatedPackSize), so that the clients can override
this method to make use of the estimatedPackSize while creating a new
PackDescription object. The default implementation of this method is
equivalent to
newPack(packSource).setEstimatedPackSize(estimatedPackSize). I didn't
make it abstract because that would force all the existing subclasses
of DfsObjDatabase to implement this method. Due to this default
implementation, the estimatedPackSize is added to DfsPackDescription
using a setter instead of a constructor parameter (even though a
constructor parameter would be a better choice, as this value is set
only during object creation).
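A toy model of the pattern (simplified types, not the real
DfsObjDatabase API): the two-argument overload defaults to delegating,
and storage backends may override it to place the pack according to its
expected size.

  class PackDbSketch {
    static class PackDescription {
      long estimatedPackSize;

      PackDescription setEstimatedPackSize(long size) {
        estimatedPackSize = size;
        return this;
      }
    }

    PackDescription newPack(String packSource) {
      return new PackDescription();
    }

    // Default behaviour as described above; subclasses can override to
    // choose storage based on estimatedPackSize.
    PackDescription newPack(String packSource, long estimatedPackSize) {
      return newPack(packSource).setEstimatedPackSize(estimatedPackSize);
    }
  }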
Change-Id: Iade1122633ea774c2e842178a6a6cbb4a57b598b
Signed-off-by: Thirumala Reddy Mutchukota <thirumala@google.com>
Adds the param information to the private method. These tags were
generated via the IDE tooltip to resolve the compile errors.
Bug: 511043
Change-Id: I9ba551978eab750326d1a067b296e3ae93925871
Signed-off-by: Lars Vogel <Lars.Vogel@vogella.com>
These packages don't use @since tags because they are not part of the
stable public API. Some @since tags snuck in, though. Remove them to
make the convention easier to find for new contributors and the
expectations clearer for users.
Change-Id: I6c17d3cfc93657f1b33cf5c5708f2b1c712b0d31
An unreferenced object might appear in a pack. This could only happen
because it was previously referenced, and then later that reference
was removed. When we gc, we copy the referenced objects into a new
pack, and delete the old pack. This would remove the unreferenced
object. Now we first create a loose object from any unreferenced
object in the doomed pack. This kicks off the two-week grace period
for that object, after which it will be collected if it's not
referenced.
This matches the behavior of regular git.
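A sketch of that step under simplified assumptions (object ids as
strings, a hypothetical Loosener callback; not the actual GC code):

  import java.io.IOException;
  import java.util.Set;

  class LoosenBeforeDeleteSketch {
    interface Loosener {
      void writeLoose(String objectId) throws IOException; // hypothetical
    }

    // Before deleting the doomed pack, write each object that did not
    // make it into the new packs out as a loose object, so the normal
    // expiry grace period applies to it.
    static void loosen(Iterable<String> doomedPackObjects,
        Set<String> objectsInNewPacks, Loosener out) throws IOException {
      for (String id : doomedPackObjects) {
        if (!objectsInNewPacks.contains(id))
          out.writeLoose(id);
      }
    }
  }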
Change-Id: I59539aca1d0d83622c41aa9bfbdd72fa868ee9fb
Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: Jonathan Nieder <jrn@google.com>
It can be considered a programming error to create a Future<T>
but do nothing with that object. There is an async computation
happening and without holding and checking the Future for done
or exception the caller has no idea if it has completed.
FS doesn't really care about these StreamGobblers finishing.
Instead use Runnable with execute(Runnable), which doesn't
return a Future.
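A minimal illustration of the difference (not the FS code itself):

  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import java.util.concurrent.Future;

  class FireAndForgetSketch {
    public static void main(String[] args) {
      ExecutorService pool = Executors.newCachedThreadPool();
      Runnable gobbler = () -> { /* drain a child process stream */ };

      Future<?> ignored = pool.submit(gobbler); // Future nobody ever checks
      pool.execute(gobbler);                    // fire and forget, no Future

      pool.shutdown();
    }
  }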
Change-Id: I93b66d1f6c869e66be5c1169d8edafe781e601f6
The new --preserve-oldpacks option moves old pack files into the
preserved subdirectory instead of deleting them after repacking.
The new --prune-preserved option prunes old pack files from the
preserved subdirectory after repacking, but before potentially
moving the latest old packfiles to this subdirectory.
These options are designed to prevent stale file handle exceptions
during git operations, which can happen for users of NFS-hosted repos
when repacking is done on them. The strategy is to preserve old pack
files around until the next repack, in the hope that they will have
become unreferenced by then and will not cause any exceptions for
running processes when they are finally deleted (pruned).
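A rough sketch of the file handling involved (paths and names are
assumptions for illustration, not the JGit implementation):

  import java.io.IOException;
  import java.nio.file.DirectoryStream;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.nio.file.StandardCopyOption;

  class PreserveOldPacksSketch {
    // --preserve-oldpacks: move instead of delete, so readers holding the
    // old pack open over NFS do not hit stale file handles.
    static void preserve(Path packDir, Path oldPackFile) throws IOException {
      Path preserved = packDir.resolve("preserved");
      Files.createDirectories(preserved);
      Files.move(oldPackFile, preserved.resolve(oldPackFile.getFileName()),
          StandardCopyOption.REPLACE_EXISTING);
    }

    // --prune-preserved: drop what the previous repack preserved, before
    // preserving the current round of old packs.
    static void prunePreserved(Path packDir) throws IOException {
      Path preserved = packDir.resolve("preserved");
      if (!Files.isDirectory(preserved))
        return;
      try (DirectoryStream<Path> files = Files.newDirectoryStream(preserved)) {
        for (Path f : files)
          Files.delete(f);
      }
    }
  }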
Change-Id: If3f729f0d9ce920ee2c3e6acdde46f2068be61d2
Signed-off-by: James Melvin <jmelvin@codeaurora.org>
The initial implementation only builds the packages consumed by
Gerrit Code Review.
Test build and execution is not implemented.
We prefer to consume the maven_jar custom rule from the bazlets repository,
for the same reasons as in the Gerrit project:
* Caching artifacts across different clones and projects
* Exposing source classifiers and neverlink artifact
TEST PLAN:
$ bazel build :all
$ unzip -t bazel-genfiles/all.zip
Archive: bazel-genfiles/all.zip
testing: libjgit-archive.jar OK
testing: libjgit-servlet.jar OK
testing: libjgit.jar OK
testing: libjunit.jar OK
No errors detected in compressed data of bazel-genfiles/all.zip.
Change-Id: Ia837ce95d9829fe2515f37b7a04a71a4598672a0
Signed-off-by: David Ostrovsky <david@ostrovsky.org>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Generic normalization method for a possible invalid branch name.
The method compresses dividers between spaces, then replaces spaces
and non-word characters with underscores.
This method is needed in preparation for subsequent EGit changes.
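A toy version of the described normalization (an approximation for
illustration only, not the actual method):

  class BranchNameSketch {
    static String normalize(String name) {
      return name.trim()
          .replaceAll("\\s+", " ")    // compress runs of dividers
          .replaceAll("[^\\w]", "_"); // spaces and non-word chars -> '_'
    }

    public static void main(String[] args) {
      System.out.println(normalize("Bug 1234:  crash  on save"));
      // -> Bug_1234__crash_on_save
    }
  }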
Bug: 509878
Change-Id: Ic0d12f098f90f912a45bcc5693d6accf751d4e58
Signed-off-by: Wim Jongman <wim.jongman@remainsoftware.com>
If there are untracked changes, apply only the untracked tree
after a successful merge. The merge tree from merging untracked
with HEAD would also contain files already reset before (changes
in tracked files) and try to reset those again, leading to false
checkout conflicts.
Bug: 505804
Change-Id: Iaced4d277623334d11e3d1cca5969590d7c5093e
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
ObjectDirectory.getShallowCommits should throw an IOException
instead of an InvalidArgumentException if invalid SHAs are present
in .git/shallow (as this file is usually edited by a human).
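A sketch of the intended behavior (simplified, not the ObjectDirectory
implementation):

  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.util.ArrayList;
  import java.util.List;
  import org.eclipse.jgit.lib.ObjectId;

  class ReadShallowSketch {
    static List<ObjectId> readShallow(Path shallowFile) throws IOException {
      List<ObjectId> ids = new ArrayList<>();
      for (String line : Files.readAllLines(shallowFile)) {
        if (!ObjectId.isId(line)) // hand-edited file: report as I/O problem
          throw new IOException("Bad shallow record: " + line);
        ids.add(ObjectId.fromString(line));
      }
      return ids;
    }
  }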
Change-Id: Ia3a39d38f7aec4282109c7698438f0795fbec905
Signed-off-by: Marc Strapetz <marc.strapetz@syntevo.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
The 12-byte `PACK...` header is written in PackWriter before reading
CachedPack files. In DfsPackFile#copyPackBypassCache, the header was not
skipped when the first block was not in the cache.
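The idea of the fix, as a simplified sketch (not the DfsPackFile code):

  import java.io.IOException;
  import java.io.OutputStream;

  class SkipPackHeaderSketch {
    static final int PACK_HEADER_LEN = 12;

    // PackWriter already wrote its own "PACK..." header, so the first
    // block read from storage must be copied without the source header.
    static void copyFirstBlock(byte[] firstBlock, OutputStream out)
        throws IOException {
      out.write(firstBlock, PACK_HEADER_LEN, firstBlock.length - PACK_HEADER_LEN);
    }
  }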
Change-Id: Ibbe2e564d36b79922a936657f286addb1044d237
Signed-off-by: Zhen Chen <czhen@google.com>
Add a new variation of TreeFilter in order to detect LFS pointer files in
the repository.
Additionally, update LfsPointer to support the legacy version URL [1] as
described in [2], and to allow arbitrary fields in the pointer file.
[1] https://hawser.github.com/spec/v1
[2] https://github.com/git-lfs/git-lfs/blob/master/docs/spec.md
Change-Id: I621eb058619fb1b78888a54c4b60bb110a722fc3
Signed-off-by: Dariusz Luksza <dariusz@luksza.org>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This fixes a nasty performance issue for repositories that have many
objects referenced through refs/tags/, but not in refs/heads/.
Situations like this can arise when a project has made releases like
refs/tags/v1.0, and then decides to orphan history and start over for
version 2. The v1.0 objects are not reachable from master anymore,
but are still live due to the v1.0 tag.
When tags are packed in the GC_OTHER pack, bitmaps are not able to
cover the repository's contents. This may cause very slow counting
times during git clone, as the server must enumerate the ancient
history under refs/tags/ to respond to the client.
Clients by default always ask for all tags when asking for all heads
during clone. This has been true since git-core commit 8434c2f1afedb
(Apr 27 2008), when clone was converted to a builtin. Including tags
in the main GC pack should still allow servers to benefit from the
fast full pack reuse path when serving a clone to a client.
Change-Id: I22e29517b5bc6fa3d6b19a19f13bef0c68afdca3
Previously it was looking for a keep file with the name of a pack file
(extension included) appended with '.keep'. However, the keep file
name should be the pack file name with a '.keep' extension.
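For example (a sketch of the corrected naming only):

  import java.io.File;

  class KeepFileNameSketch {
    // pack-1234.pack -> pack-1234.keep (not pack-1234.pack.keep)
    static File keepFileFor(File packFile) {
      String name = packFile.getName();
      String base = name.substring(0, name.lastIndexOf('.'));
      return new File(packFile.getParentFile(), base + ".keep");
    }
  }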
Change-Id: I9dc4c7c393ae20aefa0b9507df8df83610ce4d42
Signed-off-by: James Melvin <jmelvin@codeaurora.org>
We only need the tree id to add it to a TreeWalk, so change tree's type
to AnyObjectId.
Bug: 509385
Change-Id: I98dd5fef15cd173fe1fd84273f0f48e64e12e608
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>