createBareRepository adds the created repo to the list of repos to be
closed in the superclass's tearDown. Wrapping it in try-with-resources
causes it to be closed too many times, resulting in a corrupt use
count.
Change-Id: I4c70630bf6008544324dda453deb141f4f89472c
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
The repositories get added to the "toClose" set by createBareRepository,
and are then closed in the superclass's tearDown method.
Explicitly closing them in this test class's teardown causes the use
count to go negative when subsequently closed again by the superclass.
Change-Id: Idcbb16b4cf4bf0640d7e4ac15d1926d8a27c1251
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
These methods add the created Repository into "toClose", and they are
then closed by LocalDiskRepositoryTestCase's tearDown method.
Using them in try-with-resources causes the repositories to first be closed in
the test method, and then again in tearDown, which results in the use
count going negative and a log message on the console.
While this is not a serious problem, having so many false positives
in the logs will potentially drown out real cases of Repository being
closed too many times.
Change-Id: Ib374445e101dc11cb840957b8b19ee1caf777392
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
The repositories are already closed in the superclass teardown due
to being added to the "toClose" set.
Change-Id: I768ba8a02fc585907687caf37e2e283434016c04
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Otherwise these methods may produce unexpected results if used for
strings that are intended to be interpreted locale independently.
Examples are programming language identifiers, protocol keys, and HTML
tags. For instance, "TITLE".toLowerCase() in a Turkish locale returns
"t\u0131tle", where '\u0131' is the LATIN SMALL LETTER DOTLESS I
character.
See
https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#toLowerCase--
and
http://blog.thetaphi.de/2012/07/default-locales-default-charsets-and.html
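A minimal sketch of the locale-independent alternative, using only standard
java.lang.String and java.util.Locale:

  import java.util.Locale;

  String tag = "TITLE";
  // Locale-independent lowercasing for identifiers, protocol keys, HTML tags:
  String lower = tag.toLowerCase(Locale.ROOT);               // always "title"
  // Whereas under a Turkish locale the dotless i appears:
  String turkish = tag.toLowerCase(new Locale("tr", "TR"));  // "t\u0131tle"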
Bug: 511238
Change-Id: Id8d8f37d84d62239c918b81f8d883ed798d87656
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
The Compacter and Garbage Collector will record the estimated size of
the compact, gc, or garbage packs that are about to be created. Clients
can use this information to make a better decision on how to actually
store the pack, based on its approximate expected size.
Added a new protected method DfsObjDatabase.newPack(PackSource
packSource, long estimatedPackSize), so that the clients can override
this method to make use of the estimatedPackSize while creating a new
PackDescription object. The default implementation of this method is
equivalent to
newPack(packSource).setEstimatedPackSize(estimatedPackSize). I didn't
make it abstract because that would force all the existing subclasses
of DfsObjDatabase to implement this method. Because of this default
implementation, the estimatedPackSize is added to DfsPackDescription
using a setter instead of a constructor parameter (even though a
constructor parameter would be a better choice, as this value is set
only during object creation).
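As a reference point, the default implementation described above amounts to
roughly the following sketch (the throws clause is assumed to match the
existing newPack(PackSource)):

  protected DfsPackDescription newPack(PackSource packSource,
      long estimatedPackSize) throws IOException {
    // Delegate to the existing factory method, then record the estimate.
    DfsPackDescription pack = newPack(packSource);
    pack.setEstimatedPackSize(estimatedPackSize);
    return pack;
  }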
Change-Id: Iade1122633ea774c2e842178a6a6cbb4a57b598b
Signed-off-by: Thirumala Reddy Mutchukota <thirumala@google.com>
An unreferenced object might appear in a pack. This could only happen
because it was previously referenced, and then later that reference
was removed. When we gc, we copy the referenced objects into a new
pack, and delete the old pack. This would remove the unreferenced
object. Now we first create a loose object from any unreferenced
object in the doomed pack. This kicks off the two-week grace period
for that object, after which it will be collected if it's not
referenced.
This matches the behavior of regular git.
Change-Id: I59539aca1d0d83622c41aa9bfbdd72fa868ee9fb
Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: Jonathan Nieder <jrn@google.com>
The new --preserve-oldpacks option moves old pack files into the
preserved subdirectory instead of deleting them after repacking.
The new --prune-preserved option prunes old pack files from the
preserved subdirectory after repacking, but before potentially
moving the latest old packfiles to this subdirectory.
These options are designed to prevent stale file handle exceptions
during git operations, which can happen for users of repositories on
NFS when repacking is done on them. The strategy is to keep old pack
files around until the next repack, in the hope that they will have
become unreferenced by then and will not cause any exceptions for
running processes when they are finally deleted (pruned).
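A sketch of driving this through the file-based GC API; the two setter names
below are assumptions mirroring the new option names, not confirmed API:

  // 'repository' is a FileRepository; setter names are illustrative only.
  GC gc = new GC(repository);
  gc.setPreserveOldPacks(true);   // assumed counterpart of --preserve-oldpacks
  gc.setPrunePreserved(true);     // assumed counterpart of --prune-preserved
  gc.gc();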
Change-Id: If3f729f0d9ce920ee2c3e6acdde46f2068be61d2
Signed-off-by: James Melvin <jmelvin@codeaurora.org>
Generic normalization method for a possibly invalid branch name.
The method compresses dividers between spaces, then replaces spaces
and non-word characters with underscores.
This method is needed in preparation for subsequent EGit changes.
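An illustrative sketch of the described normalization; the regular
expressions here approximate the behavior and are not the exact
implementation:

  static String normalizeBranchName(String name) {
    if (name == null || name.isEmpty())
      return "";
    String result = name.trim();
    // Compress dividers surrounded by spaces, e.g. "a - b" -> "a b".
    result = result.replaceAll("\\s+[-_:]+\\s+", " ");
    // Replace remaining spaces and non-word characters with underscores.
    result = result.replaceAll("\\W+", "_");
    // e.g. "Bug 12345 - remove foo" -> "Bug_12345_remove_foo"
    return result;
  }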
Bug: 509878
Change-Id: Ic0d12f098f90f912a45bcc5693d6accf751d4e58
Signed-off-by: Wim Jongman <wim.jongman@remainsoftware.com>
If there are untracked changes, apply only the untracked tree
after a successful merge. The merge tree from merging untracked
with HEAD would also contain files already reset before (changes
in tracked files) and try to reset those again, leading to false
checkout conflicts.
Bug: 505804
Change-Id: Iaced4d277623334d11e3d1cca5969590d7c5093e
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
ObjectDirectory.getShallowCommits should throw an IOException
instead of an InvalidArgumentException if invalid SHAs are present
in .git/shallow (as this file is usually edited by a human).
Change-Id: Ia3a39d38f7aec4282109c7698438f0795fbec905
Signed-off-by: Marc Strapetz <marc.strapetz@syntevo.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
This fixes a nasty performance issue for repositories that have many
objects referenced through refs/tags/, but not in refs/heads/.
Situations like this can arise when a project has made releases like
refs/tags/v1.0, and then decides to orphan history and start over for
version 2. The v1.0 objects are not reachable from master anymore,
but are still live due to the v1.0 tag.
When tags are packed in the GC_OTHER pack, bitmaps are not able to
cover the repository's contents. This may cause very slow counting
times during git clone, as the server must enumerate the ancient
history under refs/tags/ to respond to the client.
Clients by default always ask for all tags when asking for all heads
during clone. This has been true since git-core commit 8434c2f1afedb
(Apr 27 2008), when clone was converted to a builtin. Including tags
in the main GC pack should still allow servers to benefit from the
fast full pack reuse path when serving a clone to a client.
Change-Id: I22e29517b5bc6fa3d6b19a19f13bef0c68afdca3
Previously it was looking for a keep file with the name of a pack file
(extension included) appended with '.keep'. However, the keep file
name should be the pack file name with a '.keep' extension.
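For example, a sketch of the corrected name derivation:

  // Given a pack file "pack-abc123.pack":
  String packName = "pack-abc123.pack";
  String base = packName.substring(0, packName.lastIndexOf('.'));
  String wrong = packName + ".keep"; // previously: "pack-abc123.pack.keep"
  String right = base + ".keep";     // correct:    "pack-abc123.keep"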
Change-Id: I9dc4c7c393ae20aefa0b9507df8df83610ce4d42
Signed-off-by: James Melvin <jmelvin@codeaurora.org>
FileSnapshot.isModified may have reported a file to be clean although it
was actually dirty.
Imagine you have a FileSnapshot on file f. lastModified and lastRead are
both t0. Now time is t1 and you
1) modify the file
2) update the FileSnapshot of the file (lastModified=t1, lastRead=t1)
3) modify the file again
4) wait 3 seconds
5) ask the FileSnapshot whether the file is dirty or not. It erroneously
answers that it's clean.
Any file which has been modified longer than 2.5 seconds ago was
reported to be clean. As the test shows that's not always correct.
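The scenario above as a test-style sketch, using
org.eclipse.jgit.internal.storage.file.FileSnapshot and plain java.nio file
writes; 'trash' stands for any temporary test directory:

  File f = new File(trash, "file");
  Files.write(f.toPath(), "a".getBytes());       // 1) modify the file
  FileSnapshot snapshot = FileSnapshot.save(f);  // 2) lastModified = lastRead = t1
  Files.write(f.toPath(), "b".getBytes());       // 3) modify the file again
  Thread.sleep(3000);                            // 4) wait 3 seconds
  assertTrue(snapshot.isModified(f));            // 5) erroneously false before the fix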
The real-world problem fixed by this change is the following:
* A gerrit server using JGit to serve git repositories is processing
fetch requests while simultaneously a native git garbage collection
runs on the repo.
* At time t1 native git writes temporary files in the pack folder
setting the mtime of the pack folder to t1.
* A fetch request causes JGit to search for new packfiles and JGit
remembers this scan in a FileSnapshot on the pack folder. Since the gc
is not finished, JGit doesn't see any new packfiles.
* The fetch is processed and the gc ends while the filesystem time is
still t1. GC writes a new packfile and deletes the old packfile.
* 3 seconds later another request arrives. JGit does not yet know about
the new packfile but is also not rescanning the pack folder, because it
cached that the last scan happened at time t1 and the pack folder's
mtime is also t1. Now JGit will not be able to resolve any object
contained in this new pack. This behavior may persist if objects
referenced by the refs/meta/config branch are affected: Gerrit then
can't read the permissions stored in the refs/meta/config branch anymore
and will not allow any pushes. The pack folder will not change its
mtime, and therefore no rescan will take place.
Change-Id: I3efd0ccffeb97b01207dc3e7a6b85c6b06928fad
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Fix JGit's merge-base calculation in case of inconsistent commit times.
JGit was potentially failing to compute correct merge-bases when the
commit times were inconsistent (a parent commit was younger than a
child commit). The code in MergeBaseGenerator was aware of the fact that
sometimes the discovery of a merge base x can occur after the parents of
x have been seen (see comment in #carryOntoOne()). But in the light of
inconsistent commit times it was possible that these parents of a
merge-base had already been returned as a merge-base.
This commit fixes the bug by buffering all commits generated by
MergeBaseGenerator. It is expected that this buffer will be small
because the number of merge-bases will be small. Additionally a new
flag is used to mark the ancestors of merge-bases. This allows filtering
out the unwanted commits.
Bug: 507584
Change-Id: I9cc140b784c3231b972bd2c3de61a789365237ab
Use the Oxygen M3 Orbit repository, which provides the bundles built using
the new orbit-recipe based build.
CQ: 11658
Change-Id: I7f3dcc966732b32830c75d5daa55383bd028d182
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Java 8 fixed the silent-flush-during-close issue:
FilterOutputStream (the base class of BufferedOutputStream)
now uses try-with-resources to close the stream, giving
behavior that matches what JGit's SafeBufferedOutputStream
was doing:
try {
flush();
} finally {
out.close();
}
With Java 8 as the minimum required version to run JGit
it is no longer necessary to override close() or have
this class. Deprecate the class, and use the JRE's version
of close.
Change-Id: Ic0584c140010278dbe4062df2e71be5df9a797b3
This method pair allows the caller to read and modify the description
file that is traditionally used by gitweb and cgit when rendering a
repository on the web.
Gerrit Code Review has offered this feature for years as part of
its GitRepositoryManager interface, but it's fundamentally a feature
of JGit and its Repository abstraction.
git-core typically initializes a repository with a default value
inside the description file. During getDescription() this string
is converted to null as it is never a useful description.
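A usage sketch; the accessor names below are an assumption of this example:

  // Reads and, if unset, writes the gitweb/cgit description of a repository.
  static void ensureDescription(Repository repo) throws IOException {
    String desc = repo.getGitwebDescription(); // git-core's default text comes back as null
    if (desc == null)
      repo.setGitwebDescription("My project");
  }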
Change-Id: I0a457026c74e9c73ea27e6f070d5fbaca3439be5
If a value is used which isn’t a power of 2, there is a high chance
of java.lang.ArrayIndexOutOfBoundsException and
org.eclipse.jgit.errors.CorruptObjectException due to a mismatching
assumption for the DfsBlockCache#blockSizeShift parameter.
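A configuration sketch using DfsBlockCacheConfig:

  DfsBlockCacheConfig cfg = new DfsBlockCacheConfig();
  cfg.setBlockSize(64 * 1024);    // must be a power of 2, e.g. 64 KiB
  // cfg.setBlockSize(60 * 1024); // not a power of 2: risks the failures above
  DfsBlockCache.reconfigure(cfg);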
Change-Id: Ib348b3704edf10b5f93a3ffab4fa6f09cbbae231
Signed-off-by: Philipp Marx <smigfu@googlemail.com>
* GC.tooManyLooseObjects() always responded true since the loop failed
to advance the iterator, so the counter was incremented until the
threshold was exceeded (a corrected loop is sketched after this list).
* Also fix the loop exit criterion, which was off by one.
* Add some tests.
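A sketch of a corrected counting loop; illustrative only, not the exact JGit
code, and 'looseObjectDir' and 'threshold' are placeholders:

  int n = 0;
  try (DirectoryStream<Path> entries = Files.newDirectoryStream(looseObjectDir)) {
    Iterator<Path> it = entries.iterator();
    while (n < threshold && it.hasNext()) {
      it.next();   // the broken loop never advanced the iterator here
      n++;
    }
  }
  boolean tooMany = n >= threshold;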
Change-Id: I70976dfaa026efbcf3c46bd45941f37277a18e04
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Previously, the streamFileThreshold, the threshold at which a file
would be streamed rather than loaded entirely into memory, was only
configurable on a global basis.
This commit makes this threshold configurable on a per-loader basis.
Bug: 490404
Change-Id: I492c18c3155dbf56eedda9044a61d76120fd75f9
Signed-off-by: Kevin Corcoran <kevin.corcoran@puppetlabs.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Git barfs on these (and they don't make any sense), so we certainly
shouldn't write them.
Change-Id: I3faf8554a05f0fd147be2e63fbe55987d3f88099
Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Upgrade to match the version used on Gerrit's master branch.
Requires a couple of modifications to make the tests work:
- Remove source_under_test parameters from java_test calls.
- Add vm_args with explicit setting of tmpdir location for http
tests. This is needed due to upstream changes in temporary
directory handling [1].
[1] https://github.com/facebook/buck/issues/946
Change-Id: I5d5dd5edc335d44b118e8587f69ba89b83fc7fbb
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
If the repository close method was called twice (or more) for one open,
the usage count became negative and the repository was never evicted
from the cache, because the method checking whether the repository had
expired did not consider a negative usage count.
Change-Id: I18a80c415c54c37d1b9def2b311ff2d0afa455ca
Signed-off-by: Hugo Arès <hugo.ares@ericsson.com>
Symlinks on MacOS are written as UTF-8 NFD, but
readSymbolicLink().toString() converts to NFC, potentially with fewer
bytes. This occurs in particular if the link target has non-ASCII
characters for which the NFC and NFD encodings differ, and may lead
to an "EOFException: Short read of block".
This causes all kinds of weird effects in EGit, ranging from failing
rebases (which report the exception to the user) to EGit decorations in
the navigator silently disappearing (and never coming back).
* Rename readContentAsNormalizedString() to readSymlinkTarget() as it's
called only for symlinks. Also make it protected.
* Fix by allowing the read to succeed even if fewer than the expected
number of bytes are returned by the entry's input stream.
* Override in FileTreeIterator to use fs.readSymlink() directly.
Includes a new MacOS-only test.
Change-Id: I264c5972d67b1cbb1ed690580f5706e671b9affd
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
CheckoutCommand was not returning updated and removed files in case of
an overall status of NONDELETED. That status occurs especially on the
Windows platform, when checkout wants to delete files but the filesystem
doesn't allow it. The situation is rarer on Linux/macOS because open
file handles don't prevent a deletion attempt, so checkout succeeds more
often.
Change-Id: I4828008e58c09bd8f9edaf0f7eda0a79c629fb57
There was a bug when carrying over flags from a merge commit to its
non-first parents. The first parent of a merge commit was handled
differently and correct but the non-first parents are handled by a
recursive algorithm. Flags should be copied from the root merge commit
to parent-2, to grandparent-2, ... up to the limit of STACK_DEPTH==500
parents-levels. But the recursive algorithm was always copying only to
the direct parents of the merge commit and not the grand*-parents.
This is not a problem when commits are handled in strict date order,
because copying only one level is sufficient if children are handled
before parents. But when commits are no longer separated by distinct,
correct dates (e.g. because all commits have the same date), it may
happen that a merge parent is handled before the merge commit, and when
dealing later with the merge commit one has to copy flags down more
than one level.
Bug: 501211
Change-Id: I2d79a7cf1e3bce21a490905ccd9d5e502d7b8421
Change-Id: I26d69fb6d35c6fb120360ef143d1b1f565d4014c
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Adds a JGit built-in implementation of the "git lfs smudge" filter. This
filter should do the same as the one described in [1] besides that it
only supports the local case when the lfs objects are already present in
the media directory. Remote cases where download of LFS objects from an
LFS server is needed will be done in a later commit.
[1] https://github.com/github/git-lfs/blob/master/docs/man/git-lfs-smudge.1.ronn
Change-Id: I8ff661d4edd3667ef7f86f3b4fa33e568eb4c8f4
Adds a JGit built-in implementation of the "git lfs clean" filter. This
filter should do the same as the one described in [1]. But since this
filter is written in Java and can be called by JGit without forking new
processes, it should be much faster.
[1]
https://github.com/github/git-lfs/blob/master/docs/man/git-lfs-clean.1.ronn
Change-Id: If60e387e97870245b4bd765eda6717eb84cffb1d
JGit supports smudge filters defined in repository configuration. The
filters are implemented as external programs filtering content by
accepting the original content (as seen in git's object database) on
stdin and which emit the filtered content on stdout. This content is
then written to the file in the working tree. To run such a filter JGit
has to start an external process and pump data into/from this process.
This commit adds support for built-in smudge filters which are
implemented in Java and which are executed by JGit's main thread. When a
filter is defined in the configuration as
"jgit://builtin/<filterDriverName>/smudge", JGit will look up in a
static map whether a built-in filter is registered under this name. If
found, such a filter is called to do the filtering.
The functionality in this commit requires that a program using JGit
explicitly calls the JGit API to register built-in implementations for
specific smudge filters. In follow-up commits configuration parameters
will be added which trigger such registrations.
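A registration sketch, assuming the registry API is
FilterCommandRegistry#register as provided in JGit's attributes package;
MySmudgeFilter is a hypothetical FilterCommand implementation:

  // Register a built-in smudge filter under the name used in the configuration.
  FilterCommandRegistry.register(
      "jgit://builtin/myFilter/smudge",
      (Repository db, InputStream in, OutputStream out) ->
          new MySmudgeFilter(db, in, out));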
Change-Id: Ia743aa0dbed795e71e5792f35ae55660e0eb3c24
JGit supports clean filters defined in repository configuration. The
filters are implemented as external programs filtering content by
accepting the original content (as seen in the working tree) on stdin
and which emit the filtered content on stdout. To run such a filter JGit
has to start an external process and pump data into/from this process.
This commit adds support for clean filters which are implemented
in Java and which are executed by JGit's main thread. When a filter is
defined in the configuration as
"jgit://builtin/<filterDriverName>/clean", JGit will look up in a
static map whether a filter is registered under this name. If found,
such a filter is called to do the filtering.
The functionality in this commit requires that a program using JGit
explicitly calls the JGit API to register built-in implementations for
specific clean filters. In follow-up commits configuration parameters
will be added which trigger such registrations. Other commits will add
implementations for lfs filters.
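For example, pointing an existing filter driver at a built-in clean filter
through the repository configuration (a sketch; "someFilter" is a
placeholder driver name and 'repo' an open Repository):

  StoredConfig config = repo.getConfig();
  // Use the built-in implementation registered under this name.
  config.setString("filter", "someFilter", "clean",
      "jgit://builtin/someFilter/clean");
  config.save();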
Change-Id: I0344d3c54801c9a46e5a606c5df17e5f2e17b2be
BranchConfig treated this config property as a boolean, but git also
allows the values "preserve" and "interactive". Config property
pull.rebase also allows the same values.
Replace private enum PullCommand.PullRebaseMode by new public enum
BranchConfig.BranchRebaseMode and adapt all uses. Add a new setter to
PullCommand.
Note: PullCommand will treat "interactive" like "true", i.e., as a
non-interactive rebase. Not sure how "interactive" should be handled.
At least it won't balk on it.
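A usage sketch; the setter name on PullCommand and the enum constant names
are assumptions of this example:

  Git git = Git.open(workTree);
  PullResult result = git.pull()
      .setRebase(BranchConfig.BranchRebaseMode.PRESERVE) // maps to "preserve"
      .call();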
Bug: 499482
Change-Id: I7309360f5662b2c2efa1bd8ea6f112c63cf064af
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Change-Id: I6691b454404dd4db3c690ecfc7515de765bc2ef7
Signed-off-by: Martin Goellnitz <m.goellnitz@outlook.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
- enhance FS.readPipe to throw an exception if the external command
fails, to enable the caller to handle the command failure
- reduce log level to warning if system git config does not exist
- improve log message
Bug: 476639
Change-Id: I94ae3caec22150dde81f1ea8e1e665df55290d42
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>