The native implementation of inflate() can set finished to return
true at the same time as it copies the last bytes into the buffer.
Check for finished on each iteration, terminating as soon as libz
knows the stream was completely inflated.
If not finished, it is likely that more input is required before the
next native call can do any useful work. Most invocations pass in a
buffer large enough to store the entire result, so a partial return
from inflate() means more input is needed before it can continue.
Checking right away that needsInput() is true saves a native call
just to discover that no bytes can be inflated without more input.
This should fix a rare infinite loop condition inside of inflation
when an object ends exactly at the end of a block boundary, and
the next block contains only the 20 byte trailing SHA-1.
When the stream is finished each new attempt to inflate() returns
n == 0, as no additional bytes are output. The needsInput() test
tries to add the length of the footer block to itself, but then loops
back around and reloads the same block, as the block is smaller than
a full block size. A zero length input is set on the inflater, which
triggers the needsInput() condition again.
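The loop shape after this change, as a minimal sketch using plain
java.util.zip.Inflater (not the actual JGit code; readBlock() is a
hypothetical helper that feeds the next block of compressed data to
the inflater):

  import java.util.zip.DataFormatException;
  import java.util.zip.Inflater;

  int inflateFully(Inflater inf, byte[] dst) throws DataFormatException {
      int pos = 0;
      while (pos < dst.length) {
          int n = inf.inflate(dst, pos, dst.length - pos);
          pos += n;
          if (inf.finished())
              break;             // libz says the stream is done; stop now
          if (inf.needsInput())
              readBlock(inf);    // hypothetical: supply more input first
          else if (n == 0)
              throw new DataFormatException("inflate stalled");
      }
      return pos;
  }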
Change-Id: I95d02bfeab4bf995a254d49166b4ae62d1f21346
This reverts commit b646578d89.
openInputStream() is never used in JGit, nor is it used by any
known working DFS implementation. The method was added as a
utility for reading back from a DfsInserter, but the final
implementation of that feature does not require this method.
Change-Id: I075ad95e40af49c92b554480f8993ef5658f7684
This allows callers performing multiple separate merges to reuse a
single ObjectInserter without flushing the inserter on each iteration
(which can be slow in the DFS case).
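A hedged usage sketch of the pattern this enables ('inputs' is a
placeholder for the commit pairs being merged; release() applies to
JGit 3.x, later versions use close()):

  ObjectInserter ins = repo.newObjectInserter();
  try {
      for (RevCommit[] pair : inputs) {
          Merger m = MergeStrategy.RECURSIVE.newMerger(repo, true);
          m.setObjectInserter(ins);  // share one inserter across all merges
          m.merge(pair[0], pair[1]);
      }
      ins.flush();                   // write buffered objects once, at the end
  } finally {
      ins.release();
  }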
Change-Id: Icaff7d2bc2c20c873ce5a7d9af5002da84ae1c2b
This allows the RecursiveMerger to iteratively create new merge bases
without necessarily flushing packs to storage in the DFS case;
flushing only needs to happen at the end of the whole merge process.
Since Merger's walk now depends on its inserter, we need to construct
an inserter at Merger construction time. This should not be a
significant increase in overhead since unused inserters don't use any
resources (beyond a reference to the Repository).
We also must release and recreate the walk whenever setObjectInserter
is called, which can break usages where setObjectInserter is called in
the middle of stateful operations on the walk. No usages of this
method within JGit currently do this; the inserter is only ever set
before any stateful walk operations happen.
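Sketch of the release-and-recreate approach described above (field
names are illustrative and release()/close() naming depends on the
JGit version; not necessarily the exact code):

  public void setObjectInserter(ObjectInserter oi) {
      walk.release();            // drop the walk built on the old reader
      reader.release();
      inserter.release();
      inserter = oi;
      reader = oi.newReader();   // reader that can also see unflushed objects
      walk = new RevWalk(reader);
  }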
Change-Id: I9682a6aa4a2c37dccef8e163f132ddb791d79103
This partially reverts change 31148. URI.resolve can actually handle
absolute URLs well; the problem was only the missing "/".
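For illustration, plain java.net.URI behavior (not JGit code; the
path segment is an example): without the trailing "/" the last
segment of the base is dropped by resolve().

  URI base = URI.create("https://chromium.googlesource.com/external");
  URI slash = URI.create("https://chromium.googlesource.com/external/");
  base.resolve("project");   // https://chromium.googlesource.com/project
  slash.resolve("project");  // https://chromium.googlesource.com/external/project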
Change-Id: Iee5866c005cbc1430dc20ee7db321b8b51afed30
Signed-off-by: Yuxuan 'fishy' Wang <fishywang@google.com>
In the DFS implementation, flushing an inserter writes a new pack to
the storage system and is potentially very slow, but was the only way
to ensure previously-inserted objects were available. For some tasks,
like performing a series of three-way merges, the total size of all
inserted objects may be small enough to avoid flushing the in-memory
buffered data.
DfsOutputStream already provides a read method to read back from the
not-yet-flushed data, so use this to provide an ObjectReader in the
DFS case.
In the file-backed case, objects are written out loosely on the fly,
so the implementation can just return the existing WindowCursor.
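A hedged usage sketch of reading an object back through the
inserter's reader before flushing (the blob content and release()
vs. close() are illustrative):

  ObjectInserter ins = repo.newObjectInserter();
  try {
      ObjectId id = ins.insert(Constants.OBJ_BLOB, Constants.encode("merged\n"));
      ObjectReader rdr = ins.newReader();
      ObjectLoader ldr = rdr.open(id);   // readable even before ins.flush()
      byte[] data = ldr.getBytes();
  } finally {
      ins.release();
  }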
Change-Id: I454fdfb88f4d215e31b7da2b2a069853b197b3dd
Since 2badedcbe0 in-core merges can write up to 10 MiB
into a TemporaryBuffer.Heap strategy, where the data is stored
as a chain of byte[] blocks.
Support the inserter reading up to the streamFileThreshold (default 50
MiB) from the supplied input stream and hash the content to determine
if the merged result blob is already present in the repository. This
allows the inserter to avoid creating duplicate objects in more cases,
reducing repository pack file churn.
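Roughly the idea, expressed as caller-style pseudocode (the change
applies this inside the inserter; 'buf' stands for the buffered merge
result and the real implementation may differ):

  ObjectId id = inserter.idFor(Constants.OBJ_BLOB, buf.length(),
      buf.openInputStream());
  if (reader.has(id))
      return id;   // identical blob already present; skip writing a duplicate
  return inserter.insert(Constants.OBJ_BLOB, buf.length(),
      buf.openInputStream());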
Change-Id: I38967e2a0cff14c0a856cdb46a2c8fedbeb21ed5
The base Merger class already has a single ObjectReader instance that
it handles releasing as necessary, so creating new readers is not
necessary.
Change-Id: I990ec43af7df448c7825fc1b10e62eadaa3e0c2a
Also, remove the unused findbugs exclude filter in the java7 bundle
since the latest findbugs plugin raises an error on it.
Change-Id: I791fc054596e7d9aa9f3cc8126eb0162539c57bf
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Instead of always writing to disk use TemporaryBuffer.LocalFile to
store up to 10 MiB of merge result in RAM. Most source code will
fit into this limit, avoiding local disk IO for simple merges.
Larger files will automatically spool to a temporary file that
can be cleaned up in the finally, reducing the risk of leaving
them on disk and consuming space in /tmp.
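A minimal sketch of the buffering approach (the 10 MiB limit matches
the description; the write/insert helpers are hypothetical and the
LocalFile constructor signature varies by JGit version):

  TemporaryBuffer buf = new TemporaryBuffer.LocalFile(null, 10 * 1024 * 1024);
  try {
      writeMergeResult(buf);   // hypothetical: format merged content into buf
      insertResult(buf);       // hypothetical: insert from buf.openInputStream()
  } finally {
      buf.destroy();           // removes any temp file the buffer spilled to
  }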
Change-Id: Ieccbd9b354d4dd3d2bc1304857325ae7a9f34ec6
The only caller of writeMergedFile is updateIndex, and the only
user of this path object is the code within the method. This is
a no-op change that opens the door to refactoring the way temp
files are handled for inCore merges.
Change-Id: I863a303194689a806b667e55eb958e1decf046c1
When merging common ancestors to create a single virtual common
ancestor the commit does not need to be inserted into the Git
repository. Instead just mock it out in memory as part of the
merger's RevWalk pool.
Make the author and committer stable and predictable for any
given pair of merge bases. It is not necessary for the caller's
name or email to be used as the commit will not be written out.
Change-Id: I88d5ee4de121950e1b032a5c10486c9d2c42656c
Currently, if the remote defined in the repo manifest XML is
non-relative (e.g. "https://chromium.googlesource.com"), our code
breaks. This change fixes that.
It also makes sure that remotes end with "/".
Change-Id: Icef46360b32227a9db1d9bb9e6d929c72aeaa8df
Signed-off-by: Yuxuan 'fishy' Wang <fishywang@google.com>
Previously, instead of passing the start point as-is to
CreateBranchCommand, the resolved ObjectId was used. Because of this,
CreateBranchCommand did not set up tracking.
This also fixes CreateBranchCommand with setStartPoint(null) to use HEAD
(instead of NPEing), as documented in the Javadoc.
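For reference, a hedged usage sketch of the behavior relied on here
(branch names are examples): CreateBranchCommand can only derive
tracking configuration when it is given the start point by name.

  new Git(repo).branchCreate()
      .setName("topic")
      .setStartPoint("origin/topic")   // by name, so tracking can be set up
      .setUpstreamMode(CreateBranchCommand.SetupUpstreamMode.SET_UPSTREAM)
      .call();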
Bug: 441153
Change-Id: I5ed82b4a4b4a32a81a7fa2854636b921bcb3d471
Signed-off-by: Robin Stocker <robin@nibor.org>
Configure Maven surefire plugin to run tests concurrently using
multiple processes. By default use one process per core available
on the machine running the tests.
Running this on my MacBook Pro i7 with 8 cores 16 GB RAM SSD
I get the following results:
forkCount   "mvn clean install" on org.eclipse.jgit.test
---------------------------------------------------------
    1        02:36 min
    2        01:26 min
    4        00:56 min
    6        00:49 min
    8        00:49 min
   10        00:51 min
When the on-access scan of the McAfee virus scanner is enabled the
optimum is at half the number of cores (4 of 8) on my MacBook.
Set the system property "test-fork-count" to override the default.
Either set it to an absolute number, e.g. "4", or in units of the
cores available on the system running the build, e.g. "0.5C" to use
4 cores on an 8 core system or "1C" to use all 8 cores on an 8 core
system.
E.g.
$ mvn clean install -Dtest-fork-count=0.5C
runs the tests using half as many forked processes as there are cores.
Change-Id: Ifdbd1ae8153f2033922eae6ae8942c36db1f9806
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Previously an equality check was performed so an exception would
be thrown if any other options were set.
Change-Id: I36b60e2c0a8aef9fcfe663055dba520192996872
With these, more code can use BranchConfig instead of directly accessing
the raw configuration values.
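For example (a hedged sketch using the long-standing constructor and
getTrackingBranch(); the newly added accessors are not named in this
message):

  BranchConfig cfg = new BranchConfig(repo.getConfig(), "master");
  String tracking = cfg.getTrackingBranch();  // e.g. "refs/remotes/origin/master"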
Change-Id: I4b52f97ff0e3fc8f097512806f043c615a3d2594
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
JGit handled this case improperly, as these tests demonstrate. Fixed
by I25915880f304090fe90584c79bddf021231227a2.
Bug: 440537
Change-Id: Ia29c1d6cf8c0ce724cc3ff5ed9e0b396949b44bf
Signed-off-by: Laurent Goubet <laurent.goubet@obeo.fr>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
If the IndexDiffFilter is asked whether it should include or filter out
a certain path and for that path there is a dircache entry with a stage
different from 0, then the filter should never filter out this entry.
IndexDiffFilter is an optimized version of AnyDiffFilter and there is no
case where the index contains non-0 stages but we still don't see any
diff for that path.
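The rule, as a simplified sketch of the include() check (the tree
index variable is illustrative; not the complete filter):

  DirCacheIterator di = walk.getTree(dirCacheIndex, DirCacheIterator.class);
  if (di != null) {
      DirCacheEntry e = di.getDirCacheEntry();
      if (e != null && e.getStage() != 0)
          return true;   // staged conflict entry: never filter this path out
  }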
Change-Id: I25915880f304090fe90584c79bddf021231227a2
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
In change If45bc3d078b3d3de87b758e71d7379059d709603 a new parameter was
added to 3 protected methods of ResolveMerger. This breaks the code of
developers who have subclassed ResolveMerger. The API baseline check
in Eclipse reports this as API breakage.
Since this will break only providers but not consumers of the API this
should be allowed also in minor versions. According to OSGi semantic
versioning
http://www.osgi.org/wiki/uploads/Links/SemanticVersioning.pdf
breaking providers in a minor version update is ok.
Therefore silence these errors using API filter rules.
Bug: 440757
Change-Id: Icabbd0e1de7e877c66a5c4a2c8391473f992a1aa
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
It's an internal package which isn't part of the API. Mark it x-internal
to silence @since tag warnings which are only raised for new API.
Bug: 440757
Change-Id: Id05deaca43f135cd1bfe83cf1f29787cbbdbecac
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
The fix is to set the new head to the newly-created revert commit, so
that additional revert commits will use the correct head.
Change-Id: I5de3a9a2a4c276e60af732e9c507cbbdfd1a4652
Signed-off-by: Maik Schreiber <blizzy@blizzy.de>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>