When a user tried to use a service not enabled on the remote server,
a misleading error message was given:
    fatal: remote error: Git access forbidden
This patch modifies the error message to make the cause clearer to
the user. Now, when the user tries to use a service that is not
enabled, the error message clearly states the cause:
    fatal: remote error: Service not enabled
Change-Id: If096c4ddd17c5aae0e99e3ea6eea4b69bd3c5466
Signed-off-by: Hector Oswaldo Caballero <hector.caballero@ericsson.com>
This was deprecated and should only be used by DirCacheCheckout and
friends. Other classes should use SystemReader.checkPath() instead.
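For illustration, a minimal sketch of the suggested replacement (the
variable name is illustrative, and the exception type is an
assumption):
    try {
        SystemReader.getInstance().checkPath(name);
    } catch (CorruptObjectException e) {
        // 'name' is not a valid path on the running platform
    }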
Change-Id: I37cf753b1f081602dee9f0f47979eff39d735f92
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
- close RevWalk allocated in scan()
- replace use of deprecated ObjectReader.release() method
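A minimal sketch of the pattern (repository variable and lookup are
illustrative): RevWalk is auto-closeable, so try-with-resources
replaces the deprecated release() call.
    try (RevWalk rw = new RevWalk(repo)) {
        RevCommit head = rw.parseCommit(repo.resolve(Constants.HEAD));
        // ... walk from 'head' ...
    } // rw is closed here instead of calling the deprecated rw.release()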
Change-Id: I41b2b10a1a44270a6ceaa1741e996c0921439852
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Replaces use of deprecated release() methods.
Change-Id: I0211bcf0a76a2fccc2c85fa74778e20c256984ba
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
- replaces use of deprecated ObjectInserter.release()
- auto-close TreeWalk
Change-Id: I540ee711b8c3430a71fdff07add506b7d9c039dc
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Most callers/users of TemporaryBuffer are sizing the in-memory
portion large enough that most outputs fit into RAM. With this
assumption they don't pay close attention to the size of IOs
being written, as it "should" just be a copy from one byte array
to another.
Overflow sets up a local file handle, which is costly to write to
for small IO units. Wrap the local file in a BufferedOutputStream
to combine small writes together. Larger writes can still bypass the
buffer as BOS automatically avoids copying for larger writes.
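A rough sketch of the technique, not the exact patch (file handle and
buffer size are placeholders):
    // small writes are coalesced in the 8 KiB buffer; writes at least
    // as large as the buffer go straight through to the file
    OutputStream overflow = new BufferedOutputStream(
        new FileOutputStream(onDiskFile), 8192);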
Change-Id: I09f4136dd65c48830cfda86d9101bc647581018a
When reading back from an overflowed TemporaryBuffer, the InputStream
must be closed so that the FileInputStream reading from the backing
file is closed as well.
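For example, assuming the stream comes from
TemporaryBuffer#openInputStream() (variable names illustrative):
    try (InputStream in = buffer.openInputStream()) {
        // consume the buffered content; closing 'in' also closes the
        // FileInputStream over the overflow file
    }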
Change-Id: Id83d8f16f5b2c2618a9f841ec3508508455a6ae1
By writing the temporary overflow merge result to $GIT_DIR, JGit
can ensure the same filesystem permissions apply to protect the
file contents.
If no directory is available from the repository (e.g. DfsRepository),
null will be passed and the system temporary directory will be used
instead.
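A sketch of how a caller might pass the directory (constructor shape
and accessor are assumptions, not the exact patch):
    File dir = repo.getDirectory();   // may be null, e.g. DfsRepository
    TemporaryBuffer.LocalFile buf =
        new TemporaryBuffer.LocalFile(dir, inCoreLimit);
    // a null directory falls back to the system temporary directory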
Change-Id: I95532aa092676d18f1dc1e3fdbe6dcb1f91b782e
Formatting merge conflicts one byte at a time is going to be very
slow when the final OutputStream is a FileOutputStream and the JVM
is making system calls for each byte output.
When outputting a range of bytes from a byte[] the bol (beginning
of line) value only depends on the value of the last byte written.
Other bytes in the array can be passed directly to the lower stream
for more efficient output.
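A minimal sketch of the idea (field and stream names assumed), for a
wrapper stream that tracks whether the last byte ended a line:
    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        if (len > 0) {
            out.write(b, off, len);          // bulk write, no per-byte calls
            bol = b[off + len - 1] == '\n';  // bol depends only on last byte
        }
    }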
Change-Id: I3415f9a390ee215210a17bb5bf39164d197e1348
This should considerably speed up the treewalk on larger repositories.
Found by discussing new EGit API to support model merge in change
eda23bb556d342f421f03b93c7faa160998598aa
Change-Id: I822721c76c64e614f87a080ced2457941f53adcd
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
cc: Laurent Delaigue <laurent.delaigue@obeo.fr>
This was unintentionally changed from severity debug to error, which
causes unexpected log entries.
Bug: 463349
Change-Id: I4b6d42a1420652ab6824e237bd231ba86896acbf
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Bug: 390833
Change-Id: I29f7b79b241929877c93ac485c677487a91bb77b
Signed-off-by: André de Oliveira <andre.oliveira@liferay.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
If a non-interactive rebase is launched, stopping after a conflict
should set the repository state to RepositoryState.REBASING_MERGE
instead of RepositoryState.REBASING_INTERACTIVE.
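For illustration, a test-style check of the expected behavior (the
repository variable is assumed):
    // after a non-interactive rebase stops on a conflict
    assertEquals(RepositoryState.REBASING_MERGE, db.getRepositoryState());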
Bug: 452623
Change-Id: Ie885aab6d71dabd158a718af0d14fff643c9b850
Also-by: Arthur Daussy <arthur.daussy@obeo.fr>
Signed-off-by: Laurent Delaigue <laurent.delaigue@obeo.fr>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When RecursiveMerger finds that there are multiple base commits for
the commits to be merged, it tries to temporarily merge those base
commits. But if these base commits have no common predecessor, a bug
in JGit led to an NPE. This commit fixes that by enforcing that an
empty tree is used as the base when merging two unrelated base
commits.
This logic was already there for merging two commits which have no
common predecessor (ThreeWayMerger.mergeBase()). But the code which
computes a new temporary base commit in case of criss-cross merges
didn't take care to pick an empty tree when no common predecessor
could be found.
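A sketch of the fallback (identifiers assumed, not the exact patch):
when no base commit can be computed, iterate an empty tree instead of
dereferencing a missing base.
    AbstractTreeIterator baseTree = (base == null)
        ? new EmptyTreeIterator()   // unrelated histories, no common base
        : new CanonicalTreeParser(null, reader, base.getTree());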
Bug: 462671
Change-Id: Ibd96302f5f81383f36d3b1e3edcbf5822147b1a4
CherryPickCommand only works on a non-bare repository, as it must
modify the working tree and index in case of a merge conflict. In
tests, being able to recover from a merge conflict is less important,
as the caller should be able to control the full contents of files in
advance of the cherry-pick.
Change-Id: Ic332e44df1308b9336e884666b08c1f6db64513d
As we moved the minimum Java version to 7, we no longer need a
separate bundle and feature for JGit features depending on Java 7.
Change-Id: Ib5da61b0886ddbdea65298f1e8c6d65c9879ced1
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This reverts commit 6bc48cdc62.
Until git v1.7.10.2~29^2~1 (builtin/merge.c: reduce parents early,
2012-04-17), C git merge would make merge commits with duplicate parents
when asked to with a series of commands like the following:
    git checkout origin/master
    git merge --no-ff origin/master
Nowadays "git merge" removes redundant parents more aggressively
(whenever one parent is an ancestor of another and not just when
duplicates exist) but merges with duplicate parents are still permitted
and can be created with git fast-import or git commit-tree, so history
viewers need to be able to cope with them.
CommitBuilder is an interface analogous to commit-tree, so it should
allow duplicate parents. (That said, an option to automatically remove
redundant parents would be useful.)
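For illustration (the tree id, parent id, inserter and ident are
assumed to exist), a commit with a duplicate parent can be built just
as commit-tree would accept it:
    CommitBuilder b = new CommitBuilder();
    b.setTreeId(treeId);
    b.setParentIds(parent, parent);   // same parent listed twice
    b.setAuthor(ident);
    b.setCommitter(ident);
    b.setMessage("merge with duplicate parent");
    ObjectId commitId = inserter.insert(b);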
Reported-by: Dave Borowitz <dborowitz@google.com>
Change-Id: Ia682238397eb1de8541802210fa875fdd50f62f0
Signed-off-by: Jonathan Nieder <jrn@google.com>
The block pointer list may have been relatively large, so no need to
make more garbage. Instead, just clear the list and null out all the
elements.
Another possible motivation: a caller may have provided an inaccurate
estimated size, so the list might have been resized several times. If
the list is reused later for a similarly underestimated workload, this
fix will prevent additional resizing on subsequent usages.
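As a rough illustration of the reuse idea (field name assumed, not
JGit's exact internals):
    // instead of: blockList = new ArrayList<>(estimatedBlocks);
    blockList.clear();   // nulls the slots but keeps the grown backing array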
Change-Id: I511675035dcff1117381a46c294cc11aded10893
Callers may wish to use TemporaryBuffer as an essentially unbounded
buffer by passing Integer.MAX_VALUE as the size. (This makes it
behave like ByteArrayOutputStream, only without requiring contiguous
memory.) Unfortunately, it was always allocating an array in the
backing block pointer list big enough to hold the blocks for
MAX_VALUE bytes: all 262,016 of them. It wasn't allocating the blocks
themselves, but this array was still extremely wasteful, using about
2MiB of memory on a 64-bit system.
Tweak the interface to specify an estimated size, and allocate only
enough entries in the block pointer list to hold that size. It's an
ArrayList, so if that estimate was wrong, it'll grow. We assume the
cost of finding enough contiguous memory to grow that array is
acceptable.
While we're in there, fix an off-by-one error: due to integer division
we were undercounting the number of blocks needed to store n bytes of
data as (n / SZ).
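One way to write the corrected count (the exact expression in the
patch may differ):
    // blocks needed for n bytes with block size SZ (ceiling division);
    // n / SZ undercounts whenever n is not an exact multiple of SZ
    int blocksNeeded = (n + SZ - 1) / SZ;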
Change-Id: I794eca3ac4472bcc605b3641e177922aca92b9c0