This is useful when the result needs to be displayed and only whether
the operation succeeded is of interest (in EGit, it could be used in
MultiPullResultDialog).
Change-Id: Icfc9a9c76763f8a777087a1262c8d6ad251a9068
Signed-off-by: Robin Stocker <robin@nibor.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
The offset32 format is used for objects <= 2^31-1, while the offset64
format is used for all other objects. This condition was missing
the = needed to ensure an object placed exactly at 2^31 would have
its 64 bit offset in the index.
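A hedged illustration of the boundary (made-up method name, not the
actual index writer code):

  static boolean needsOffset64(long offset) {
      // offsets <= 2^31-1 fit the offset32 format; an offset of exactly
      // 2^31 does not, which is the "=" this change restores
      return offset >= (1L << 31);
  }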
Change-Id: I293fac0e829c9baa12cb59411dffde666051d6c5
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
A non-thin pack does not need to worry about preferred bases, the pack
will be self-contained and all required delta base objects will appear
within the pack itself. Obtaining the path buffer and length from the
ObjectWalk to build the preferred base table is "expensive", so avoid
the cost unless a thin pack is being constructed.
Change-Id: I16e30cd864f4189d4304e7957a7cd5bdb9e84528
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The previous comment stated that the value set was used
to keep track of the branch in the remote repository,
which was incorrect.
Updated the method comment to match the format used
for the PushCommand.setRemote and FetchCommand.setRemote
methods.
Change-Id: I11b81eb3125958af29247b485da56fd88c3bfdf5
Signed-off-by: Kevin Sawicki <kevin@github.com>
If the "Finding sources" phase will complete in <1 second with no
delta compression enabled, don't bother showing the progress meter for
this phase. Small repositories on the local filesystem tend to rip
through this phase in well under a second, and the ProgressMonitor
display can actually slow the operation down.
If delta compression is enabled, there are two phases that may run
very quickly. Set the timer to 500 milliseconds instead, reducing the
risk that the user has to wait longer than 1 second before any sort of
output from the packer occurs.
Change-Id: I58110f17e2a5ffa0134f9768b94804d16bbb8399
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If delta resolution completes in < 1000 milliseconds, don't bother
showing the progress meter. This is actually very common for a Gerrit
Code Review server, where the client is probably sending 1 commit and
only a few trees/blobs modified... and the base objects are hot in the
process buffer cache.
The 1000 millisecond delay is just a guess at a reasonable time to wait.
Change-Id: I440baa64ab0dfa21be61deae8dcd3ca061bed8ce
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This progress meter never reached 100% as it did not update while
resolving the external bases in thin packs.
Instead of updating in batches at the top level, update once per delta
that is resolved. The batching progress meter type should smooth out
the frequent updates to an update rate that is more reasonable to send
to the UI, while also ensuring a successful pack parse always reaches
100% deltas resolved.
Change-Id: Ic77dcac542cfa97213a6b0194708f9d3c256d223
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Repository inspection tools may find building a reverse index on a
pack useful, as they can then locate an object by offset. As both
C Git and JGit sometimes produce error messages with the offset
rather than the SHA-1, it may be useful to expose this type.
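For example, a tool could map an offset from such a message back to its
SHA-1 roughly like this (a sketch; assumes the pack's .idx file is at
hand):

  PackIndex idx = PackIndex.open(idxFile);              // pack-*.idx on disk
  PackReverseIndex reverse = new PackReverseIndex(idx);
  ObjectId id = reverse.findObject(offsetFromMessage);  // null if nothing is there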
Change-Id: I487bf32e85a8985cf8ab382d4c82fcbe1fc7da6c
There isn't a good reason to hide all of these as package-private.
Make them public so applications can inspect pack files.
Change-Id: Ia418daf65d63e9e015b8dafdf3d06a1ed91d190b
Some embeddings of UploadPack (e.g. Gerrit Code Review) set their own
PackConfig from a server-wide configuration, overriding any JGit
defaults or settings that may exist at the local repository level.
Make a copy constructor form of PackConfig so this server-wide
configuration object can be copied and then merged with repository
specific configuration data.
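Usage then looks roughly like this sketch (fromConfig and setPackConfig
are existing API; the variable names are illustrative):

  PackConfig pc = new PackConfig(serverWideDefaults); // copy, leave the original untouched
  pc.fromConfig(repo.getConfig());                    // overlay repository-level settings
  uploadPack.setPackConfig(pc);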
Change-Id: I4463c95aeaf7d6536c3ab132dec9c50ee528d9e0
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The cached object databases should not require a close to release
their cached resources. Most object databases just return their
own reference for newCachedDatabase(), so a close() here kills
the real database's internal caches, and possibly underlying files,
resulting in poor performance for callers of PackParser, such as
ReceivePack or FetchProcess, when they then try to look up objects
that were just parsed or that current references point to.
Change-Id: Ia4a239093866e5b9faf82744f729fb73f4373f1a
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
A set of ref names like 'a/b' and 'a+b' would cause the RefDirectory
to think that the set of refs had changed, because it traversed the
'a' directory in the subtree before looking at 'a+b', but then
compared with the known refs, which are sorted with 'a+b' first.
Fix this by traversing the refs tree in a different order: treat a
directory as if its name ends with a '/' before deciding on the order
in which to traverse the refs tree.
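A sketch of the ordering idea (illustrative only, not the literal
RefDirectory code):

  static String entryName(File f) {
      // a subdirectory "a" compares as "a/", so "a+b" ('+' < '/') still
      // sorts before everything under "a/", matching the sorted ref names
      return f.isDirectory() ? f.getName() + "/" : f.getName();
  }

  Comparator<File> refTreeOrder =
      (a, b) -> entryName(a).compareTo(entryName(b));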
Bug: 348834
Change-Id: I23377f8df00c7252bf27dbcfba5da193c5403917
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
Repository.writeMergeCommitMsg(null) no longer fails if the MERGE_MSG
file is missing. This prevents CommitCommand from failing when the
file is absent.
Bug: 352243
Change-Id: Iddf43533d133f8f22199ed6e2393a552670e7d1f
Signed-off-by: Jens Baumgart <jens.baumgart@sap.com>
The reset command should work recursively and allow resetting all
changed files in a given directory.
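With this change a path reset can cover a whole directory, roughly
(a sketch; the directory name is made up):

  Git git = new Git(repository);
  git.reset().addPath("some/dir").call(); // resets every changed file under some/dir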
Bug: 348524
Change-Id: I441db34f226be36548c61cef77958995971498de
Signed-off-by: Dariusz Luksza <dariusz@luksza.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
Creating a branch X from an annotated tag as the starting point
resulted in .git/refs/heads/X containing the ID of the annotated tag
instead of the ID of the tagged commit.
This fix peels the tag ref before using it as the starting point for
the newly created branch.
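The fix amounts to roughly the following (a sketch with null checks
omitted, not the exact CreateBranchCommand code):

  Ref ref = repo.peel(repo.getRef(startPoint));
  ObjectId startAt = ref.getPeeledObjectId() != null
      ? ref.getPeeledObjectId()  // annotated tag: use the tagged commit
      : ref.getObjectId();       // otherwise: the ref's own id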
Bug: 340836
Change-Id: I01c7325770ecb37f5bf8ddb2a22f802466524f24
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
When trying to clone into a folder that already contains a cloned
repository native git will fail with a message "fatal: destination path
'folder' already exists and is not an empty directory.". Now JGit will
also fail in this situation, throwing a JGitInternalException.
The test case was provided by Tomasz Zarna.
Bug: 347852
Change-Id: If9e9919a5f92d13cf038dc470c21ee5967322dac
Also-by: Tomasz Zarna <Tomasz.Zarna@pl.ibm.com>
Signed-off-by: Adrian Goerler <adrian.goerler@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
If no refSpec is explicitly set, the PushCommand should first check the
remote config and then as a fallback use the current behavior.
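In other words, the lookup order becomes roughly (a sketch; refSpecs
and remote are the command's inputs, exception handling omitted):

  if (refSpecs.isEmpty()) {
      RemoteConfig config = new RemoteConfig(repo.getConfig(), remote);
      refSpecs.addAll(config.getPushRefSpecs()); // remote.<name>.push entries
  }
  if (refSpecs.isEmpty()) {
      // still nothing: fall back to the current behavior described above
  }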
Change-Id: I2bc648abc517b1d01b2de15d383423ace2081e72
Signed-off-by: Stefan Lay <stefan.lay@sap.com>
DirCacheCheckout did not unlock the index if e.g. an IOException
occurred during checkout.
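The fix has roughly this shape (illustrative, not the literal
DirCacheCheckout code):

  DirCache dc = repo.lockDirCache();
  try {
      // ... perform the checkout, which may throw IOException ...
  } finally {
      dc.unlock(); // release index.lock even when the checkout fails
  }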
Bug: 350677
Change-Id: Ie9fa09f7a404080da7cdccafb9be3a8c845e4869
Signed-off-by: Jens Baumgart <jens.baumgart@sap.com>
ObjectDirectoryInserter was always creating a temporary file,
writing the complete compressed contents of a tree, fsync()'ing
that to stable storage, and only then checking to see if there
was already an object with the same SHA-1 in the repository.
For commits this strategy makes some sense: a commit is very
unlikely to already exist in the repository, as there are embedded
timestamps and these change with each commit.
However for trees coming out of DirCache, it is more common for the
tree to already exist in the repository. Most subdirectories are
not modified in any given commit. Doing all of this local file IO
for things that already exist is very slow.
Try to detect cases where the object is "small enough" that it can
be processed entirely in memory, and avoid doing disk IO entirely
if the object already exists.
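A sketch of the small-object fast path (illustrative; large objects
still stream to a temporary file as before):

  ObjectId id = inserter.idFor(Constants.OBJ_TREE, data); // hash in memory only
  if (repo.getObjectDatabase().has(id))
      return id;            // already present: no temp file, no fsync
  // otherwise deflate to a temporary file and move it into place as before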
Also increase the size of the output buffer for the deflation.
This should boost the average write(2) syscall size from 512 bytes
to 8192 bytes, making streaming of large compressed contents to
disk slightly more efficient.
Change-Id: I1d40364e8725468522435814631916d73174c92b
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
I had the conditions wrong here, causing the in-memory InputStream
to always appear to be at EOF.
Change-Id: I6811d6187a34eaf1fd6c5002550d631decdfc391
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Adds a git-reflog command and associated tests.
Bug: 347859
Change-Id: Iba146ac842cc9ca0be43d3381b4082c9e92bf56f
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
It's useful to have ReflogEntry refactored out so it can be
used by clients via the JGit API.
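Clients can then read entries roughly like this (a sketch; reading
HEAD's reflog):

  ReflogReader log = new ReflogReader(repo, Constants.HEAD);
  for (ReflogEntry entry : log.getReverseEntries())
      System.out.println(entry.getNewId().name() + " " + entry.getComment());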
Change-Id: I03044df9af9f9547777545b7c9b93bdf5f8b7cb5
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
If an internal exception occurs while packing and the request
needs to abort, the HTTP response might already be committed due
to progress messages having already been delivered to the client.
This prevents UploadPackServlet from resetting the response and
sending back an HTTP 500 response.
Try to catch all exceptions and report internal errors over the
sideband stream or as an ERR command during the initial ACK/NAK
negotiation phase. This allows JGit to transmit an error message
that the user will receive on their console without needing to
worry about resetting the (already gone) HTTP response.
Change-Id: Ie393fb8bb55d2b79ab1276adf71c781c1807f9fe
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If a fetch or push needs to apply more than a few references
to the local repository it may take more than 0.25 seconds to
process all of the updates. This is especially true in the DHT
storage system during an initial push of a project with many tags.
The backend database may need to use a transaction to ensure each
tag reference creation is unique, and there may be large delays
caused by these transactions.
Change-Id: Ib11a077adfbd525253e425d327f2e2c2380804c7
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
BlameGenerator digs through history and discovers the origin of each
line of some result file. BlameResult consumes the stream of regions
created by the generator and lays them out in a table for applications
to display alongside source lines.
Applications may optionally push in the working tree copy of a file
using the push(String, byte[]) method, allowing the application to
receive accurate line annotations for the working tree version. Lines
that are uncommitted (difference between HEAD and working tree) will
show up with the description given by the application as the author,
or "Not Committed Yet" as a default string.
Applications may also run the BlameGenerator in reverse mode using the
reverse(AnyObjectId, AnyObjectId) method instead of push(). When
running in the reverse mode the generator annotates lines by the
commit they are removed in, rather than the commit they were added in.
This allows a user to discover where a line disappeared from when they
are looking at an older revision in the repository. For example:
blame --reverse 16e810b2..master -L 1080, org.eclipse.jgit.test/tst/org/eclipse/jgit/storage/file/RefDirectoryTest.java
( 1080) }
2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1081)
2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1082) /**
2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1083) * Kick the timestamp of a local file.
Above we learn that line 1080 (a closing curly brace of the prior
method) still exists in branch master, but the Javadoc comment below
it has been removed by Christian Halstrick on May 20th as part of
commit 2302a6d3. This result differs considerably from that of C
Git's blame --reverse feature. JGit tells the reader which commit
performed the delete, while C Git tells the reader the last commit
that still contained the line, leaving it an exercise to the reader
to discover the descendant that performed the removal.
This is still only a basic implementation. Quite notably it is
missing support for the smart block copy/move detection that the C
implementation of `git blame` is well known for. Despite being
incremental, the BlameGenerator can only be run once. After the
generator runs it cannot be reused. A better implementation would
support applications browsing through history efficiently.
With regard to CQ 5110, only a little of the original code survives.
CQ: 5110
Bug: 306161
Change-Id: I84b8ea4838bb7d25f4fcdd540547884704661b8f
Signed-off-by: Kevin Sawicki <kevin@github.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
There is no reason for this type to contain an ArrayList and try to
hide the implementation. It only slows down execution by adding an
extra layer of method dispatch to each invocation.
Instead subclass from ArrayList.
Change-Id: Ifbb9c7060c2fe3d5a7397c1aa85fbade14088637
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>