Instead of using the current thread's stack to recurse through the
delta chain, use a linked list that is stored in the heap. This
permits any thread to load a deep delta chain without running out
of thread stack space.
Despite needing to allocate a stack entry object for each delta
visited along the chain being loaded, the object allocation count is
kept the same as in the prior version by removing the transient
ObjectLoaders from the intermediate objects accessed in the chain.
Instead, the byte[] for the raw data is passed, and null is used as a
magic value to signal isLarge() and enter the large object code path.
Like the old version, this implementation minimizes the amount of
memory that must be live at once. The current delta instruction
sequence, the base it applies onto, and the result are the only live
data arrays. As each level is processed, the prior base is discarded
and replaced with the new result.
Each Delta frame on the stack is slightly larger than the standard
ObjectLoader.SmallObject type that was used before. However, the
Delta instances should be smaller than the old method stack frames,
so total memory usage should actually be lower with this new
implementation.
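A minimal sketch of the shape of this approach (the class and helper
names here are illustrative, not the actual JGit types):

  abstract class DeltaChainResolver {
    // One heap-allocated frame per delta visited along the chain.
    static final class Frame {
      final Frame next;  // frame below us in the chain
      final long pos;    // pack offset of this delta
      Frame(Frame next, long pos) { this.next = next; this.pos = pos; }
    }

    byte[] resolve(long pos) {
      Frame stack = null;
      while (isDelta(pos)) {            // walk down to the whole base
        stack = new Frame(stack, pos);
        pos = baseOffsetOf(pos);
      }
      byte[] base = inflateWhole(pos);
      while (stack != null) {           // unwind, one delta per frame
        byte[] cmds = inflateDeltaInstructions(stack.pos);
        base = applyDelta(base, cmds);  // prior base becomes garbage
        stack = stack.next;
      }
      return base;
    }

    // Hypothetical pack-access helpers; the real code reads these
    // directly from the packfile.
    abstract boolean isDelta(long pos);
    abstract long baseOffsetOf(long pos);
    abstract byte[] inflateWhole(long pos);
    abstract byte[] inflateDeltaInstructions(long pos);
    abstract byte[] applyDelta(byte[] base, byte[] delta);
  }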
Change-Id: I6faca2a440020309658ca23fbec4c95aa637051c
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We will need to iterate over all notes of a NoteMap; at the very
least this will be needed for testing purposes. This change also
required making the Note class public.
Change-Id: I9b0639f9843f457ee9de43504b2499a673cd0e77
Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com>
This is almost a reverted cherry-pick, and the implementation is
almost identical. It orders the input to the merge differently to get
the effect, and it produces a different commit message with the
default author rather than the original author.
Change-Id: I39970091d9f7406ae7168b8efaab23a5e2c16bad
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
Eclipse has some problems re-running single JUnit tests if
the tests are in JUnit 3 format but the JUnit 4 launcher
is used. This was quite unnecessary, as the move was not
completed; we still have no JUnit 4 tests.
This completes the extermination of JUnit 3. Most of the
work was a global search/replace using regular expressions,
followed by numerous invocations of quick-fix and organize
imports, and verification that we had the same number of
tests before and after.
- Annotations were introduced.
- All references to JUnit3 classes removed
- Half-good replacement for getting the test name. This was
  needed to make the TestRngs work. The initialization of
  TestRngs was also made lazy, since we can no longer find
  out the test name at runtime in the @Before methods.
- Renamed test classes to end with Test, with the exception
of TestTranslateBundle, which fails from Maven
- Moved JGitTestUtil to the junit support bundle
Change-Id: Iddcd3da6ca927a7be773a9c63ebf8bb2147e2d13
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
These settings are stored in <prefix>/etc/gitconfig. The C Git
binary is installed in <prefix>/bin, so we look for the C Git
executable to find this location, first by looking at the PATH
environment variable and then by attempting to launch bash as
a login shell to find out where it is.
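A simplified sketch of the PATH part of that lookup (illustrative
only, using plain java.io.File; the real code also has to handle
Windows executable names and the bash fallback):

  // Illustrative only: locate the C Git executable on PATH so that
  // <prefix>/etc/gitconfig can be derived from its install prefix.
  static File findGitOnPath() {
    String path = System.getenv("PATH");
    if (path == null)
      return null;
    for (String dir : path.split(File.pathSeparator)) {
      File git = new File(dir, "git"); // "git.exe" on Windows
      if (git.isFile() && git.canExecute())
        return git;
    }
    return null;
  }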
Bug: 333216
Change-Id: I1bbee9fb123a81714a34a9cc242b92beacfbb4a8
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
Replace 'method'-based with 'heap'-based recursion for resolving deltas.
Git packfile delta-chain depth can exceed 50 levels in certain files
(the packfile of the JGit project itself has >800 objects with
chain-length >50). Using method-based recursion on such packfiles will
quickly throw a StackOverflowError on VMs with constrained stack.
Benefits:
* packfile delta-resolution no longer limited by the maximum number
of stack frames permitted on the current thread.
* slight performance improvement
(3% speed increase on the packfile of the JGit project)
Change-Id: I1d9b3a8ba3c6d874d83cb93ebf171c6ab193e6cc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We cannot use SystemReader to get the time unless we do so
consistently, which is harder to do while being sure we are really
testing what we want.
Then we need to update our lastRead variable whenever we conclude that
our file is not racily clean according to lastRead. It may well be clean,
but we do not know that until we check the system clock again.
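Conceptually (a rough sketch only, not the real FileSnapshot code):

  // Rough sketch of the racy-clean idea.
  class Snapshot {
    final long lastModified; // file mtime recorded when it was read
    long lastRead;           // system clock value of the latest check

    Snapshot(long lastModified, long lastRead) {
      this.lastModified = lastModified;
      this.lastRead = lastRead;
    }

    // The mtime can only be trusted if it is clearly older than the
    // moment we last read the file; otherwise an edit may hide in the
    // same timestamp resolution ("racy git").
    boolean isRacilyClean() {
      return lastModified >= lastRead;
    }

    // Once a fresh check of the system clock lets us conclude the
    // file is not racily clean, remember that read time.
    void markRead(long now) {
      lastRead = now;
    }
  }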
Finally add a test for this class.
Change-Id: I1894b032b9bd359d1b5325e5472d48e372599e4c
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
If the 'TREE' extension contains an invalid subtree that has
been removed, DirCacheIterator still tried to access it due to
an invalid childCnt field within the parent DirCacheTree object.
This is easy for a user to do; they just need to move all files
out of a subdirectory.
For example, the input for the JUnit test case for this bug was
built using the following C Git sequence:
mkdir -p a/b
touch a/b/c q
git add a/b/c q
git write-tree
git mv a/b/c a/a
After the last step, the subdirectory a/b is empty, as its only
file was moved into the parent directory. Because of the earlier
`git write-tree` operation, there is a 'TREE' extension present, but
the a and a/b subdirectories have been marked invalid by the rename.
When JGit tried to iterate over the 'a' tree, it needed to correct
childCnt to be zero because a/b no longer exists, but it failed to
update childCnt.
Change-Id: I7a0f78fc48a36b1a83252d354618f6807fca0426
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
As discussed in
http://egit.eclipse.org/r/#change,2127
we should use paths relative to the working directory instead of Files
to notify the caller about conflicts and non-deleted files.
Change-Id: I034c7bd846f0df78d97bc246f38d411f29713dde
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
The CheckoutCommand does not handle names other than local branch
names properly; it must detach HEAD if such a name is encountered (for
example a commit ID or a remote tracking branch).
Change-Id: I5d55177f4029bcc34fc2649fd564b125a2929cc4
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
This is needed by callers to determine checkout conflicts and any
files that could not be deleted during the checkout, so that they
can present the end user with a better Exception description and
retry deleting the undeleted files later, respectively.
Change-Id: I037930da7b1a4dfb24cfa3205afb51dc29e4a5b8
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
This wrongly returns the same result as getConflicts().
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
Change-Id: Id37c625458fc5a9b3987f05b684620e24fdfe852
Relying only on the last modified time for a file can be tricky.
The "racy git" problem may cause some modifications to be missed.
Use the new FileSnapshot code to track when a configuration file
has been modified, and needs to be reloaded in memory.
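The usage pattern is roughly the following (a sketch; readConfig() is
a hypothetical helper, and the snapshot calls are assumed to be
FileSnapshot's save()/isModified() methods):

  // Remember a snapshot when the config file is read, and only
  // re-read the file when the snapshot reports a modification.
  FileSnapshot snapshot = FileSnapshot.save(configFile);
  Config cfg = readConfig(configFile);   // hypothetical helper

  // later, on each access:
  if (snapshot.isModified(configFile)) {
    snapshot = FileSnapshot.save(configFile);
    cfg = readConfig(configFile);        // reload from disk
  }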
Change-Id: Ib6312fdd3b2403eee5af3f8ae711294b0e5f9035
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Pulling the last modified checking logic out of ObjectDirectory
makes it possible to reuse this code for other files, such as
the $GIT_DIR/config or $GIT_DIR/packed-refs files.
Change-Id: If2f27a89fc3b7adde7e65ff40bbca5d55b98b772
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Each time getConfig() is called on FileRepository, it checks the
last modified time of both ~/.gitconfig and $GIT_DIR/config. If
$GIT_DIR/config appears to have been modified, it is read back in
from disk and the current config is wiped out.
When mutating a configuration file, this may cause in-memory edits
to disappear. To avoid that, callers need to avoid calling getConfig()
until after the configuration has been saved to disk.
Unfortunately the API is still horribly broken. Configuration should
be modified only while a lock is held on the configuration file, very
similar to the way a ref is updated via its locking protocol. But our
existing API is really broken for that, so we'll have to defer cleaning
up the edit path for a future change.
Change-Id: I5888dd97bac20ddf60456c81ffc1eb8df04ef410
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The tortoiseplink command does not understand -batch, even though
it smells like the PuTTY plink command that does use it. Don't add
-batch if GIT_SSH is tortoiseplink.
Change-Id: I638532a02faa2caf8c39d482094e7ff4f4ec7e78
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When GIT_SSH is set to use plink, the correct option name is "-batch"
and not "--batch". This was a typo introduced when we added support
for plink via GIT_SSH.
Change-Id: I391660e38f5d208bba11e3f2a8f25922de2af878
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When checking whether a file in the working tree has been modified -
WorkingTreeIterator.isModified() - we should not trust the filemode
in case of symbolic links, but check the timestamp and also the
content, if requested. Without this fix symlinks will always be shown
in EGit as modified files on Windows systems.
Change-Id: I367c807df5a7e85e828ddacff7fee7901441f187
Signed-off-by: Philipp Thun <philipp.thun@sap.com>
Sasa pointed out we only ever use the length here, so instead of
holding onto the AbbreviatedObjectId, let's just hold onto the length
as a primitive int.
Change-Id: I2444f59f9fe5ddcaea4a3537d3f1064736ae3215
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Sasa Zivkov <zivkov@gmail.com>
JGit's internal implementation of the HTTP digest authentication
method wasn't conforming to RFC 2617 (HTTP Authentication: Basic
and Digest Access Authentication), resulting in authentication
failures when connecting to a digest protected site.
The code now more accurately matches section 3.2.2 (The Authorization
Request Header) from the standards document.
Change-Id: If41b5c2cbdd59ddd6b2dea143f325e42cd58c395
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The java.io.File methods for creating directories report failure by
returning false. To ease proper checking of return values, provide
utility methods wrapping mkdir() and mkdirs() which throw IOException
on failure.
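A minimal sketch of such wrappers (illustrative; see FileUtils for
the actual method signatures):

  // Turn the boolean return of java.io.File into an exception.
  static void mkdir(File d) throws IOException {
    if (!d.mkdir())
      throw new IOException("Creating directory " + d + " failed");
  }

  static void mkdirs(File d) throws IOException {
    if (!d.mkdirs())
      throw new IOException("Creating directories " + d + " failed");
  }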
Also fix the tests to store test data under a trash folder and clean
up after the test.
Change-Id: I09c7f9909caf7e25feabda9d31e21ce154e7fcd5
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
If DiffFormatter is asked to compare the index to the working tree,
it can go faster by using the cached stat information to compare
the two entries rather than relying on SHA-1 computation alone.
Change-Id: Icb21c15b8279ee8cee382e5e179e0cf8903aee4d
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This is needed to ensure interoperability with the command line: if
the git-rebase-todo file was created manually (by git rebase -i on
the command line) and any commands other than pick are used (reword,
edit, fixup, squash), JGit must abort, as it does not understand
these commands yet.
The same is true if an unknown command is found (e.g. due to a typo);
this is the same behavior as shown by the command line.
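For illustration, the check amounts to something like this (a
simplified sketch, not the actual RebaseCommand parsing):

  // Abort when a non-comment line of git-rebase-todo uses any
  // command other than "pick".
  for (String line : todoLines) {
    String s = line.trim();
    if (s.length() == 0 || s.startsWith("#"))
      continue;                     // skip blanks and comments
    String cmd = s.split(" ")[0];
    if (!cmd.equals("pick") && !cmd.equals("p"))
      throw new IOException("Unsupported rebase command: " + cmd);
  }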
Change-Id: I2322014f69460361f7fc09da223e8a5c31f100dd
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
Always use streaming (for SHA-checksum & collision detection)
when indexing whole blobs, regardless of their size.
Positives:
* benefits of bugfix #312868 will apply to all runtimes, without
  additional configuration for memory-constrained JVMs (5 MB is huge
  for some)
* no byte array allocation
(re-uses readBuffer instead of allocating new full-size array)
* mildly better overall performance
(given the usual blob-does-not-need-collision-checking case)
* removes unnecessary code
Negative:
* doubles the disk IO for a blob comparison
  (a comparatively rare occurrence)
I perf-tested a range of threshold sizes against a random selection
of packfiles I found on my hard drive; the results are here:
https://spreadsheets.google.com/ccc?key=tLCQElyyd2RKN9QevfvgwGQ&hl=en_GB#gid=1
My interpretation of the results is that the streaming size threshold
isn't beneficial (actually it seems to be very slightly detrimental),
so we should just get rid of it. This tallies with some of the comments
Shawn & I had for the default value of streamFileThreshold in the
review for I862afd4c:
http://egit.eclipse.org/r/#patch,sidebyside,2040,2,org.eclipse.jgit/src/org/eclipse/jgit/transport/IndexPack.java
The perf-test code is here: https://gist.github.com/735402
It's a bit scruffy but basically does 10 runs (in randomised order)
for each threshold size on various packfiles, waiting a second
between each pack-indexing to allow GC to catch up. I know it's not
perfect - proper perf testing is hard to do :-)
For convenience, provide an option to skip deletion of non-existing
files. Also add some tests for deletion methods in FileUtils.
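For example (a sketch; the exact flag names in FileUtils may differ):

  // With the skip-missing option, deleting a path that does not
  // exist is a no-op instead of an error.
  FileUtils.delete(new File(dir, "may-not-exist"),
      FileUtils.SKIP_MISSING | FileUtils.RECURSIVE);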
Change-Id: I33e355cfcdc19367d50208150ee49a4a06394890
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Sasa and I were reviewing this code today and Sasa pointed out we
can simplify the conflict logic, as the two cases (subtree and file)
are logically identical.
Change-Id: Ie0d40b2dd15605785eff453a846b1d20a2d021fc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Reviewed-by: Sasa Zivkov <zivkov@gmail.com>
Rebase would update the original HEAD to the wrong commit when
"skipping" the last commit after a merged commit.
Includes a test for the specific situation.
Change-Id: I087314b1834a3f11a4561f04ca5c21411d54d993
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
If a treewalk also walks over the index and the working tree, then
the IndexDiffFilter can be used, which works much faster than the
semantically equivalent ANY_DIFF filter. This is because this
filter can better avoid computing SHA-1 ids over the content of
working-tree files, which is very costly.
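Typical usage looks roughly like this (a sketch; the tree positions
are illustrative):

  // Walk HEAD, index and working tree, letting IndexDiffFilter avoid
  // hashing working-tree content wherever possible.
  TreeWalk tw = new TreeWalk(repository);
  tw.addTree(headTreeId);                                           // 0
  int dirCacheIndex = tw.addTree(new DirCacheIterator(dirCache));   // 1
  int workTreeIndex = tw.addTree(new FileTreeIterator(repository)); // 2
  tw.setRecursive(true);
  tw.setFilter(new IndexDiffFilter(dirCacheIndex, workTreeIndex));
  while (tw.next()) {
    // only paths that actually differ are reached here
  }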
This fix will significantly improve the performance of e.g.
EGit's commit dialog.
Change-Id: I2a51816f4ed9df2900c6307a54cd09f50004266f
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Philipp Thun <philipp.thun@sap.com>
For --continue, the Rebase command asserts that there are no unmerged
paths in the current repository. Then it checks if a commit is needed.
If yes, the commit message and author are taken from the author_script
and message files, respectively, and a commit is performed before the
next step is applied.
For --skip, the workspace is reset to the current HEAD before applying
the next step.
Includes some tests and a refactoring that extracts Strings in the
code into constants.
Change-Id: I72d9968535727046e737ec20e23239fe79976179
Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Instead of setting a boolean when a difference record is found, return
false from diff() only if all of the collections are empty. When all
of them are empty, no difference was found.
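In other words, roughly (a sketch; the getter names are assumed from
the usual result collections):

  // Report "no difference" only when every result collection is empty.
  boolean diff() {
    // (the tree walk that fills the collections runs here)
    return !(getAdded().isEmpty() && getChanged().isEmpty()
        && getRemoved().isEmpty() && getMissing().isEmpty()
        && getModified().isEmpty() && getUntracked().isEmpty());
  }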
Change-Id: I555fef37adb764ce253481751071c53ad12cf416
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
isModified() is more efficient because it can skip over files that
are stat clean, without needing to scan them.
This is useful to efficiently work on paths that were already staged
and thus differ between HEAD and the index, but not between the index
and the working tree.
Change-Id: I4418202e612f0571974e0898050d987c6c280966
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When comparing the ObjectIds for two tree entries it is faster
to use raw buffer compares than to allocate ObjectIds and
then perform equals on their contents.
However, this also needs to consider the raw modes. It is possible
for a path to change modes but not ObjectId (e.g. making a file
executable), and in this case it is still a staged change to report
back to the caller.
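Roughly (a sketch, with tree positions 0 and 1 standing in for the
two iterators being compared):

  // Compare ids through the iterators' raw buffers and also compare
  // the raw modes, so a mode-only change is still reported as staged.
  boolean same = tw.idEqual(0, 1) && tw.getRawMode(0) == tw.getRawMode(1);
  if (!same) {
    // report a staged change for tw.getPathString()
  }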
Change-Id: I1a267254c04b3273a97f63c71d1e6718cd9d2fa8
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If the caller really needs the list of files that are flagged as
assume-unchanged (aka assume-valid in the DirCache), we should give
them the complete list and not just those that we wrongly identified
as being modified during diff().
This change is necessary because diff() is slightly broken and is
discovering differences on files that it shouldn't have considered.
Change-Id: Ibe464c1a0e51c19dc287a4bc5348b7b07f4d840b
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The TreeWalk is configured to be recursive, which means subtrees are
never presented to the application. Therefore the working tree file
mode can never be a subtree/subdirectory at this point in the code.
Change-Id: Ie842ddc147957d09205c0d2ce87b25c566862fd9
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Instead of asking the individual iterators for their path string, use
the TreeWalk's generic getPathString() method. It's just as fast
because it uses the path of the current matching iterator.
Change-Id: I9b827fbbafce1c78f09d5527cdc64fbe9022a16e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We add either 3 or 4 filters. If we are adding only 3 filters,
allocating the array for 4 isn't a huge waste of memory, and it
does simplify our code.
Change-Id: I7df29b414f6d5cfcf533edb1405083e6fcec32cf
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
To improve runtime performance, cache the WorkingTreeOptions inside
of the Config object using the Config.SectionParser API; this allows
the WorkingTreeOptions to be accessed more efficiently whenever a
FileTreeIterator is constructed for the Repository.
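The cached lookup then becomes roughly (a sketch, assuming the usual
SectionParser key pattern):

  // The parsed options are cached inside the Config instance, so
  // repeated lookups do not re-parse the core.* settings.
  WorkingTreeOptions opts =
      repository.getConfig().get(WorkingTreeOptions.KEY);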
Instead of passing the filemode handling option into isModified(),
the WorkingTreeIterator should always honor whatever setting has
been configured in this repository, as defined by its own copy of
the WorkingTreeOptions. This simplifies all of the callers as they
no longer need to lookup core.filemode on their own.
A few locations were changed from always using a hardcoded "true"
on the file mode to passing what is actually configured in the
repository. This is a behavior change, but it corrects what should
be considered bugs, as the core.filemode variable wasn't always
being used.
Change-Id: Idb176736fa0dc97af372f1d652a94ecc72fb457c
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When indexing large blobs that are stored whole (non-delta form),
avoid allocating the entire blob in memory and instead stream it
through the SHA-1 checksum computation. This reduces the size
of memory required by IndexPack when processing very big blobs,
such as a 500 MiB incompressible binary.
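The hashing itself is plain incremental SHA-1 over the object header
and content, roughly (a generic sketch, not the IndexPack code; 'size'
and 'in' stand for the blob's inflated size and content stream):

  // Hash "blob <size>\0" followed by the content, reading the stream
  // in readBuffer-sized chunks instead of loading the blob whole.
  MessageDigest md = MessageDigest.getInstance("SHA-1");
  md.update(("blob " + size + "\0").getBytes("US-ASCII"));
  byte[] readBuffer = new byte[8192];
  int n;
  while ((n = in.read(readBuffer)) != -1)
    md.update(readBuffer, 0, n);
  byte[] objectId = md.digest();   // the blob's SHA-1 name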
If the large blob already exists in the local repository, its
contents need to be compared byte-for-byte after the entire pack
has been indexed, to ensure there isn't an unexpected SHA-1 collision
which may result in later data corruption. This compare is performed
as a streaming compare, again avoiding the large object allocation.
This change doesn't improve on memory utilization for large objects
stored as deltas. The change also doesn't improve handling for
any large commits, trees or annotated tags. There isn't much to
be done here for those objects, because they need to be passed down
to the ObjectChecker as a byte[]. Fortunately it isn't common for
these object types to be that large.
Bug: 312868
Change-Id: I862afd4cb78013ee033d4ec68c067b1774a05be8
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
CC: Roberto Tyley <roberto.tyley@guardian.co.uk>
Some method parameters in WorkingTreeIterator are never used. Remove
them. Especially the removal of the FS parameter in isModified()
simplifies upcoming performance optimizations.
Change-Id: I7c449589283a4a6b6e23f2586cd784febdca8bcd
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
It's confusing that a new TreeWalk() needs to have reset() invoked
on it before addTree(). This is a historical accident caused by
how TreeWalk was abused within ObjectWalk.
Drop the initial empty tree from the TreeWalk and thus remove a
number of pointless reset() operations from unit tests and some of
the internal JGit code.
Existing application code that is still calling reset() will simply
incur a few unnecessary field assignments, but its authors should
consider cleaning up their code in the future.
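After this change the typical setup is simply (a sketch):

  // Trees can be added right away; no initial reset() is needed.
  TreeWalk tw = new TreeWalk(repository);
  tw.addTree(commit.getTree());
  tw.addTree(new FileTreeIterator(repository));
  tw.setRecursive(true);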
Change-Id: I434e94ffa43491019e7dff52ca420a4d2245f48b
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>