If a tree is visited during pack and filtered out with tree:<depth>, we
may need to include it if it is visited again at a lower depth.
Until now we revisited it no matter what the depth was. Now, avoid
revisiting it if it has already been visited at a lower or equal depth.
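A minimal sketch of the bookkeeping (illustrative only, not the actual
walker code; shouldVisit is a hypothetical helper):

  // Remember the shallowest depth at which each tree has been reached and
  // only revisit when the new visit is strictly shallower.
  private final Map<ObjectId, Integer> leastDepth = new HashMap<>();

  private boolean shouldVisit(ObjectId tree, int depth) {
    Integer seen = leastDepth.get(tree);
    if (seen != null && seen <= depth) {
      return false; // already visited at a lower or equal depth
    }
    leastDepth.put(tree, depth);
    return true;
  }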
Change-Id: I68cc1d08f1999a8336684a05fe16e7ae51898866
Signed-off-by: Matthew DeVore <matvore@gmail.com>
If we are traversing a tree which is too deep, there is no need to
traverse its children. Skipping the children is much faster than
traversing the possibly thousands of objects which are directly or
indirectly referenced by the tree.
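Schematically (Tree, subtrees() and markIncluded() are illustrative
names, not JGit API):

  void traverse(Tree tree, int depth, int maxDepth) {
    if (depth > maxDepth) {
      return; // too deep: neither this tree nor its children can be included
    }
    markIncluded(tree);
    for (Tree child : tree.subtrees()) {
      traverse(child, depth + 1, maxDepth);
    }
  }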
Change-Id: I6d68cc1d35da48e3288b9cc80356a281ab36863d
Signed-off-by: Matthew DeVore <matvore@gmail.com>
This is used when fetching, and in particular to populate a partial
clone or a virtual file system cache as the user navigates. With this,
a client can pre-fetch a few directories deeper than only the current
directory.
A depth of 0 omits all trees, and is useful if you only want to fetch
the commits of a repository, or just a single tree or blob object.
A depth of 1 fetches only the root tree of each fetched commit. A depth
of 2 fetches the root tree plus all blobs and trees directly referenced
by it. A depth of 3 gets one more level, and so on. Regardless of the
depth, a blob or tree that is explicitly marked as wanted is never
filtered out.
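The resulting inclusion rule, roughly (a sketch, not the actual filter
code):

  // Root trees of fetched commits are at depth 1, their direct children
  // at depth 2, and so on. Explicitly wanted objects are never filtered.
  static boolean include(int depth, int maxDepth, boolean explicitlyWanted) {
    return explicitlyWanted || depth <= maxDepth;
  }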
Bitmaps are disabled when this filter is used.
This implementation is quite slow because it iterates over all omitted
objects rather than skipping them. This will be addressed in follow-up
commits.
Change-Id: Ic312fee22d60e32cfcad59da56980e90ae2cae6a
Signed-off-by: Matthew DeVore <matvore@gmail.com>
Replace simple uses of Iterator with a corresponding for-loop.
Also add missing braces on loops as necessary.
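For example (schematic; refs and process are placeholders):

  // before
  Iterator<Ref> it = refs.iterator();
  while (it.hasNext()) {
    Ref ref = it.next();
    process(ref);
  }

  // after
  for (Ref ref : refs) {
    process(ref);
  }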
Change-Id: I708d82acdf194787e3353699c07244c5ac3de189
Signed-off-by: Carsten Hammer <carsten.hammer@t-online.de>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Test that an exception is raised for an invalid group header:
[group "foo" ]
foo = bar
i.e. where there is a space between the group subsection name
and the closing ']'.
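A sketch of such a test (the actual test may be structured differently):

  @Test
  public void invalidGroupHeaderWithTrailingSpace() {
    try {
      new Config().fromText("[group \"foo\" ]\nfoo = bar\n");
      fail("expected ConfigInvalidException");
    } catch (ConfigInvalidException e) {
      // expected: space between subsection name and closing ']'
    }
  }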
Change-Id: I8933ae100b77634b0afb66bb8aa43d24c955799e
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Do not reload packfiles when their associated FileSnapshot is not
modified on disk compared to the one currently stored in memory.
Fix the regression introduced by fef78212 which, in conjunction with
core.trustfolderstat = false, caused any lookup of objects inside
the pack list to loop forever when the object was not found in the pack
list.
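In essence the guard is (sketch; the actual change is in the pack list
scanning logic):

  // Rescan the pack directory only if the snapshot taken at the last
  // read no longer matches what is on disk.
  if (!snapshot.isModified(packDirectory)) {
    return; // nothing changed on disk, keep the in-memory pack list
  }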
Bug: 546190
Change-Id: I38d752ebe47cefc3299740aeba319a2641f19391
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Add resolveTipSha1, an inverse of exactRef(String ...), to RefDatabase
and provide a default implementation that runs in O(n) time where n is
the number of refs. For RefTable, provide an implementation that runs
in O(log(n)) time.
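The O(n) default is essentially a linear scan (sketch; helper names are
illustrative):

  // Collect every ref whose tip points at the given object id.
  Set<Ref> tips = new HashSet<>();
  for (Ref ref : getRefs()) { // all refs of the database
    if (id.equals(ref.getObjectId())) {
      tips.add(ref);
    }
  }
  return tips;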
[ifrade@google.com: with tests in InMemoryRepositoryTest to exercise
the reftable code path, too]
Change-Id: I2811ccd0339cdc1c74b42cce2ea003f07a2ce9e1
Signed-off-by: Patrick Hiesel <hiesel@google.com>
Signed-off-by: Ivan Frade <ifrade@google.com>
Replace usage with the recommended setFilterSpec(FilterSpec).
Change-Id: Icc528d175f25234eeb2daa6b4c29a67a7a6d1e0a
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
The new references created during peeling do not receive the update
index. In other words, the update index of a reference (if set) is lost
when the reference is peeled.
Pass the update index through to the newly created references.
Tested via InMemoryRepository, which uses DfsReftableDatabase.
Change-Id: I7ff7c737a9c3366fdec296a4d9b2e51d10227957
Signed-off-by: Ivan Frade <ifrade@google.com>
This increases type-safety and is ground work for support of the
"tree:<depth>" filter.
Change-Id: Id19eacdcdaddb9132064c642f6d554b1060efe9f
Signed-off-by: Matthew DeVore <matvore@gmail.com>
HashMap<String, Ref> has a higher memory overhead for storing refs than
RefMap. Use RefMap instead.
Change-Id: I3fb4616135dacf687cc3bc2b473effc66ccef5e6
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
The prune method deleted empty fanout directories only when loose
unreferenced objects were pruned, but not when loose objects had been
moved into a new pack file.
Change-Id: Ia068f4914c54d9cf9f40b75e8ea50759402b5000
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Due to finite filesystem timestamp resolution, the last modified
timestamp of a file cannot detect changes which happened in the
immediate past (less than one filesystem timer tick ago).
Also read and consider the file size, so that a differing file size can
help to detect file changes more accurately without reading the file
content.
Read the file attributes in bulk to avoid multiple stat calls.
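With java.nio, size and modification time come from a single attribute
read (sketch of the idea; the real code lives in the snapshot handling):

  // One call to Files.readAttributes returns both mtime and size.
  BasicFileAttributes attrs =
      Files.readAttributes(path, BasicFileAttributes.class);
  long lastModified = attrs.lastModifiedTime().toMillis();
  long size = attrs.size();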
Change-Id: I974288fff78ac78c52245d9218b5639603f67a46
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This isolates the test from the concrete system it's running on.
SshSessionFactory also reads the user through SystemReader.
Change-Id: I1c796aa1c498fe3967456d8589e6be0a82ab8f44
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
With text=auto or core.autocrlf=true, git does not normalize upon
check-in if the file in the index already contains CR/LFs. The
documentation says: "When text is set to "auto", the path is
marked for automatic end-of-line conversion. If Git decides that
the content is text, its line endings are converted to LF on
checkin. When the file has been committed with CRLF, no conversion
is done."[1]
Implement the last bit as in canonical git: check the blob in the
index for CR/LFs. For very large files, we check only the first 8000
bytes, like RawText.isBinary() and AutoLFInputStream do.
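The check boils down to scanning a bounded prefix of the indexed blob
(sketch, not the actual implementation):

  // Look for a CR/LF pair in the first 8000 bytes of the blob content.
  static boolean hasCrLf(byte[] raw) {
    int n = Math.min(raw.length, 8000);
    for (int i = 1; i < n; i++) {
      if (raw[i] == '\n' && raw[i - 1] == '\r') {
        return true;
      }
    }
    return false;
  }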
In Auto(CR)LFInputStream, ensure that the buffer is filled as much as
possible for the isBinary() check.
Regarding these content checks, there are a number of inconsistencies:
* Canonical git considers files containing lone CRs as binary.
* RawText checks the first 8000 bytes.
* Auto(CR)LFInputStream checks the first 8096 (not 8192!) bytes.
None of these are changed with this commit. It appears that canonical
git will check the whole blob, not just the first 8k bytes. Also
note: the check for CR/LF text won't work with LFS (neither in JGit
nor in git) since the blob data is not run through the smudge filter.
Cf. [2].
Two tests in AddCommandTest actually tested that normalization was
done even if the file was already committed with CR/LF. These tests
had to be adapted. I find the git documentation unclear about the
case where core.autocrlf=input, but from [3] it looks as if this
non-normalization also applies in this case.
Add new tests in CommitCommandTest testing this for the case where
the index entry is for a merge conflict. In this case, canonical git
uses the "ours" version.[4] Do the same.
[1] https://git-scm.com/docs/gitattributes
[2] https://github.com/git/git/blob/3434569fc/convert.c#L225
[3] https://github.com/git/git/blob/3434569fc/convert.c#L529
[4] https://github.com/git/git/blob/f2b6aa98b/read-cache.c#L3281
Bug: 470643
Change-Id: Ie7310539fbe6c737d78b1dcc29e34735d4616b88
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Deprecate the method in favor of setEncoding(Charset).
Update the only caller in the code base that was still using
the deprecated variant.
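The migration is a one-liner (schematic; the receiver object "builder"
is only a placeholder for illustration):

  // before: deprecated String-based variant
  builder.setEncoding("UTF-8");
  // after: pass a java.nio.charset.Charset
  builder.setEncoding(StandardCharsets.UTF_8);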
Change-Id: I6357f2d0c727007013c72e9d5b7c72a3f5f3f2b1
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
The bug caused the pack to be 12 bytes short with a cold cache. Also
add a test for the copyPackAsIs method.
Change-Id: Idf8fb0e50d1215245d4b032e2e00df4b218c115f
Signed-off-by: Minh Thai <mthai@google.com>
This method tried to iterate over spurious files which may exist in the
.git/refs folder, e.g. on Mac a .DS_Store file may have been created
there by inspecting the folder using the Finder application. This led to
a NotDirectoryException when deleteEmptyRefsFolders tried to create an
iterator for such a file entry. Skip files contained in the refs folder
to ensure the method only tries to iterate over contained folders, not
files.
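In essence the iteration now only descends into directory entries
(sketch using java.nio; the actual code may differ):

  try (DirectoryStream<Path> dirs =
      Files.newDirectoryStream(refsDir, Files::isDirectory)) {
    for (Path dir : dirs) {
      // only real folders such as refs/heads or refs/tags are visited;
      // stray files like .DS_Store are skipped
      deleteEmptyFolder(dir); // illustrative helper
    }
  }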
Change-Id: I5f31e733072a35db1e93908a9c69a8891ae5c206
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Single-branch clone should be able to clone a single tag. Enhance
CloneCommand to also accept full tag refs in setBranchesToClone().
Make sure we also include fetch ref specs for tags in the fetch command.
This mimics the behavior of native git's single-branch clone:
  git clone --branch <tag> --single-branch <URI>
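With this change a JGit caller can do the equivalent (sketch; URI,
directory and tag name are placeholders):

  Git git = Git.cloneRepository()
      .setURI("https://example.com/repo.git")
      .setDirectory(new File("repo"))
      .setBranchesToClone(Collections.singletonList("refs/tags/v1.0"))
      .setBranch("refs/tags/v1.0")
      .call();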
Bug: 542611
Change-Id: I285cf043751d9b0ba71258ee8214c0e5d1191428
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>