When the all parameter is set, all modified and deleted files
are staged prior to commit.
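A rough sketch of how this could look through the porcelain API
(assuming the flag is exposed as setAll on the Commit command):

    // Sketch: stage modified/deleted tracked files as part of the commit,
    // similar to `git commit -a` (assumes CommitCommand#setAll exists).
    Git git = new Git(repository);
    git.commit()
        .setAll(true)
        .setMessage("Commit tracked modifications and deletions")
        .call();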
Change-Id: Id23bc25730fcdd151386cd495a7cdc0935cbc00b
Signed-off-by: Stefan Lay <stefan.lay@sap.com>
This change is mainly done for a subsequent commit
which will introduce the "all" parameter to the Commit
command.
Bug: 318439
Change-Id: I85a8a76097d0197ef689a289288ba82addb92fc9
Signed-off-by: Stefan Lay <stefan.lay@sap.com>
A utility method which was in RacyGitTests has been moved to
RepositoryTestCase. The javadoc has also been improved.
This method allows waiting long enough for the
filesystem timer to advance. This is useful when it has to
be guaranteed that two file modifications get different
modification timestamps.
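A minimal sketch of such a wait, with a hypothetical helper name (the
moved method itself is not shown here):

    // Hypothetical sketch: loop until the filesystem hands out a newer
    // last-modified timestamp than the given probe file currently carries.
    static void waitForFilesystemTimerAdvance(File probe)
            throws IOException, InterruptedException {
        long old = probe.lastModified();
        File tmp = File.createTempFile("fsTick", ".tmp", probe.getParentFile());
        try {
            long now = tmp.lastModified();
            while (now <= old) {
                Thread.sleep(10);
                tmp.setLastModified(System.currentTimeMillis());
                now = tmp.lastModified();
            }
        } finally {
            tmp.delete();
        }
    }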
Change-Id: I2ebd7cd7818feba6acffb3f835101d8fd281bd5a
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
The indexState() method was enhanced to be more configurable. A bitmask
controls which of the optional parts are reported. Data about
the worktree is no longer reported by this method, which makes the
interface cleaner for users who only want to test the state of the
index.
This was done because the previous version always reported so much
additional data that it was hard to write good assertions against it.
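For illustration only, an assertion could then select just the parts it
needs via the bitmask (the flag name and the expected string used here
are assumptions, not the real constants):

    // Sketch: report path and content, but no worktree or timestamp data.
    assertEquals("[a.txt, mode:100644, content:content of a]",
            indexState(CONTENT));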
Change-Id: I9b481e97f8fcf3fcdbb785b801dc07bfa85dcc33
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Stefan Lay <stefan.lay@sap.com>
getEntryLastModified returned wrong values for
timestamps on boundaries.
Change-Id: I0457ff030946fda7ecc70671bc65db02a8c206ee
Signed-off-by: Stefan Lay <stefan.lay@sap.com>
It's only available in Java 6, and JGit still supports Java 5.
Change-Id: I6c8fac06106ffe20ac8b4b595f34e027f2d29cc2
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
In order to test racy git situations we have to be able to control
the last-modification timestamps of the filesystem. Since we already
access the modification timestamps of files through an abstraction
(the WorkingTreeIterator) I add a new implementation of this iterator
which allows mapping timestamp ranges to a single constant timestamp.
For users of this iterator it looks as if all files in that range
have been modified at exactly the same time.
With the help of this iterator a test has been written which
checks for racy git handling (smudging, unsmudging, dirty detection).
Additionally add a method to RepositoryTestCase which encodes the
current index state in one String. This should include info about
paths, file/index modtime, smudge state and clean state. Make
sure timestamps are presented in a way that it is easy to
write assertions against these strings (no concrete milliseconds,
but t0, t1, ...).
These two topics depend circularly on each other; that's why they have
been squashed into one commit.
Change-Id: I115c3f2f20fca9b481830bdc6b9d1ade2c3abdcf
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
This refactoring permits applications to configure global per-process
settings for all packing and easily pass it through to per-request
PackWriters, ensuring that the process configuration overrides the
repository-specific settings.
For example this might help in a daemon environment where the server
wants to cap the resources used to serve a dynamic upload pack
request, even though the repository's own pack.* settings might be
configured to be more aggressive. This allows fast but less
bandwidth-efficient serving of clients, while still retaining good
compression through a cron-managed `git gc`.
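A rough sketch of the intended usage, assuming the configuration object
is PackConfig and that PackWriter accepts it together with an
ObjectReader:

    // Sketch: one process-wide packing configuration reused per request.
    PackConfig pc = new PackConfig();        // process-level defaults
    pc.setDeltaSearchWindowSize(10);
    pc.setCompressionLevel(1);               // fast, less bandwidth-efficient
    PackWriter pw = new PackWriter(pc, repository.newObjectReader());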
Change-Id: I58cc5e01b48924b1a99f79aa96c8150cdfc50846
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Add a method isDirectoryFileConflict() to NameConflictTreeWalk which
tells whether the current path is part of a directory/file conflict.
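For example (a sketch driving the walk the usual way):

    // Sketch: flag directory/file conflicts while walking two trees.
    NameConflictTreeWalk tw = new NameConflictTreeWalk(repository);
    tw.addTree(ourTreeId);
    tw.addTree(theirTreeId);
    while (tw.next()) {
        if (tw.isDirectoryFileConflict())
            System.out.println("D/F conflict at " + tw.getPathString());
    }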
Change-Id: Iffcc7090aaec743dd6f3fd1a333cac96c587ae5d
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
File pairs that are very dissimilar during a diff were not being
broken apart into their constituent ADD/DELETE pairs. This leads to
sub-optimal rename detection. Take, for example, this situation:
A file exists at src/a.txt containing "foo". A user renames src/a.txt
to src/b.txt, then adds a new src/a.txt containing "bar".
Even though the old a.txt and the new b.txt are identical, the
rename detection algorithm would not detect it as a rename since
it was already paired in a MODIFY. I added code to split all
MODIFYs below a certain score into their constituent ADD/DELETE
pairs. This allows situations like the one I described above to be
more correctly handled.
Change-Id: I22c04b70581f206bbc68c4cd1ee87a1f663b418e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Add methods to the Repository class which write into MERGE_HEAD
and MERGE_MSG files. Since we have the read methods in the same
class this seems to be the right place.
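A sketch of what calling these writers might look like (the method
names mirror the existing readers and are assumptions here):

    // Sketch: record the merge state after a conflicting merge
    // (method names are assumptions mirroring the existing readers).
    repository.writeMergeHeads(Collections.singletonList(theirHeadId));
    repository.writeMergeCommitMsg("Merge branch 'topic'\n\nConflicts:\n\tsrc/a.txt\n");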
Change-Id: I5dd65306ceb06e008fcc71b37ca3a649632ba462
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
LockFile.commit fails if another thread concurrently reads
the base file. The problem is fixed by retrying the rename
operation if it fails.
Change-Id: I6bb76ea7f2e6e90e3ddc45f9dd4d69bd1b6fa1eb
Bug: 308506
Signed-off-by: Jens Baumgart <jens.baumgart@sap.com>
There were some broken links, incorrect uses of @value, an invalid
tag and an outdated comment.
Change-Id: I22886bcc869a4b62bd606ebed40669f7b4723664
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The working tree iterator has perfect knowledge of the path structure
as well as immediate information about whether or not an ignore file
even exists at this level. We can exploit that to simplify the
logic and running time for testing ignored file status by pushing
all of the checks down into the iterator itself.
Change-Id: I22ff534853e8c5672cc5c2d9444aeb14e294070e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Charley Wang <chwang@redhat.com>
CC: Chris Aniszczyk <caniszczyk@gmail.com>
CC: Stefan Lay <stefan.lay@sap.com>
CC: Matthias Sohn <matthias.sohn@sap.com>
If we have two adds of the same object but no deletes the detector
threw an NPE because the entry that came back from the deleted map
was null (no matching objects). In this case we need to put all the
adds back onto the list of left-over additions since they did
not match a delete.
Change-Id: Ie68fbe7426b4dc0cb571a08911c7adbffff755d5
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Jeffrey Schumacher <jeffschu@google.com>
During the exact rename detection phase in RenameDetector, ties were
resolved on a first-found basis. I added support for file path based
tie breaking during that phase. Basically, there are four situations
that have to be handled:
One add matching one delete:
In this simple case, we pair them as a rename.
One add matching many deletes:
Find the delete whose path most closely matches the add, and
pair them as a rename.
Many adds matching one delete:
Similar to the above case, we find the add that most closely
matches the delete, and pair them as a rename. The other adds
are marked as copies of the delete.
Many adds matching many deletes:
Build a scoring matrix similar to the one used for content-
based matching, scoring instead by file path. Some of the
utility functions in SimilarityRenameDetector are used in
this case, as we use the same encoding scheme. Once the
matrix is built, scan it for the best matches, marking them
as renames. The rest are marked as copies.
I don't particularly like the idea of using utility functions right
out of SimilarityRenameDetector, but it works for the moment. A later
commit will likely refactor this into a common utility class, as well
as bringing exact rename detection out of RenameDetector and into a
separate class, much like SimilarityRenameDetector.
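For illustration only, a path-based tie-break score could be as crude
as the length of the common prefix plus the common suffix of the two
paths (this helper is hypothetical and not the code added by this
change):

    // Hypothetical scoring helper: longer shared prefix/suffix => better match.
    static int pathScore(String deletedPath, String addedPath) {
        int p = 0;
        while (p < deletedPath.length() && p < addedPath.length()
                && deletedPath.charAt(p) == addedPath.charAt(p))
            p++;
        int s = 0;
        while (s < deletedPath.length() - p && s < addedPath.length() - p
                && deletedPath.charAt(deletedPath.length() - 1 - s)
                    == addedPath.charAt(addedPath.length() - 1 - s))
            s++;
        return p + s;
    }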
Change-Id: I1fb08390aebdcbf20d049aecf402a36506e55611
When the path is prefixed with ~ the URI parser treated this
as /~. Strip the / if the next character is the tilde.
Bug: 307017
Change-Id: I58203e5617956b46d83e8987d1f8042beddffac3
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
The new Add command adds files to the Git Index.
It uses the DirCache to access the git index. It
also works in case of an existing conflict.
Fileglobs (e.g. *.c) are not yet supported.
The new Add command does add ignored files because
there is no gitignore support in JGit yet.
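For example, through the porcelain API (a sketch):

    // Sketch: add a single file to the index via the new Add command.
    Git git = new Git(repository);
    git.add()
        .addFilepattern("src/a.txt")   // globs such as *.c are not supported yet
        .call();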
Bug: 318440
Change-Id: If16fdd4443e46b27361c2a18ed8f51668af5d9ff
Signed-off-by: Stefan Lay <stefan.lay@sap.com>
After refactoring ReadTreeTest the tests failed for filesystems
with coarse modification time granularity. This is fixed by
explicitly telling the repo to reread the index after we build
a new index.
Additionally the test testDirectoryFileSimple was simplified
by using buildTree() instead of misusing GitIndex to construct
trees.
Change-Id: I20d2f097491e4cc8c657a696beabc7026b485017
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This patch adds ignore compatibility to JGit. It encompasses
exclude files as well as .gitignore. It uses TreeWalk and
FileTreeIterator to find nodes and parses .gitignore
files when required. The patch includes a simple cache that
can be used to save results and avoid excessive gitignore
parsing.
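A sketch of how a parsed ignore file might be queried (class and method
names are assumptions about the new ignore support):

    // Sketch: parse a .gitignore and test a path against it.
    IgnoreNode node = new IgnoreNode();
    node.parse(new FileInputStream(new File(workTree, ".gitignore")));
    IgnoreNode.MatchResult result = node.isIgnored("target/classes", true);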
CQ: 4302
Bug: 303925
Change-Id: Iebd7e5bb534accca4bf00d25bbc1f561d7cad11b
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
Signed-off-by: Stefan Lay <stefan.lay@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
I added test cases to cover the majority of the code. It's not 100%
coverage yet, but the remaining bits are small.
Change-Id: Ib534c8e94b13358b8b22cf54e2ff84132bae6d14
The scoring method was not taking into account the similarity of
the file paths and file names. I changed the metric so that it is 99%
based on content (which used to be 100% of the old metric), and 1%
based on path similarity. Of that 1%, half (.5% of the total final
score) is based on the actual file names (e.g. "foo.java"), and half
on the directory (e.g. "src/com/foo/bar/").
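Expressed as arithmetic (a sketch with hypothetical variable names, all
scores on the same scale):

    // Sketch: 99% content similarity, 0.5% file name, 0.5% directory.
    int combined = (int) (0.99 * contentScore
            + 0.005 * fileNameScore
            + 0.005 * directoryScore);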
Change-Id: I94f0c23bf6413c491b10d5625f6ad7d2ecfb4def
DeltaIndex is a simple pack style delta generator. The function works
by creating a compact index of a source buffer's blocks, and then
walking a sliding window along a desired result buffer, searching for
the window in the index. When a match is found, the window is
stretched to the longest possible length that is common with the
source buffer, and a copy instruction is created.
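A minimal usage sketch, assuming DeltaIndex is constructed over the
source buffer and exposes an encode method that writes the delta for a
result buffer:

    // Sketch: index the source once, then emit a delta transforming src into res.
    DeltaIndex index = new DeltaIndex(src);               // byte[] source buffer
    ByteArrayOutputStream delta = new ByteArrayOutputStream();
    index.encode(delta, res);                             // byte[] result buffer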
Rabin's polynomial hash function is used to compute the hash for a
block, permitting efficient sliding of the window in single byte
increments. The update function to slide one byte originated from
David Mazieres' work in LBFS, and our implementation of the update
step was certainly inspired by the initial work Geert Bosch proposed
for C Git in http://marc.info/?l=git&m=114565424620771&w=2.
To ensure the encoder runs in linear time with respect to the size of
the two input buffers (source and result), the maximum number of
blocks that can share the same position in the index's hashtable is
capped at a constant number. This prevents bad inputs from causing
the encoder to run in quadratic time, but comes with a penalty of
creating a longer delta due to fewer considered copy positions.
Strange hackery is used to cap the amount of memory used by the index
to be no more than 12 bytes for every 16 bytes of source buffer, no
matter what the JVM per-object overhead is. This permits an index to
always be no larger than 1.75x the source buffer length, which is an
important feature to support large windows of candidates to match
against while packing. Here the strange hackery is nothing more than
a manually managed chained hashtable, where pointers are array indexes
into storage arrays rather than object references.
Computation of the hash function for a single fixed sized block is
done through an unrolled loop, where the first 4 iterations have been
manually reduced down to eliminate unnecessary instructions. The
pattern is derived from ObjectId.equals(byte[], int, byte[], int),
where we have unrolled the loop required to compare two 20 byte
arrays. Hours of testing with the Sun 1.6 JRE concluded that the
non-obvious "foo[idx + 1]" style of reference is faster than
"foo[idx++]", and so that is what we use here during hashing.
Change-Id: If9fb2a1524361bc701405920560d8ae752221768
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If the heap limit was set to something smaller than 8 KiB, we were
still allocating the full 8 KiB block size, and accepting input up to
the amount we had allocated. Instead, actually put a hard cap at
the configured limit.
Change-Id: Id1da26fde2102e76510b1da4ede8493928a981cc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Added support for converting DiffEntrys to FileHeaders. FileHeaders
are DiffEntrys with a buffer containing the diff output as well as
a list of HunkHeaders. The HunkHeaders contain EditLists. The
createFileHeader(DiffEntry) method in DiffFormatter performs a Myers
Diff on the files referred to by the DiffEntry, then puts the returned
EditList into a single HunkHeader, which is then put into the
FileHeader to be returned. It also generates the appropriate diff
header and puts it into the FileHeader's buffer. The rest of the diff
output, which would normally be parsed to generate the HunkHeaders,
is not generated. In fact, the purpose of this method is to avoid
the costly diff output generation and parsing normally required to
create a FileHeader.
Change-Id: I7d8b18c0f6c85e3d02ad58995d3d231e69af5887
ReadTreeTest was hardcoded to test WorkDirCheckout. Since we want
alternative checkout implementations (especially DirCacheCheckout)
this class has been refactored so that the tests can be reused
to test other implementations.
The following changes have been done:
- abstract methods for checkout and prescanTwoTrees have been
introduced. Parameters are only the two trees. As index we
will implicitly use the current index of the repo.
- whenever tests needed a manipulated index before checkout
and prescanTwoTrees it was ensured that the correct index was
persisted (previously we could use non-persisted instances of GitIndex
passed as parameters to checkout and prescanTwoTrees)
- abstract methods for getting updated, conflicting, removed entries
resulting from the last checkout, prescanTwoTrees have been introduced
- an implementation for all these abstract methods using WorkDirCheckout
has been added
- a method to assert a certain state of the index and the working tree has
been added
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Change-Id: Icf177cf8043487169a32ddd72b6f8f9246a433f7
We want to get rid of these APIs, because they don't perform as well
as DirCache/TreeWalk, or don't offer nearly as many features.
Bug: 319145
Change-Id: I2b28f9cddc36482e1ad42d53e86e9d6461ba3bfc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We need to validate the stream state after the InflaterInputStream
thinks the stream is done. Git expects a higher level of service from
the Inflater than the InflaterInputStream usually gives: we need to
ensure the embedded CRC is valid, and that there isn't trailing
garbage at the end of the file.
Change-Id: I1c9642a82dbd76b69e607dceccf8b85dc869a3c1
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Passing around the OutputStream and the Repository is crazy. Instead
put the stream in the constructor, since this formatter exists only to
output to the stream, and put the repository as a member variable that
can be optionally set.
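Usage then looks roughly like this (a sketch; the exact format
overloads may differ):

    // Sketch: the output stream is fixed at construction, the repository
    // is an optional member set afterwards.
    DiffFormatter fmt = new DiffFormatter(out);
    fmt.setRepository(repository);
    fmt.format(entries);   // e.g. a List<DiffEntry> computed elsewhere
    fmt.flush();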
Change-Id: I2bad012fee7f40dc1346700ebd19f1e048982878
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Content similarity based rename detection is performed only after
a linear time detection is performed using exact content match on
the ObjectIds. Any names which were paired up during that exact
match phase are excluded from the inexact similarity based rename,
which reduces the space that must be considered.
During rename detection two entries cannot be marked as a rename
if they are different types of files. This prevents a symlink from
being renamed to a regular file, even if their blob content appears
to be similar, or is identical.
Efficiently comparing two files is performed by building up two
hash indexes and hashing lines or short blocks from each file,
counting the number of bytes that each line or block represents.
Instead of using a standard java.util.HashMap, we use a custom
open hashing scheme similar to what we use in ObjectIdSubclassMap.
This permits us to have a very light-weight hash, with very little
memory overhead per cell stored.
As we only need two ints per record in the map (line/block key and
number of bytes), we collapse them into a single long inside of
a long array, making very efficient use of available memory when
we create the index table. We only need object headers for the
index structure itself, and the index table, but not per-cell.
This offers a massive space savings over using java.util.HashMap.
The score calculation is done by approximating how many bytes are
the same between the two inputs (which for a delta would be how much
is copied from the base into the result). The score is derived by
dividing the approximate number of bytes in common into the length
of the larger of the two input files.
Right now the SimilarityIndex table should average about 1/2 full,
which means we waste about 50% of our memory on empty entries
after we are done indexing a file and sort the table's contents.
If memory becomes an issue we could discard the table and copy all
records over to a new array that is properly sized.
Building the index requires O(M + N log N) time, where M is the
size of the input file in bytes, and N is the number of unique
lines/blocks in the file. The N log N time constraint comes
from the sort of the index table that is necessary to perform
linear time matching against another SimilarityIndex created for
a different file.
To actually perform the rename detection, a SxD matrix is created,
placing the sources (aka deletions) along one dimension and the
destinations (aka additions) along the other. A simple O(S x D)
loop examines every cell in this matrix.
A SimilarityIndex is built along the row and reused for each
column compare along that row, avoiding the costly index rebuild
at the row level. A future improvement would be to load a smaller
square matrix into SimilarityIndexes and process everything in that
sub-matrix before discarding the column dimension and moving down
to the next sub-matrix block along that same grid of rows.
An optional ProgressMonitor is permitted to be passed in, allowing
applications to see the progress of the detector as it works through
the matrix cells. This provides some indication of current status
for very long running renames.
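Put together, a caller might drive the detector roughly like this (a
sketch; the exact signatures are assumptions):

    // Sketch: feed raw ADD/DELETE/MODIFY entries in, get renames/copies out,
    // reporting progress through an optional ProgressMonitor.
    RenameDetector rd = new RenameDetector(repository);
    rd.addAll(changes);                                  // List<DiffEntry>
    List<DiffEntry> resolved = rd.compute(new TextProgressMonitor());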
The default line/block hash function used by the SimilarityIndex
may not be optimal, and may produce too many collisions. It is
borrowed from RawText's hash, which is used to quickly skip out of
a longer equality test if two lines have different hash codes.
We may need to refine this hash in the future, in order to minimize
the number of collisions we get on common source files.
Based on a handful of test commits in JGit (especially my own
recent rename repository refactoring series), this rename detector
produces output that is very close to C Git. The content similarity
scores are sometimes off by 1%, which is most probably caused by
our SimilarityIndex type using a different hash function than C
Git uses when it computes the delta size between any two objects
in the rename matrix.
Bug: 318504
Change-Id: I11dff969e8a2e4cf252636d857d2113053bdd9dc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
During code review, Alex raised a few comments about commit
532421d989 ("Refactor repository construction to builder class").
Due to the size of the related series we aren't going to go back
and rebase in something this minor, so resolve them as a follow-up
commit instead.
Change-Id: Ied52f7a8f7252743353c58d20bfc3ec498933e00
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We default this to 1 MiB for now, but we allow users to modify
it through the Repository's configuration file to be a different
value. A new repository listener is used to identify when the
setting has been updated and trigger a reconfiguration of any
active ObjectReaders.
To prevent a horrible explosion we cap core.streamFileThreshold
at no more than 1/4 of the maximum JVM heap size. We do this
because we need at least 2 byte arrays equal in size to the
stream threshold for the worst case delta inflation scenario,
and our host application probably also needs some amount of the
heap for their working set size.
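For example, a repository could raise the threshold in its
configuration file (the key is the one named above; size suffixes
follow the usual git config conventions):

    [core]
        streamFileThreshold = 5m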
Change-Id: I103b3a541dc970bbf1a6d92917a12c5a1ee34d6c
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Very large delta instruction streams, or deltas which use very large
base objects, are now streamed through as large objects rather than
being inflated into a byte array.
This isn't the most efficient way to access delta encoded content, as
we may need to rewind and reprocess the base object when there was a
block moved within the file, but it will at least prevent the JVM from
having its heap explode.
When streaming a delta we have an inflater open for each level in the
delta chain, to inflate the instruction set of the delta, as well as
an inflater for the base level object. The base object is buffered,
as is the top level delta requested by the application, but we do not
buffer the intermediate delta streams. This keeps memory usage lower,
so it's closer to 1024 bytes per level in the chain, without having an
adverse impact on raw throughput as the top-level buffer gets pushed
down to the lowest stream that has the next region.
Delta instructions transparently collapse here: if the top level does
not copy a region from its base, the base won't materialize that part
from its own base, etc. This allows us to avoid copying around a lot
of segments which have been deleted from the final version.
Change-Id: I724d45245cebb4bad2deeae7b896fc55b2dd49b3
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Similar to the loose object support, whole packed objects can
now be streamed back to the caller. The streaming is less
efficient as we copy the data from the cached window array
into the InflaterInputStream's internal buffer, then inflate
it there before returning to the application.
Like with unpacked objects, there is plenty of room for some
optimization, especially for the copyTo method, where we don't
necessarily need so much buffering to exist.
Change-Id: Ie23be81289e37e24b91d17b0891e47b9da988008
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The class is identical, but ObjectLoader.SmallObject is part of our
public API for storage implementations to build on top of.
Change-Id: I381a3953b14870b6d3d74a9c295769ace78869dc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Big loose objects can now be streamed if they are over the large
object size threshold. This prevents the JVM heap from exploding
with a very large byte array to hold the slurped file, and then
again with its uncompressed copy.
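Callers can check for the large-object case and stream instead of
slurping, roughly like this (a sketch assuming ObjectLoader exposes
isLarge and openStream):

    // Sketch: avoid materializing big blobs as a single byte[].
    ObjectLoader ldr = repository.open(blobId);
    if (ldr.isLarge()) {
        ObjectStream in = ldr.openStream();
        try {
            // consume the stream incrementally
        } finally {
            in.close();
        }
    } else {
        byte[] data = ldr.getCachedBytes();
    }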
We may have slightly slowed down the simple case for small
loose objects, as the loader no longer slurps the entire thing
and decompresses in memory. To try and keep good performance
for the very common small objects that are below 8 KiB in size,
buffers are set to 8 KiB, causing the reader to slurp most of the
file anyway. However the data has to be copied at least once,
from the BufferedInputStream into the InflaterInputStream.
New unit tests are supplied to get nearly 100% code coverage on the
unpacked code paths, for both standard and pack style loose objects.
We tested a fair chunk of the code elsewhere, but these new tests
are better isolated to the specific branches in the code path.
Change-Id: I87b764ab1b84225e9b5619a2a55fd8eaa640e1fe
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
JGit does not currently do rename detection during diffs. I added
a class that, given a TreeWalk to iterate over, can output a list
of DiffEntry's for that TreeWalk, taking into account renames. This
class only detects renames by SHA-1s. More complex rename detection,
along the lines of what C Git does, will be added later.
Change-Id: I93606ce15da70df6660651ec322ea50718dd7c04
Instead of creating the DirCache from a static factory method, use
an instance method on Repository, permitting the implementation to
override the method with a completely different type of DirCache
reading and writing. This would better support a repository in the
cloud strategy, or even just an in-memory unit test environment.
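Callers would then go through the repository instance rather than the
static factory, roughly (the method names here are assumptions):

    // Sketch: obtain the index through the Repository instance,
    // allowing alternative implementations to supply their own DirCache.
    DirCache read = repository.readDirCache();       // read-only snapshot
    DirCache locked = repository.lockDirCache();     // for updates
    try {
        // ... edit and write ...
    } finally {
        locked.unlock();
    }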
Change-Id: I6399894b12d6480c4b3ac84d10775dfd1b8d13e7
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Using a custom exception type makes it easier for an application
developer to understand why an exception was thrown out of a method
we declare. To remain compatible with existing callers, we still
extend off IllegalStateException.
Change-Id: Ideeef2399b11ca460a2dbb3cd80eb76aa0a025ba
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We don't actually need a Repository object here, just an ObjectReader
that can load content for us. So change the API to depend on that.
However, this breaks the asCommit and asTag legacy translation methods
on RevCommit and RevTag, so we still have to keep the Repository
inside of RevWalk for those two types. Hopefully we can drop those in
the future, and then drop the Repository off the RevWalk.
Change-Id: Iba983e48b663790061c43ae9ffbb77dfe6f4818e
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Added code to support ignoring leading, trailing, and changed
whitespace when performing a diff operation. I also added command
line options to Diff to enable the various whitespace ignoring
methods. These match the flags for git diff.
Change-Id: Ie56301aafad59ee3f0fe5de62719f5023cd702c8
We drop the "Object" suffix, because its pretty clear here that
we want to open an object, given that we pass in AnyObjectId as
the main parameter. We also fix the calling convention to throw
a MissingObjectException or IncorrectObjectTypeException, so that
callers don't have to do this error checking themselves.
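A caller can then rely on the reader for the error checking (a sketch;
releasing the reader follows the convention described elsewhere in this
series):

    // Sketch: open() throws MissingObjectException /
    // IncorrectObjectTypeException itself, so callers no longer re-check.
    ObjectReader reader = repository.newObjectReader();
    try {
        ObjectLoader ldr = reader.open(blobId, Constants.OBJ_BLOB);
        byte[] data = ldr.getCachedBytes();
    } finally {
        reader.release();
    }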
Change-Id: I72c43353cea8372278b032f5086d52082c1eee39
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Rather than taking the ProgressMonitor objects in our constructor and
carrying them around as instance fields, take them as arguments to the
actual time consuming operations we need to run.
Change-Id: I2b230d07e277de029b1061c807e67de5428cc1c4
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
JGit did not have support for skipping whitespace when comparing
lines in RawText objects. I added a subclass of RawText that skips
whitespace in its equals and hashCode methods. I used a subclass
rather than adding functionality into RawText so that performance
would not be impacted by extra logic.
This class only supports ignoring all whitespace. Others will follow
that allow other forms of whitespace ignoring.
Change-Id: Ic2f79e85215e48d3fd53ec1b4ad13373dd183a4a
The ObjectReader API demands that we release the reader when we are
done with it. PackWriter contains a reader, which it uses for the
entire packing session. Expose the release of the reader through
a release method on the writer.
This still doesn't address the RevWalk and TreeWalk users, who
don't correctly release their reader. But it's a small step in the
right direction.
Change-Id: I5cb0b5c1b432434a799fceb21b86479e09b84a0a
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Similar to what we did with the file code, move the pack writer
into its own package so the related classes and their package
private methods are hidden from the rest of the library.
Change-Id: Ic1b5c7c8c8d266e90c910d8d68dfc8e93586854f
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We no longer need an ObjectLoader to be lazy and try to delay
the materialization of the object content. That was done only
to support PackWriter searching for a good reuse candidate.
Instead, simplify the code base by doing the materialization
immediately when the loader asks for it, because any caller
asking for the loader is going to need the content.
Change-Id: Id867b1004529744f234ab8f9cfab3d2c52ca3bd0
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>