If the client is only following the remote repository and has not
created any new non-common commits, the client will wind up sending
a "have %s" line for each tag in the repository. For some projects
like git.git, this is 339 tags and growing, resulting in more than
16 KiB needing to be POSTed over 12 HTTP requests.
Teach UploadPack (server side) to always execute the okToGiveUp()
logic at least once per negotiation round to determine if the server
can compute a pack right now. If it can, shove in an "ACK %s ready"
message to tell the client this and try to prevent receiving ancient
tags in future negotiation rounds.
Teach BasePackFetchConnection (client side) to honor an "ACK %s ready"
from the remote and break out of its SEND_HAVE loop once the remote
knows it can create a pack. This avoids sending the remaining 307
tags of git.git.
These two changes together reduce the number of HTTP RPCs from 13
down to 3 in order to fetch from git.git over smart HTTP. If either
side is missing the change, the older behavior (and its 13 RPCs)
is used.
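In rough form, the client side change looks like this (a sketch only;
the real loop in BasePackFetchConnection batches "have" lines per
round and speaks through PacketLineIn/PacketLineOut, not raw streams):

  import java.io.BufferedReader;
  import java.io.Writer;
  import java.util.List;

  class NegotiationSketch {
    /** Send "have" lines until the server says it can build a pack. */
    static boolean sendHaves(List<String> ids, Writer out,
        BufferedReader in) throws Exception {
      for (String id : ids) {
        out.write("have " + id + "\n");
        out.flush();
        String ack = in.readLine();
        if (ack != null && ack.startsWith("ACK ")
            && ack.endsWith(" ready"))
          return true; // remote can compute a pack; stop sending
      }
      return false;
    }
  }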
Change-Id: I64736318fd0abf9ee5e56bd0b737707adb580b37
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This permits an application to create its own copy of FS.DETECTED
before manually setting the userHome or gitPrefix.
Bug: 337101
Change-Id: Ieea33c8d0ebdc801a4656b829d2a4b398559fd45
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This permits callers to modify the meaning of userHome, which
may be useful if their application allows the user to select
different user settings locations.
Bug: 337101
Change-Id: I076815edeec1c20dea028f7840be3930337dff77
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This permits callers to modify the meaning of gitPrefix, which
may be useful if their application allows the user to select
the location where C Git is installed.
Bug: 337101
Change-Id: I07362a5772da4955e01406bdeb8eaf87416be1d6
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This allows callers to perform the logic that constructed the
current FS.DETECTED value.
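Taken together with the userHome and gitPrefix setters above, this
enables a pattern like the following (a sketch; confirm the exact
setter signatures against the FS class):

  import java.io.File;
  import org.eclipse.jgit.util.FS;

  class FsSetupSketch {
    static FS userSelected(File home, File gitInstall) {
      FS fs = FS.detect();         // same logic that built FS.DETECTED
      fs.setUserHome(home);        // application-chosen settings root
      fs.setGitPrefix(gitInstall); // where C Git is installed
      return fs;
    }
  }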
Change-Id: Id8517d131dcc3f675c60b2d935730872695ed1b0
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
C Git always fetches tags during clone, even if the tag doesn't
point to an object that was fetched by the branch specifications.
Match that behavior, as users expect it.
Bug: 326611
Change-Id: I81a82b7359a9649f18a172219da44ed54e77ca2f
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If no RefSpec was specified, push the branch that is currently
checked out as HEAD.
Change-Id: I6f13ef6346188698a14e000fc590850afbc34b21
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Rather than copying the entire list, just replace each element
with the version that has setForceUpdate(true) invoked on it.
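With a ListIterator the replacement happens in place, for example
(sketch):

  import java.util.List;
  import java.util.ListIterator;
  import org.eclipse.jgit.transport.RefSpec;

  class ForceUpdateSketch {
    static void forceAll(List<RefSpec> specs) {
      for (ListIterator<RefSpec> i = specs.listIterator(); i.hasNext();) {
        // RefSpec is immutable; setForceUpdate returns an updated copy
        i.set(i.next().setForceUpdate(true));
      }
    }
  }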
Change-Id: I2eaa4466d497cb2408ce61dc62ca26e0c32fe841
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This better matches the PackFile and CachedPack methods
that return the same value.
Change-Id: Idb9b7c71d2048dd2344a62c2cde20b4e34529ab7
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
PackWriter incorrectly returned 0 from getObjectsNumber() when the
pack has not been written yet. This caused dumb transports like
amazon-s3:// and sftp:// to abort early and never write out a pack,
under the assumption that the pack had no objects.
Until the pack header is written to the output stream, compute the
current object count each time it is requested. Once the header is
started, use the object count from the stats object.
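The shape of the fix, with purely illustrative field names
(PackWriter's real internals differ):

  class ObjectCountSketch {
    private boolean headerWritten; // set once the pack header is emitted
    private long queuedObjects;    // running total while queuing objects
    private long statsTotal;       // copied from stats with the header

    long getObjectCount() {
      // before the header: recompute; after: the count is frozen
      return headerWritten ? statsTotal : queuedObjects;
    }
  }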
Change-Id: I041a2368ae0cfe6f649ec28658d41a6355933900
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
OwnerMap is about 200 ms faster than SubclassMap, more friendly to the
GC, and uses less storage: testing the "Counting objects" part of
PackWriter on 1886362 objects:
  ObjectIdSubclassMap:
    load factor 50%
    table:      4194304 (wasted 2307942)
    ms spent    36998 36009 34795 34703 34941 35070 34284 34511 34638 34256
    ms avg      34800 (last 9 runs)

  ObjectIdOwnerMap:
    load factor 100%
    table:      2097152 (wasted 210790)
    directory:  1024
    ms spent    36842 35112 34922 34703 34580 34782 34165 34662 34314 34140
    ms avg      34597 (last 9 runs)
The major difference with OwnerMap is that entries must extend
ObjectIdOwnerMap.Entry, which injects its own private "next" field
into each object. This allows the OwnerMap to use
a singly linked list for chaining collisions within a bucket. By
putting collisions in a linked list, we gain the entire table back for
the SHA-1 bits to index their own "private" slot.
Unfortunately this means that each object can appear in at most ONE
OwnerMap, as there is only one "next" field within the object instance
to thread into the map. For types that are very object map heavy like
RevWalk (entity RevObject) and PackWriter (entity ObjectToPack) this
is sufficient, as these entity types are only put into one map by
their container. By introducing a new map type, we don't break existing
applications that might be trying to use ObjectIdSubclassMap to track
RevCommits they obtained from a RevWalk.
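An entity type participates by extending the Entry base class, which
carries the hidden chaining reference (ObjectIdOwnerMap and Entry are
the real types; CachedObject here is made up):

  import org.eclipse.jgit.lib.AnyObjectId;
  import org.eclipse.jgit.lib.ObjectIdOwnerMap;

  class CachedObject extends ObjectIdOwnerMap.Entry {
    CachedObject(AnyObjectId name) {
      super(name); // the SHA-1 is stored in the entry itself
    }

    static CachedObject intern(ObjectIdOwnerMap<CachedObject> map,
        AnyObjectId id) {
      CachedObject obj = map.get(id);
      if (obj == null) {
        obj = new CachedObject(id);
        map.add(obj); // links obj into exactly one map via "next"
      }
      return obj;
    }
  }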
The OwnerMap uses less memory. Each object uses 1 reference more (so
we're up 1,886,362 references), but the table is 1/2 the size (2^20
rather than 2^21). The table itself wastes only 210,790 slots, rather
than 2,307,942. So OwnerMap is wasting 200k fewer references.
OwnerMap is more friendly to the GC, because it hardly ever generates
garbage. As the map reaches its 100% load factor target, it doubles in
size by allocating additional segment arrays of 2048 entries. (So the
first grow allocates 1 segment, second 2 segments, third 4 segments,
etc.) These segments are hooked into the pre-allocated directory of
1024 spaces. This permits the map to grow to 2 million objects before
the directory itself has to grow. By using segments of 2048 entries,
we are asking the GC to acquire 8,204 bytes in a 32 bit JVM. This is
easier to satisfy than 2,307,942 bytes (for the 512k table that is
just an intermediate step in the SubclassMap). By reusing the
previously allocated segments (they are re-hashed in-place) we don't
release any memory during a table grow.
When the directory grows, it does so by discarding the old one and
using one that is 4x larger (so the directory goes to 4096 entries on
its first grow). A directory of size 4096 can handle up to 8 million
objects. The second directory grow (16384) goes to 33 million objects.
At that point we're starting to really push the limits of the JVM
heap, but at least it's many small arrays. Previously SubclassMap would
need a table of 67108864 entries to handle that object count, which
needs a single contiguous allocation of 256 MiB. That's hard to come
by in a 32 bit JVM. Instead OwnerMap uses 8192 arrays of about 8 KiB
each. This is much easier to fit into a fragmented heap.
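The two-level lookup is cheap; with 2048-entry segments the low 11
bits select the slot and the remaining bits select the directory
entry (a sketch, not the actual field names):

  class SegmentedTableSketch {
    static final int SEGMENT_BITS = 11;               // 2048 entries
    static final int SEGMENT_MASK = (1 << SEGMENT_BITS) - 1;

    Object[][] directory = new Object[1024][];        // pre-allocated

    Object get(int bucket) {
      return directory[bucket >>> SEGMENT_BITS][bucket & SEGMENT_MASK];
    }
  }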
Change-Id: Ia4acf5cfbf7e9b71bc7faa0db9060f6a969c0c50
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Use the Java 6 like services approach to find all supported
TransportProtocols within the CLASSPATH and load them all for use.
This allows users to inject additional protocol implementations simply
by putting their JARs on the application CLASSPATH, provided the
protocol author has written the proper services file.
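A provider JAR advertises itself with a services file naming its
implementation class, along these lines (the provider class is
hypothetical; check Transport's Javadoc for the exact file name and
contract):

  # file: META-INF/services/org.eclipse.jgit.transport.Transport
  com.example.transport.ExampleTransport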
Change-Id: I7a82d8846e4c4ed012c769f03d4bb2461f1bd148
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The new TransportProtocol type describes what a particular Transport
implementation wants in order to support a connection. 3rd parties
can now plug into the Transport.open() logic by implementing their
own TransportProtocol and Transport classes, and registering with
Transport.register().
GUI applications can help the user configure a connection by looking
at the supported fields of a particular TransportProtocol type, which
makes the GUI more dynamic and may better support new Transports.
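A 3rd party protocol might plug in like this (a sketch; verify
TransportProtocol's abstract methods before relying on it):

  import java.util.Collections;
  import java.util.Set;
  import org.eclipse.jgit.errors.NotSupportedException;
  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.transport.Transport;
  import org.eclipse.jgit.transport.TransportProtocol;
  import org.eclipse.jgit.transport.URIish;

  class ExampleProtocol extends TransportProtocol {
    public String getName() {
      return "Example"; // display name a GUI can show
    }

    public Set<String> getSchemes() {
      return Collections.singleton("example"); // example://host/path
    }

    public Transport open(URIish uri, Repository local, String remoteName)
        throws NotSupportedException {
      // a real implementation returns its Transport subclass here
      throw new NotSupportedException("sketch only");
    }
  }

  // typically run from a static initializer in the plug-in:
  //   Transport.register(new ExampleProtocol());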
Change-Id: Iafd8e3a6285261412aac6cba8e2c333f8b7b76a5
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This change adds the --only/-o option to the commit command.
Change-Id: I44352d56877f8204d985cb7a35a2e0faffb7d341
Signed-off-by: Philipp Thun <philipp.thun@sap.com>
During a review of the class, Josh Bloch pointed out we can use
"i = (i + 1) & mask" to wrap around at the end of the table, instead
of a conditional with a branch. This is generally faster due to one
less branch that will be mis-predicted by the CPU.
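Before and after, as a sketch:

  class ProbeSketch {
    static int nextBranchy(int i, int length) {
      if (++i == length)
        i = 0; // wraparound costs a branch the CPU may mispredict
      return i;
    }

    static int nextMasked(int i, int mask) {
      return (i + 1) & mask; // mask == length - 1, length a power of 2
    }
  }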
Change-Id: Ic88c00455ebc6adde9708563a6ad4d0377442bba
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
readPipe() may consume a noticeable amount of time, so gitPrefix
should be cached. If the git executable changes, users should run
FS.detect() again to get a new instance of FS_Win32.
Ensure the JIT knows the table cannot be changed during the critical
inner loop of get() or insert() by loading the field into a final
local variable. This shouldn't be necessary, but the instance member
is declared non-final (to permit resizing) and it is not obvious to the
JIT that the table cannot be modified by AnyObjectId.equals().
Simplify the JIT's decision making by making it obvious that these
values cannot change during the critical inner loop, allowing
for better register allocation.
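A minimal sketch of the pattern (the real map stores AnyObjectId
entries; Object is used here to keep the example self-contained):

  class GetSketch {
    private Object[] table; // non-final: replaced when the map grows

    Object find(int start, Object toFind) {
      final Object[] tbl = table; // the loop reads only this local
      final int mask = tbl.length - 1;
      for (int i = start;; i = (i + 1) & mask) {
        final Object e = tbl[i];
        if (e == null || e.equals(toFind))
          return e;
      }
    }
  }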
Change-Id: I0d797533fc5327366f1207b0937c406f02cdaab3
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This method is trivial in definition, and is called in only 3
places. Inline the method manually to ensure it's really going
to be inlined by the JIT at runtime.
Change-Id: I128522af8167c07d2de6cc210573599038871dda
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
32 is way too small for the map. Most applications using the map
will need to load more than 16 objects just from the root refs
being read from the Repository.
Default the initial size to 2048. This cuts out 6 expansions in
the early life of the table, reducing garbage and rehashing time.
Change-Id: I6dd076ebc0b284f1755855d383b79535604ac547
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If the table needs to be grown, do it before the current insertion
rather than after. This is a tiny micro-optimization that allows
the compiler to reuse the result of "++size" to compare against
previously pre-computed size at which the table should rehash itself.
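In sketch form:

  class AddSketch {
    private int size;
    private int growAt; // precomputed: table.length * load factor

    void add(Object entry) {
      if (++size == growAt)
        grow();        // rehash first; the ++size result feeds the test
      insert(entry);   // entry hashes straight into the grown table
    }

    private void grow() { /* double the table and rehash in place */ }
    private void insert(Object e) { /* normal probe and store */ }
  }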
Change-Id: Ief6f81b91c10ed433d67e0182f558ca70d58a2b0
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Bitwise AND is faster than an integer modulus operation, and since
the table size is always a power of 2, it is simple to use for the
index operation.
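For a non-negative hash the two forms agree (sketch):

  class IndexSketch {
    static int slot(int hash, Object[] table) {
      // table.length is a power of 2, so this equals
      // hash % table.length without the integer division
      return hash & (table.length - 1);
    }
  }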
Change-Id: I83d01e5c74fd9e910c633a98ea6f90b59092ba29
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
obj_hash doesn't match our naming conventions; camelCaseNames
are the preferred format.
Change-Id: I72da199daccb60a98d17b6af1e498189bf149515
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
A standard HashSet was being used to store the list of subsections as
they were being parsed. This was changed to use a LinkedHashSet so
that iterating over the set would return values in the same order as
they are listed in the config file.
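For example:

  import java.util.LinkedHashSet;
  import java.util.Set;

  class SubsectionOrderSketch {
    static Set<String> parse() {
      Set<String> names = new LinkedHashSet<String>();
      names.add("origin");   // first subsection in the file
      names.add("upstream"); // second
      return names;          // iterates origin, then upstream
    }
  }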
Change-Id: I4251f95b8fe0ad59b07ff563c9ebb468f996c37d
Javadoc for ScheduledThreadPoolExecutor says [1]:
While ScheduledThreadPoolExecutor inherits from ThreadPoolExecutor, a
few of the inherited tuning methods are not useful for it. In
particular, because it acts as a fixed-sized pool using corePoolSize
threads and an unbounded queue, adjustments to maximumPoolSize have no
useful effect.
[1]
http://download.oracle.com/javase/6/docs/api/java/util/concurrent/ScheduledThreadPoolExecutor.html
Change-Id: I8eccb7d6544aa6e27f5fa064c19dddb2a706523f
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Instead of resizing an ArrayList until all objects have been added,
append objects into a specialized List type that uses small arrays
of 1024 entries for each 1024 objects added.
For a large repository like linux-2.6, PackWriter will now allocate
1,758 smaller arrays to hold the object list, without creating any
garbage from the intermediate states due to list expansion.
1024 was chosen as the block size (and initial directory size) as this
is a reasonable balance for the PackWriter code. Each block uses
approximately 4096 bytes in a 32 bit JVM, as does the default top
level block directory. The top level directory doesn't expand until 1
million items have been added to the list, which for linux-2.6 won't
yet occur as the lists are per-object-type and are thus bounded to
about 1/3 of 1.8 million.
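The essential shape of such a list (a sketch; growing the top level
directory past 1 million elements is omitted):

  class BlockListSketch<T> {
    private static final int BLOCK = 1024;
    private Object[][] directory = new Object[BLOCK][]; // top level
    private int size;

    void add(T element) {
      int b = size >>> 10; // which 1024-entry block
      int s = size & 1023; // slot within that block
      if (directory[b] == null)
        directory[b] = new Object[BLOCK]; // one small allocation
      directory[b][s] = element;
      size++;
    }

    @SuppressWarnings("unchecked")
    T get(int index) {
      return (T) directory[index >>> 10][index & 1023];
    }
  }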
Change-Id: If9e4092eb502394c5d3d044b58cf49952772f6d6
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The mapTree() routines have been deprecated for a long time, and their
siblings for mapCommit() and mapTag() were already removed from the
main Repository API.
Remove mapTree(). Application callers who only need the tree's name
can use resolve("^{tree}") syntax to resolve to the tree ObjectId, or
fail if the input is not a tree.
Applications that want to read a tree should use DirCache or TreeWalk.
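For example, to obtain just the tree's ObjectId:

  import java.io.IOException;
  import org.eclipse.jgit.lib.ObjectId;
  import org.eclipse.jgit.lib.Repository;

  class TreeIdSketch {
    static ObjectId treeOf(Repository repo, String rev) throws IOException {
      // null if rev does not resolve; throws if it cannot be
      // peeled to a tree
      return repo.resolve(rev + "^{tree}");
    }
  }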
Change-Id: I85726413790fc87721271c482f6636f81baf8b82
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This type and its associated methods have been deprecated for a while
now. Time to remove it. Applications can use a TreeWalk instead to
access the elements of any tree-like object.
Change-Id: I047e552ac77b77e2de086f63cb4fb318da57c208
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This interface has been deprecated for a while now.
Applications can use a TreeWalk instead.
Change-Id: I751d6e919e4b501c36fc36e5f816b8a8c5379cb9
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This has been deprecated for some time now. Applications should
instead use DirCache within a TreeWalk.
Change-Id: I8099d93f07139c33fe09bdeef8d739782397da17
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This class has been deprecated for a long time now.
Time to remove it. Applications can use the newer
DirCache.writeTree() as a replacement.
Change-Id: I91dc9507668d8a3ecadd6acd4f1c8b7bd7760cc3
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This class has been deprecated for a long time now.
Time to remove it. Applications can use the newer
DirCacheCheckout class as a replacement.
Change-Id: Id66d29fcca5a7286b8f8838303d83f40898918d2
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This interface has been deprecated for a long time now.
Time to remove it.
Change-Id: I29a938657e4637b2a9d0561940b38d70866613f7
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When I disabled validation I broke the code that handled copying small
objects whose contents were below 8192 bytes in size but spanned over
the end of one window and into the next window. These objects did not
ever populate the temporary write buffer, resulting in garbage being
written into the output stream instead of valid object contents.
Change-Id: Ie26a2aaa885d0eee4888a9b12c222040ee4a8562
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When the fetch TagOpt is AUTO_FOLLOW, do not follow refs/tags/ names
that point directly to commits on unrelated side branches.
Change-Id: Iea6eee5a05ae7402a7f256fd9c1e3d3b5ccb58dd
Reported-by: Slawomir Ginter <sginter@atlassian.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
During unit tests and most likely elsewhere, updates come too fast for
a simple timestamp comparison (with one second resolution) to work.
I.e. DirCache thinks it hasn't changed.
Use FileSnapshot instead which has more advanced logic.
Change-Id: Ib850f84398ef7d4b8a8a6f5a0ae6963e37f2b470
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
When parsing a string such as "foo-gbed2" resolve() was assuming the
suffix was from git describe output. This led to JGit trying to find
the completion for the object abbreviation "bed2", rather than using
the current value of the reference. If there was only one such object
in the repository, JGit might actually use the wrong value here, as
resolve() would return the completion of the abbreviation "bed2"
rather than the current value of the reference "refs/heads/foo-gbed2".
Move the parsing of git describe abbreviations out of the operator
portion of the resolve() method and into the simple portion that is
supposed to handle only object ids or reference names, and only do the
describe parsing after all other approaches have already failed to
provide a resolution.
Add new unit tests to verify the behavior is as expected by users.
Bug: 338839
Change-Id: I52054d7b89628700c730f9a4bd7743b16b9042a9
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Applications may already have a Ref or ObjectId on hand that they want
the remote to be updated to. Instead of converting these into a
String and relying on the parsing rules of resolve(), allow the
application to supply the Ref or ObjectId directly.
Bug: 338839
Change-Id: If5865ac9eb069de1c8f224090b6020fc422f9f12
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If object reuse validation is enabled, the output pack is probably
going to be stored locally. When reusing an existing cached pack
to save object enumeration costs, ensure the cached pack has not
been corrupted by checking its SHA-1 trailer. If it has, writing
will abort and the output pack won't be complete. This prevents
anyone from trying to use the output pack, and catches corruption
before it can be carried any further.
Change-Id: If89d0d4e429d9f4c86f14de6c0020902705153e6
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
There is no need to validate the object contents during
copyObjectAsIs if the result is going to be parsed by unpack-objects
or index-pack. Both programs will compute the SHA-1 of the object,
and also validate most of the pack structure. For git daemon
like servers, this work is already done on the client end of the
connection, so the server doesn't need to repeat that work itself.
Disable object validation for the 3 transport cases where we know
the remote side will handle object validation for us (push, bundle
creation, and upload pack). This improves performance on the server
side by reducing the work that must be done.
Change-Id: Iabb78eec45898e4a17f7aab3fb94c004d8d69af6
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Annotated tags need to be parsed by many viewing tools, but putting
them at the end of the pack hurts because kernel prefetching might
not have loaded them, since they are so far from the commits they
reference.
Position tags right behind the commits, but before the trees.
Typically the annotated tag set for a repository is very small,
so the extra prefetch burden it puts on tools that don't need
annotated tags (but do need commits and trees) is fairly low.
Change-Id: Ibbabdd94e7d563901c0309c79a496ee049cdec50
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This simple refactoring makes it easier to pre-process each of the
object lists before it's handed into the actual write routine.
Change-Id: Iea95e5ecbc7374f6bcbb43d1c75285f4f564d09d
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
JGit doesn't generate deltas for commit or tag objects when it packs
a repository from scratch. This is an explicit design decision that
is (mostly) justified by the fact that these objects do not delta
compress well.
Annotated tags are made once on stable points of the project history;
it is unlikely they will ever appear again with sufficient common
text to justify using a delta over just deflating the raw content.
JGit never tries to delta compress annotated tags and I take the
stance that these are best stored as non-deltas given how frequently
they might be accessed by repository viewers.
Commits only have sufficient common text when they are cherry-picked
to forward-port or back-port a change from one branch to another.
Even in these cases the distance between the commits as returned
by the log traversal has to be small enough that they would both
appear in the delta search window at the same time in order to
delta compress one of the messages against the other. JGit never
tries to delta compress commits, as it requires a lot of CPU time
but typically does not produce a smaller pack file.
Avoid reusing deltas for either of these types when constructing a
new pack. To avoid killing performance during serving of network
clients, UploadPack disables this code change by allowing PackWriter
to reuse delta commits. Repositories that were already repacked by
C Git will not have their delta commits decompressed and recompressed
on the fly during object writing, saving server-side CPU resources.
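Roughly, the server side opts back in like this (method name as
described by this change; confirm against the current PackWriter API):

  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.storage.pack.PackWriter;

  class ServingSketch {
    static PackWriter forUploadPack(Repository repo) {
      PackWriter pw = new PackWriter(repo);
      pw.setReuseDeltaCommits(true); // serve existing delta commits as-is
      return pw;
    }
  }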
Change-Id: I749407e7c5c677e05e4d054b40db7656cfa7fca8
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This is a tiny optimization to how delta search works. Checking for
isReuseAsIs() avoids doing delta compression search on non-delta
objects already stored in packs within the repository. Such objects
are not likely to be delta compressible, as they were already delta
searched when their containing pack was generated and they were
not delta compressed at that time. Doing delta compression now is
unlikely to produce a different result, but would waste a lot of CPU.
The isReuseAsIs() flag is checked before isDoNotDelta() because it
is very common to reuse objects in the output pack. Most objects
get reused, and only a handful have the isDoNotDelta() bit set.
Moving the check earlier allows the loop to more quickly skip
through objects that will never need to be considered.
Change-Id: Ied757363f775058177fc1befb8ace20fe9759bac
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The alarm queue threads were started with an empty task body, which
meant the thread started and terminated immediately, leaving the
queue itself with no worker.
Change-Id: I2a9b5fe9c2bdff4a5e0f7ec7ad41a54b41a4ddd6
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Instead of polling the system clock on every update(int) method call,
use a scheduled executor to toggle a volatile once per second until
the task is done. Check the volatile on each update(int), looking
to see if output should occur.
This limits progress output to either once per 1% complete, or once
per second. To save time during update calls the timer isn't reset
during each 1% of output, which means we may see one unnecessary
output trigger if at least 1% completed during the one second of the
alarm time.
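A minimal version of the pattern (a sketch, not the actual JGit
monitor class):

  import java.util.concurrent.Executors;
  import java.util.concurrent.ScheduledExecutorService;
  import java.util.concurrent.TimeUnit;

  class ThrottledProgress {
    private volatile boolean outputDue;
    private final ScheduledExecutorService alarm =
        Executors.newSingleThreadScheduledExecutor();

    ThrottledProgress() {
      // flip the flag once per second; update() merely reads it
      alarm.scheduleAtFixedRate(new Runnable() {
        public void run() {
          outputDue = true;
        }
      }, 1, 1, TimeUnit.SECONDS);
    }

    void update(int completed) {
      if (outputDue) { // cheap volatile read instead of a clock call
        outputDue = false;
        // ... emit a progress line here ...
      }
    }

    void done() {
      alarm.shutdownNow();
    }
  }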
Change-Id: I8fdd7e31c37bef39a5d1b3da7105da0ef879eb84
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>