Gerrit's superproject subscription feature uses RefSpecs to formalize
the ACLs that determine when a superproject subscription is allowed.
As this is a slightly different use case than describing a local/remote
pair of refs, we need to be more permissive. Specifically we want to allow:
refs/heads/*
refs/heads/*:refs/heads/master
refs/heads/master:refs/heads/*
Introduce a new constructor that allows constructing these RefSpecs.
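A minimal sketch of the intended usage, assuming the new constructor
takes a wildcard-mode flag (the RefSpec.WildcardMode.ALLOW_MISMATCH
name below reflects that assumption):

  import org.eclipse.jgit.transport.RefSpec;

  // Wildcard on only one side, which the strict constructor rejects:
  RefSpec anyToMaster = new RefSpec("refs/heads/*:refs/heads/master",
      RefSpec.WildcardMode.ALLOW_MISMATCH);
  RefSpec masterToAny = new RefSpec("refs/heads/master:refs/heads/*",
      RefSpec.WildcardMode.ALLOW_MISMATCH);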
Change-Id: I46c0bea9d876e61eb2c8d50f404b905792bc72b3
Signed-off-by: Stefan Beller <sbeller@google.com>
We had a case in Gerrit's superproject subscriptions where
'refs/heads/' was configured with the intention to mean 'refs/heads/*'.
The first expression lacks the '*', which is why it is not considered
a wildcard; it was nevertheless considered valid, so the typo was not
caught early.
Refs are not allowed to end with '/' anyway, so add a check for that.
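For illustration, assuming the new check surfaces through the usual
RefSpec argument validation:

  // Previously accepted silently, now rejected at construction time:
  new RefSpec("refs/heads/");   // throws IllegalArgumentException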
Change-Id: I3ffdd9002146382acafb4fbc310a64af4cc1b7a9
Signed-off-by: Stefan Beller <sbeller@google.com>
Example usage:
$ ./jgit push \
--push-option "Reviewer=j.doe@example.org" \
--push-option "<arbitrary string>" \
origin HEAD:refs/for/master
Stefan Beller has also made an equivalent change to CGit:
http://thread.gmane.org/gmane.comp.version-control.git/299872
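From the API side, a minimal sketch, assuming the option list is
exposed on PushCommand via a setPushOptions setter:

  import java.io.File;
  import java.util.Arrays;
  import org.eclipse.jgit.api.Git;
  import org.eclipse.jgit.transport.RefSpec;

  try (Git git = Git.open(new File("/path/to/repo"))) {
      git.push()
          .setRemote("origin")
          .setRefSpecs(new RefSpec("HEAD:refs/for/master"))
          .setPushOptions(Arrays.asList(
              "Reviewer=j.doe@example.org",
              "<arbitrary string>"))
          .call();
  }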
Change-Id: I6797e50681054dce3bd179e80b731aef5e200d77
Signed-off-by: Dan Wang <dwwang@google.com>
What's invalidated when an object database is "dirty" is not the whole
database, but rather a specific list of packs. If there is a race
between getting the pack list and setting the volatile dirty flag
where the packs are rescanned, we don't need to mark the new pack list
as dirty.
This is a fine point that only really applies if the decision of
whether or not to mark dirty actually requires introspecting the pack
list (say, its timestamps). The general operation of "take whatever
is the current pack list and mark it dirty" may still be inherently
racy, but the cost is not so high.
Change-Id: I159e9154bd8b2d348b4e383627a503e85462dcc6
This variable has been populated but never used since it was
introduced in commit 5cf53fdacf
(Speed up clone/fetch with large number of refs, 2013-02-18).
Noted by FindBugs:
"BatchRefUpdate.java:359, UC_USELESS_OBJECT, Priority: Normal"
Change-Id: I7aacb49540aaee4a83db3d38b15633bb6c4773d0
Signed-off-by: Dan Wang <dwwang@google.com>
Currently, there is a race where a user of a DfsRepository in a single
thread may get unexpected MissingObjectExceptions trying to look up an
object that appears as the current value of a ref:
1. Thread A scans packs before scanning refs, for example by reading
an object by SHA-1.
2. Thread B flushes an object and updates a ref to point to that
object.
3. Thread A looks up the ref updated in (2). Since it is scanning refs
for the first time, it sees the new object SHA-1.
4. Thread A tries to read the object it found in (3), using the cached
pack list it got from (1). The object appears missing.
Allow implementations to work around this by marking the object
database's current pack list as "dirty." A dirty pack list means that
DfsReader will rescan packs and try again if a requested object is
missing. Implementations should mark the pack list dirty any time the
ref database reads or scans refs that might be newer than a previously
cached pack list.
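A rough sketch of this pattern; the markDirty() hook and the
readRefsFromBackend() helper below are hypothetical stand-ins for
whatever the implementation actually exposes:

  import java.io.IOException;
  import java.util.List;
  import org.eclipse.jgit.internal.storage.dfs.DfsRepository;
  import org.eclipse.jgit.lib.Ref;

  // After scanning refs that may be newer than the cached pack list,
  // mark that list dirty so DfsReader rescans packs before reporting
  // an object as missing.
  List<Ref> scanRefs(DfsRepository repo) throws IOException {
      List<Ref> refs = readRefsFromBackend();   // hypothetical helper
      repo.getObjectDatabase().markDirty();     // hypothetical hook
      return refs;
  }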
Change-Id: I06c722b20c859ed1475628ec6a2f6d3d6d580700
We observe in Gerrit 2.12 that useCnt can become negative in rare cases.
Log this to help find the bug.
Change-Id: Ie91c7f9d190a5d7cf4733d4bf84124d119ca20f7
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Currently, when Repository.close() decrements the useCount to 0, the cache
immediately evicts the repository from WindowCache and RepositoryCache.
This leads to I/O overhead on busy repositories because pack files and
references are frequently inserted into and deleted from the cache.
This commit defers the eviction of a repository from the caches until
the last use of the repository is older than the time to live. The eviction is
handled by a background task running periodically.
Add two new configuration parameters:
* core.repositoryCacheExpireAfter: cache entries are evicted if the
cache entry wasn't accessed for longer than this time in milliseconds
* core.repositoryCacheCleanupDelay: defines the interval in milliseconds
for running a background task evicting expired cache entries. If set to
-1 the delay is set to min(repositoryCacheExpireAfter, 10 minutes). If
set to 0 the time based eviction is switched off and no background task
is started. If time based eviction is switched off the JVM can still
evict cache entries if heap memory is running low.
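For example (values in milliseconds; illustrative only), to keep
unused repositories cached for one hour and run the cleanup task every
ten minutes:

  [core]
    repositoryCacheExpireAfter = 3600000
    repositoryCacheCleanupDelay = 600000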
Change-Id: I4a0214ad8b4a193985dda6a0ade63b70bdb948d7
Also-by: Matthias Sohn <matthias.sohn@sap.com>
Also-by: Hugo Arès <hugo.ares@ericsson.com>
Also-by: Sasa Zivkov <sasa.zivkov@sap.com>
These try-with-resources blocks close the underlying output stream
twice: once when closing the CountingOutputStream wrapper, then again
when closing the DfsOutputStream out.
Simplify by only closing the CountingOutputStream.
In practice this shouldn't matter because the close() method of a
Closeable is required to be idempotent, but avoiding the redundant
extra close makes the code simpler to read and understand.
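A sketch of the simplification (the surrounding names, such as
writeIndex() and the pack description, are illustrative):

  // Before: closes the underlying DfsOutputStream twice.
  try (DfsOutputStream out = db.writeFile(pack, PackExt.INDEX);
       CountingOutputStream cnt = new CountingOutputStream(out)) {
      writeIndex(cnt);
  }

  // After: closing the wrapper is enough; it closes the stream it wraps.
  try (CountingOutputStream cnt =
          new CountingOutputStream(db.writeFile(pack, PackExt.INDEX))) {
      writeIndex(cnt);
  }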
Change-Id: I1778c4fc8ba075a2c6cd2129528bb272cb3a1af7
If the client sent a well-formed enough request to show it wants to use
side-band-64k for status reporting (meaning it's a modern client), but
any other command record was somehow invalid (e.g. a corrupt SHA-1),
report the parsing exception using channel 3. This allows clients to
see the failure and know the server will not be continuing.
git-core and JGit clients send all commands and then start a sideband
demux before sending the pack. By consuming all commands first we get
the client into a state where it can see and respond to the channel 3
server failure.
This behavior is useful on HTTPS connections when the client is buggy
and sent a corrupt command, but still managed to request side-band-64k
in the first line.
Change-Id: If385b91ceb9f024ccae2d1645caf15bc6b206130
The "shallow $id" parsing can also throw InvalidObjectIdException,
just like parseCommand. Move it into its own method with a proper
try-catch block to convert to the checked PackProtocolException.
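Roughly (the helper's exact shape here is illustrative):

  private static ObjectId parseShallow(String idStr)
          throws PackProtocolException {
      try {
          return ObjectId.fromString(idStr);
      } catch (InvalidObjectIdException e) {
          throw new PackProtocolException(e.getMessage(), e);
      }
  }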
Change-Id: I6839b95f0bc4fbf4d2c213331ebb9bd24ab2af65
Instead of deferring until after command parsing, enable the
capabilities after the first pkt-line has been read from the client.
This allows the server to setup the side-band-64k channel immediately.
Change-Id: I141b7fc92e983a41d3a58da8e1464a6917422b6c
If the push client has requested side-band support the server can
signal a fatal error parsing the pack using the error channel (3)
and then hang up. This may cause the PackWriter to fail to write
data onto the network socket, which throws a misleading error back
up to the application and the user.
During a write failure poll the input to see if the side band system
can parse out an error message off channel 3. This should be fast as
there will either be an error present in the buffer, or the remote will
also have hung up on the side band channel. In the case of a hang-up
just rethrow the original IOException, as it's a network error.
This roughly matches what C git does; once commands are sent and the
packer is started, a new thread runs in the background to decode any
possible server error during unpacking on the remote peer.
Change-Id: Idb37a4a122a565ec4b59130e08c27d963ba09395
The more specific type InvalidObjectIdException is thrown by
ObjectId.fromString(). Use it here in ReceivePack as the more
generic IAE is never thrown by the body of the try-catch block.
Change-Id: I53fc13c561c7d429a50b5eb82773f1a670431c54
A RefAdvertiser writing to the network includes both the reference's
ObjectId and its peeled ObjectId in the advertised set. In smart HTTP,
negotiation requests may bypass the RefAdvertiser and quickly build
the set based on current refs; include the peeled ObjectIds to match
behavior with the normal bidirectional protocols on git:// and SSH.
Change-Id: I5371bed60da36e8d12c4ad9a5c1d91a0f0ad486b
This field was being set twice within the block. Setting it just once
is sufficient. writeString() does not examine the field so it is fine
to set it after the call.
Change-Id: Ib4c74df4f1304e9df3015885bf360bf0d7bc6ca2
An example use case is setting a different IdentityRepository, for
example a RemoteIdentityRepository, to allow SSH agent supplied
credentials.
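A sketch of how a custom IdentityRepository can be installed; the
override point shown here (createDefaultJSch) is one possible place,
and agentBackedRepository() is a hypothetical helper returning e.g. a
RemoteIdentityRepository:

  import com.jcraft.jsch.JSch;
  import com.jcraft.jsch.JSchException;
  import com.jcraft.jsch.Session;
  import org.eclipse.jgit.transport.JschConfigSessionFactory;
  import org.eclipse.jgit.transport.OpenSshConfig;
  import org.eclipse.jgit.transport.SshSessionFactory;
  import org.eclipse.jgit.util.FS;

  SshSessionFactory.setInstance(new JschConfigSessionFactory() {
      @Override
      protected void configure(OpenSshConfig.Host host, Session session) {
          // no per-host settings needed for this sketch
      }

      @Override
      protected JSch createDefaultJSch(FS fs) throws JSchException {
          JSch jsch = super.createDefaultJSch(fs);
          jsch.setIdentityRepository(agentBackedRepository()); // hypothetical
          return jsch;
      }
  });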
Change-Id: Id4a4afd64464e452ffe6d6ad49036f9e283b4655
Signed-off-by: markdingram <markdingram@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
DfsGarbageCollector will now enforce a maximum time to live (TTL) for
UNREACHABLE_GARBAGE packs. The default TTL is 1 day, which should be
enough time to avoid races with other processes that are inserting
data into the repository.
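For example (the setter name below is an assumption about how the TTL
is exposed):

  DfsGarbageCollector gc = new DfsGarbageCollector(repo); // repo: a DfsRepository
  gc.setGarbageTtl(6, TimeUnit.HOURS);   // override the 1 day default
  gc.pack(NullProgressMonitor.INSTANCE);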
Change-Id: Id719e6e2a03cfc9a0c0aef8ed71d261dda14bd0c
Signed-off-by: Mike Williams <miwilliams@google.com>
1f86350 added initial support for include.path. Relative paths and
paths with a tilde are not yet supported, but config load was failing
if one of those two unsupported forms was encountered. Another problem
was that config load was failing if the include.path file did not
exist.
Change the behavior to be consistent with native git: ignore
unsupported or nonexistent include.path entries.
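For example, loading a config containing

  [include]
    path = /absolute/path/to/extra.config

no longer fails when extra.config does not exist, and unsupported
forms such as relative or '~/' paths are skipped instead of aborting
the load.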
Bug: 495505
Bug: 496732
Change-Id: I7285d0e7abb6389ba6983e9c46021bea4344af68
Signed-off-by: Hugo Arès <hugo.ares@ericsson.com>
Set all packs written by the DfsGarbageCollector to use the same
starting timestamp as lastModified. This makes it easier to see
which packs came from the same DfsGarbageCollector run, as they
share the same timestamp.
Change-Id: Id633573fbc3f0f360887b4745cacf33d6fc09320
Force reads to use a search ordering of:
INSERT / RECEIVE
COMPACT
GC (heads)
GC_REST (non-heads)
GC_TXN (refs/txn)
UNREACHABLE_GARBAGE
This has provided decent performance for object lookups. Starting
from an arbitrary reference may find the content in a newer pack
created by DfsObjectInserter or a ReceivePack server. Compaction of
recent packs also contains newer content, and then most interesting
data is in the "main" GC pack. As the GC pack is self-contained (has
no edges leading outside) readers typically do not need to go further.
Adding a new GC_REST PackSource allows the DfsGarbageCollector to
identify to the pack ordering code which pack is which, so the
non-heads are scanned second during reads. This removes a hack that
was unique to Google's implementation that enforced this behavior by
fixing up the lastModified timestamp.
Renumber the PackSource categories to reflect this search ordering.
Change-Id: I19fdaab8a8d6687cbe8c88488e6daa0630bf189a
TreeWalk has a member 'attr' which caches the attributes for the
current entry. We did not always reset the cache when moving to the
next entry. The effect was that when there were no attributes for an
entry 'a' and 'a' was skipped by a TreeWalk filter, TreeWalk stopped
looking for attributes until TreeWalk.next() was called again.
Change-Id: Ied39b7fb5f56afe7a237da17801003d0abe6b1c7
The problem occurs when the checkout wants to create a file 'd/f' but
the working tree contains a dirty file 'd'. In order to create d/f the
file 'd' would have to be deleted, and since the file is dirty that
content would be lost. This should lead to a CheckoutConflictException
for d/f when failOnConflict is set to true.
This fix also changes JGit checkout semantics to be more like native
git's checkout semantics. If during a checkout JGit wants to delete a
folder but finds that the working tree contains a dirty file at that
path, then JGit will now throw an exception instead of silently keeping
the dirty file, as in this example:
git init
touch b
git add b
git commit -m addB
mkdir a
touch a/c
git add a/c
git commit -m addAC
rm -fr a
touch a
git checkout HEAD~
Change-Id: I9089123179e09dd565285d50b0caa308d290cccd
Signed-off-by: Rüdiger Herrmann <ruediger.herrmann@gmx.de>
Also-by: Rüdiger Herrmann <ruediger.herrmann@gmx.de>
The native wire protocol sends ref advertisements in the pkt-line
format, which requires encoding the ObjectId and ref name onto a byte
sequence. On busy servers this is a significant source of garbage,
which pushes the garbage collector harder when there are many refs in
the repository (e.g. 70k in a Gerrit-managed repository).
Optimize the side band advertiser by retaining the CharsetEncoder,
minimizing the amount of temporary garbage built during encoding.
Change-Id: I406c654bf82c1eb94b38862da2425e98396134cb
Before this fix, ref directory removal did not work. That was because
the ref lock file was still in the leaf directory at deletion time.
Hence no deep ref directories were ever deleted, which negatively
impacted performance when the ref directory structure was large.
This fix removes the ref lock file before attempting to delete the ref
directory (which includes it). The other deep parent directories are
therefore now successfully deleted in turn, since the leaf's content
(the lock file) is removed first.
So, given a structure such as refs/any/directory[/**], this fix now
deletes all empty directories up to and including 'directory'. The
'any' directory (e.g.) does not get deleted even if empty, as before.
The ref lock file is still also removed in the calling block's finally
clause, just in case, as before. The double unlock introduced by this
fix is harmless (a no-op).
A new (private) RefDirectory#delete method is introduced to support
this #pack-specific case; other RefDirectory#delete callers remain
untouched.
Change-Id: I47ba1eeb9bcf0cb93d2ed105d84fea2dac756a5a
Signed-off-by: Marco Miller <marco.miller@ericsson.com>
Git servers supporting HTTP transport can send multiple WWW-Authenticate
challenges [1] for the different authentication schemes they support.
If authentication fails, now retry all authentication types proposed by
the server.
[1] https://tools.ietf.org/html/rfc2617#page-3
Bug: 492057
Change-Id: I01d438a5896f9b1008bd6b751ad9c7cbf780af1a
Signed-off-by: Christian Pontesegger <christian.pontesegger@web.de>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
setInput should always push at least 1 byte into the Inflater. If 0
bytes (or a negative count!) are being sent, the DfsBlock is inconsistent with
the position passed in. This indicates a severe programming problem
in the caller, and may cause an infinite loop in DfsReader.
Today we saw a handful of live examples of this but don't know what
the cause is. Guard against this error condition and throw with a
more verbose failure, which may prevent an infinite loop. Callers
will eventually catch DataFormatException and rethrow with more detail
about the object that cannot be inflated, with the DFE in the chain.
Change-Id: I64ed2a471520e48283675c6210c6db8a60634635
When using a DfsInserter for high-throughput insertion of many
objects (analogous to git-fast-import), we don't necessarily want to
do a random object lookup for each. It'll be faster from the
inserter's perspective to insert the duplicate objects and let a later
GC handle the deduplication.
Change-Id: Ic97f5f01657b4525f157e6df66023f1f07fc1851
Git core learned about the submodule.<name>.shallow option in
.gitmodules files, which is a recommendation to clone a submodule
shallow. A repo manifest may record a clone depth recommendation as
an optional field, which contains more information than a binary
shallow/nonshallow recommendation, so any attempted conversion may be
lossy. In practice the clone depth recommendation is either '1' or doesn't
exist, which is the binary behavior we have in Git core.
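For comparison, the resulting .gitmodules entry looks like this (the
submodule name and URL are illustrative):

  [submodule "plugins/example"]
    path = plugins/example
    url = https://example.com/plugins/example
    shallow = true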
Change-Id: I51aa9cb6d1d9660dae6ab6d21ad7bae9bc5325e6
Signed-off-by: Stefan Beller <sbeller@google.com>
Git core learned about attributes in pathspecs:
pathspec: allow querying for attributes
The pathspec mechanism is extended via the new
":(attr:eol=input)pattern/to/match" syntax to filter paths so that it
requires paths to not just match the given pattern but also have the
specified attrs attached for them to be chosen.
(177161a5f7, 2016-05-20)
We intend to use these pathspec attribute patterns for submodule
grouping, similar to the grouping in repo. So the RepoCommand which
translates repo manifest files into submodules should propagate this
information along. This requires writing information to the
.gitattributes file instead of the .gitmodules file. For now we just
overwrite any existing .gitattributes file and do not care about prior
attributes set. If this becomes an issue we need to figure out how to
correctly add the grouping information to an existing .gitattributes
file.
Change-Id: I0f55b45786b6b8fc3d5be62d7f6aab9ac00ed60e
Signed-off-by: Stefan Beller <sbeller@google.com>