This consolidates the sideband stream creation code and the error
handling code for the sideband-allowed part of the Git protocol into
one place.
Change-Id: I0e3e94564f50d1be32006f9d8bcd1ef1ce6bf07e
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
ErrorWriter writes an error message to the user. The implementation is
swapped once it detects that the client supports sideband. By default it
uses the protocol-level ERR packet, which was introduced recently.
In total, the error output is done in two different places:
UploadPack#upload and UploadPack#sendPack. These will be consolidated in
the next change.
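As an illustration of the idea (not JGit's actual classes; the names
and the pkt-line handling below are sketched for clarity only):

  import java.io.IOException;
  import java.io.OutputStream;
  import java.nio.charset.StandardCharsets;

  // Illustrative sketch; names and shapes are assumptions, not JGit's API.
  interface ErrorWriter {
    void writeError(String message) throws IOException;
  }

  class ProtocolErrors {
    // Default behavior: a protocol-level "ERR <message>" pkt-line.
    static ErrorWriter pktLine(OutputStream out) {
      return msg -> writePacket(out,
          ("ERR " + msg + "\n").getBytes(StandardCharsets.UTF_8));
    }

    // Swapped in once sideband support is detected: error channel 3.
    static ErrorWriter sideband(OutputStream out) {
      return msg -> {
        byte[] text = (msg + "\n").getBytes(StandardCharsets.UTF_8);
        byte[] payload = new byte[text.length + 1];
        payload[0] = 3; // sideband channel 3 carries error messages
        System.arraycopy(text, 0, payload, 1, text.length);
        writePacket(out, payload);
      };
    }

    private static void writePacket(OutputStream out, byte[] payload)
        throws IOException {
      // pkt-line framing: 4 hex digits of total length, including the
      // 4 length bytes themselves
      out.write(String.format("%04x", payload.length + 4)
          .getBytes(StandardCharsets.US_ASCII));
      out.write(payload);
      out.flush();
    }
  }

Swapping the writer once sideband support is negotiated keeps the
callers free of "sideband or not" branching.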
Change-Id: Ia8d72e31170bbeafc8ffa8ddb92702196af8a587
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
MergedReftable is not used as an AutoCloseable, because closing tables
is currently handled by DfsReftableStack#close.
Encode that a MergedReftable is a list of ReftableReaders. The previous
code suggested that we could form nested trees of MergedReftables,
which is not how we use reftables.
Change-Id: Icbe2fee8a5a12373f45fc5f97d8b1a2b14231c96
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
In Config#StringReader we relied on ArrayIndexOutOfBoundsException
to detect the end of the input. Creating an exception with a (deep)
stack trace can significantly degrade performance when we read
thousands of config files, for example when Gerrit reads all
external ids from the NoteDb.
Use buf.length to detect the end of the input instead.
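For illustration, the change boils down to the difference between the
two read() variants below (a simplified sketch of the pattern, not the
exact Config.StringReader code):

  // Simplified sketch; not the exact Config.StringReader code.
  class StringReader {
    private final char[] buf;
    private int pos;

    StringReader(String in) {
      buf = in.toCharArray();
    }

    // Before: relied on the exception, paying for stack trace creation
    // every time the end of the input was reached.
    int readOld() {
      try {
        return buf[pos++];
      } catch (ArrayIndexOutOfBoundsException e) {
        pos = buf.length;
        return -1;
      }
    }

    // After: a plain bounds check against buf.length signals end of input.
    int read() {
      if (pos >= buf.length) {
        return -1;
      }
      return buf[pos++];
    }
  }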
Change-Id: I12266f25751373a870ce3fa623cf2a95d882d521
Otherwise the paths modified by a cherry-pick with conflicts won't be
reported as modified via WorkingTreeModifiedEvents.
Change-Id: I875b67c0d2f68efdf90a9c32b80a2e074ed3570d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
This makes the intended use of the classes clearer. It also
simplifies generic functions that write reftables: they only need a
ReftableWriter as argument, as the stream is carried within the
ReftableWriter.
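A rough sketch of what such a generic function can now look like; the
method names (begin, writeRef, finish) are recalled from memory and may
not match the exact ReftableWriter API:

  import java.io.IOException;

  import org.eclipse.jgit.internal.storage.reftable.ReftableWriter;
  import org.eclipse.jgit.lib.Ref;

  class ReftableHelpers {
    // Hypothetical helper: with the stream carried inside the writer,
    // callers only pass the ReftableWriter itself.
    static void writeAll(ReftableWriter writer, Iterable<Ref> refs)
        throws IOException {
      writer.begin();
      for (Ref r : refs) {
        writer.writeRef(r);
      }
      writer.finish();
    }
  }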
Change-Id: Idbb06f89ae33100f0c0b562cc38e5b3b026d5181
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
The functionality in ReftableStack is specific to DFS.
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Change-Id: If6003d104b1ecb0f3ca7e9c3815b233fa0abf077
This makes maxUpdateIndex() available in MergedReftable, so we can
know generically at which index to create the next reftable in a
stack.
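For example, code that adds a table on top of a stack could derive the
index from the merged view; a hedged sketch (the ReftableWriter setter
names are assumed from memory):

  import org.eclipse.jgit.internal.storage.reftable.MergedReftable;
  import org.eclipse.jgit.internal.storage.reftable.ReftableWriter;

  class NextTableIndex {
    // The next table in the stack starts one past the highest update
    // index visible through the merged view.
    static void configure(MergedReftable merged, ReftableWriter writer) {
      long next = merged.maxUpdateIndex() + 1;
      writer.setMinUpdateIndex(next).setMaxUpdateIndex(next);
    }
  }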
Change-Id: Ia2314bc57c8b5dd7e69d5e61096fdce1d35abd11
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Older JGit stored only milliseconds timestamps in the index. Newer
JGit may get finer timestamps from the file system. This leads to
slow index diffs when a new JGit runs against an index produced
by older JGit because many timestamps will differ and JGit will
then do many content checks. See [1].
Handle this migration case by only comparing milliseconds if the
index entry has only millisecond precision.
The inverse may also occur; also compare only milliseconds if the
file timestamp has only millisecond precision.
Do the same also for microsecond resolution. On Windows, NTFS may
provide 100ns resolution and may be used by external programs writing
the index, but Java's WindowsFileAttributes may provide only
microseconds.
File timestamp precision in Java depends not only on the Java APIs
used by different JGit versions but may also change when running the
same Java code on different VMs. And of course the resolution may
vary among operating and file systems. Moreover, timestamp precision
in the index depends on the program that wrote the index. Canonical
git may use a different resolution, maybe even different between git
versions.
[1] https://www.eclipse.org/forums/index.php/t/1100344/
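An illustrative sketch of the comparison rule, independent of the
actual DirCacheEntry/WorkingTreeIterator code: compare at the coarsest
resolution that either timestamp actually carries.

  import java.time.Instant;

  class TimestampCompare {
    // Illustrative only: treat two timestamps as equal if they match at
    // the coarsest resolution either side carries (seconds, millis,
    // micros, or full nanos).
    static boolean sameTime(Instant cached, Instant onDisk) {
      int resolution = Math.min(resolutionOf(cached), resolutionOf(onDisk));
      return cached.getEpochSecond() == onDisk.getEpochSecond()
          && truncate(cached, resolution) == truncate(onDisk, resolution);
    }

    // 0 = seconds only, 1 = milliseconds, 2 = microseconds, 3 = nanoseconds
    private static int resolutionOf(Instant t) {
      int nanos = t.getNano();
      if (nanos == 0)
        return 0;
      if (nanos % 1_000_000 == 0)
        return 1;
      if (nanos % 1_000 == 0)
        return 2;
      return 3;
    }

    private static long truncate(Instant t, int resolution) {
      switch (resolution) {
      case 0: return 0;
      case 1: return t.getNano() / 1_000_000;
      case 2: return t.getNano() / 1_000;
      default: return t.getNano();
      }
    }
  }

With this rule, a millisecond-only index entry compared against a
nanosecond file timestamp matches as long as the milliseconds agree.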
Change-Id: Idfd08606c883cb98787b2138f9baf0cc89a57b56
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
The removed code was trying to avoid mistakenly reporting differences
when core.autocrlf was set to "input" but a file had already been
committed with CR-LF. It did that by running the blob from the cache
through a CRLF-to-LF filter because older JGit would also run the file
from the working tree through such a filter.
The real fix for this case was done in commit 60cf85a. Since then files
are not normalized if they have already been committed with CR-LF, and
this old fix attempt from bug 372834 is no longer needed.
Change-Id: Ib4facc153d81325cb48b4ee956a596b423f36241
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
This removes one use of DFS-specific code in this class.
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Change-Id: I3ef6a4b98357cc6dc480892244ddc51d2fd751a2
This lets us write reftables generically with functions that take
just a ReftableWriter argument.
Change-Id: I7285951f62f9bd4c78e8f0de194c077d51fa4e51
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
allRefs determined the end of the ref block without accounting for
index or log blocks. This could cause other blocks to be interpreted
as ref blocks, leading to "invalid block" error messages.
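Conceptually, the fix bounds the ref section by whichever section
follows it; a purely illustrative sketch with hypothetical names, not
ReftableReader's internals:

  class RefSectionBounds {
    // Hypothetical parameter names for illustration only.
    static long refSectionEnd(long refIndexPosition, long logPosition,
        long fileSize) {
      long end = fileSize;
      if (refIndexPosition > 0)
        end = Math.min(end, refIndexPosition);
      if (logPosition > 0)
        end = Math.min(end, logPosition);
      return end; // scan ref blocks only up to here, not to end of file
    }
  }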
Change-Id: I7b9323e7d5e0e7d64535b3ec1efd576aed1e9870
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
If CheckStat is MINIMAL or timestamps have no nanosecond part,
WorkingTreeIterator.compareMetaData only checks the seconds part of
timestamps and ignores nanoseconds, which may have ended up in the index
by using native git.
If
fileLastModified.getEpochSecond() == cacheLastModified.getEpochSecond()
we currently proceed to compare fileLastModified and cacheLastModified
with full precision, which is wrong since we already detected reduced
timestamp resolution.
Fix this and also handle smudged index entries for CheckStat.MINIMAL.
Change-Id: I6149885903ac63d79b42d234cc02aa4e19578f3c
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
We don't need to update time atomically since it's only used to order
cache entries in LRU order.
Change-Id: I756fa6d90b180c519bf52925f134763744f2c1f1
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Since commit c3b0dec509fe136c5417422f31898b5a4e2d5e02, Git has
disallowed writing a non-commit to refs/heads/* refs. JGit still
allows that, which can put users in a bad situation. For example,
git push origin v1.0:master
pushes the tag object v1.0 to refs/heads/master, instead of the
intended commit object v1.0^{commit}.
Prevent that by validating that the target of the ref points to a
commit object when pushing to refs/heads/*.
Git performs the same check at a lower level (in the RefDatabase). We
could do the same here, but for now let's start conservatively by
handling it in pushes first.
[jn: fleshed out commit message]
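A sketch of the kind of check, written against ReceiveCommand and
RevWalk; this is an illustration, not necessarily the exact code added
here:

  import java.io.IOException;

  import org.eclipse.jgit.lib.Constants;
  import org.eclipse.jgit.lib.ObjectId;
  import org.eclipse.jgit.revwalk.RevObject;
  import org.eclipse.jgit.revwalk.RevWalk;
  import org.eclipse.jgit.transport.ReceiveCommand;

  class BranchTargetCheck {
    // A ref under refs/heads/ must end up pointing at a commit object.
    static void check(RevWalk walk, ReceiveCommand cmd) throws IOException {
      if (!cmd.getRefName().startsWith(Constants.R_HEADS))
        return;
      if (ObjectId.zeroId().equals(cmd.getNewId()))
        return; // a deletion; nothing to validate
      RevObject target = walk.parseAny(cmd.getNewId());
      if (target.getType() != Constants.OBJ_COMMIT) {
        cmd.setResult(ReceiveCommand.Result.REJECTED_OTHER_REASON,
            "branches must point at commit objects");
      }
    }
  }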
Change-Id: I8f98ae6d8acbcd5ef7553ec732bc096cb6eb7c4e
Signed-off-by: Yunjie Li <yunjieli@google.com>
Signed-off-by: Jonathan Nieder <jrn@google.com>
PackedBatchRefUpdate was creating a new packed-refs list that was
potentially unsorted. This would be papered over when the list was
read back from disk in parsePackedRef, which detects unsorted ref
lists on reading, and sorts them. However, the BatchRefUpdate also
installed the new (unsorted) list in-memory in
RefDirectory#packedRefs.
With the timestamp granularity code committed to stable-5.1, we can
more often accurately decide that the packed-refs file is clean, and
will return the erroneous unsorted data more often. Unluckily timed
delays also cause the file to be clean, hence this problem was
exacerbated under load.
The symptom is that refs added by a BatchRefUpdate would stop being
visible directly after they were added. In particular, the Gerrit
integration tests use BatchRefUpdate in their setup for creating the
Admin group, and then try to read it out directly afterward.
The test recreates one failure case. A better approach would be to
revise RefList.Builder, so it detects out-of-order lists and
automatically sorts them.
Fixes https://bugs.eclipse.org/bugs/show_bug.cgi?id=548716 and
https://bugs.chromium.org/p/gerrit/issues/detail?id=11373.
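The essence of the fix is to keep the in-memory packed list sorted by
ref name before installing it; a minimal illustration, not the actual
PackedBatchRefUpdate code:

  import java.util.ArrayList;
  import java.util.Comparator;
  import java.util.List;

  import org.eclipse.jgit.lib.Ref;

  class SortedPackedRefs {
    // The in-memory packed-refs list must be name-sorted before use.
    static List<Ref> sortedByName(List<Ref> newPackedRefs) {
      List<Ref> sorted = new ArrayList<>(newPackedRefs);
      sorted.sort(Comparator.comparing(Ref::getName));
      return sorted;
    }
  }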
Bug: 548716
Change-Id: I613c8059964513ce2370543620725b540b3cb6d1
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Add the constant, and implement hashing of known host names in
OpenSshServerKeyDatabase. Add a test verifying that the hashing
works.
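For reference, OpenSSH's hashed known_hosts entries have the form
|1|base64(salt)|base64(HMAC-SHA1(salt, hostname)). A self-contained
sketch of producing such an entry (illustrative, not the
OpenSshServerKeyDatabase code):

  import java.nio.charset.StandardCharsets;
  import java.security.GeneralSecurityException;
  import java.security.SecureRandom;
  import java.util.Base64;

  import javax.crypto.Mac;
  import javax.crypto.spec.SecretKeySpec;

  class KnownHostHash {
    // Hash a host name the way OpenSSH does for hashed known_hosts lines.
    static String hash(String hostName) throws GeneralSecurityException {
      byte[] salt = new byte[20]; // HMAC-SHA1 output size
      new SecureRandom().nextBytes(salt);
      Mac mac = Mac.getInstance("HmacSHA1");
      mac.init(new SecretKeySpec(salt, "HmacSHA1"));
      byte[] digest = mac.doFinal(hostName.getBytes(StandardCharsets.US_ASCII));
      Base64.Encoder b64 = Base64.getEncoder();
      return "|1|" + b64.encodeToString(salt) + "|" + b64.encodeToString(digest);
    }
  }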
Bug: 548492
Change-Id: Iabe82b666da627bd7f4d82519a366d166aa9ddd4
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Move the handling of cached user and system config to getSystemConfig
and getUserConfig methods and revert the implementation of
openSystemConfig and openUserConfig to the old stateless
implementation.
This ensures the open methods respect the passed-in parent config, which
may be different on each invocation. Additionally, returning a new
instance matches the behavior of the previous implementation of the
default system reader, which downstream callers may be depending on.
Move the implementation of the new caching methods getSystemConfig and
getUserConfig up to SystemReader. This avoids breaking the ABI for
subclasses of SystemReader.
Also see [1] which fixed a similar problem with Gerrit's custom
SystemReader.
[1] https://gerrit-review.googlesource.com/c/gerrit/+/225458
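A standalone sketch of the caching pattern described above (not the
actual SystemReader code): the open method stays a stateless factory
that honors its parent parameter, while the get method caches one
instance and reloads it when outdated.

  import java.io.File;
  import java.io.IOException;

  import org.eclipse.jgit.errors.ConfigInvalidException;
  import org.eclipse.jgit.lib.Config;
  import org.eclipse.jgit.storage.file.FileBasedConfig;
  import org.eclipse.jgit.util.FS;

  // Illustration only; a simplified stand-in for the SystemReader methods.
  class CachedUserConfig {
    private final File gitconfig =
        new File(FS.DETECTED.userHome(), ".gitconfig");
    private FileBasedConfig userConfig;

    // Stateless factory: always a fresh instance honoring the parent config.
    FileBasedConfig openUserConfig(Config parent) {
      return new FileBasedConfig(parent, gitconfig, FS.DETECTED);
    }

    // Caching accessor: reuses one instance, reloading only when outdated.
    FileBasedConfig getUserConfig(Config systemConfig)
        throws IOException, ConfigInvalidException {
      if (userConfig == null) {
        userConfig = openUserConfig(systemConfig);
        userConfig.load();
      } else if (userConfig.isOutdated()) {
        userConfig.load();
      }
      return userConfig;
    }
  }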
Change-Id: If54a2491932d8fc914d4649cb73c9e837c5b8ad0
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
These warnings were not addressed in a0048208, which introduced them.
Change-Id: Ia2d15fdce72c10378d020682b80fe7fc548c0d4c
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
In order to support implementations of CachedPackUriProvider (which need
to supply, among other things, the checksum of the packfile
corresponding to a CachedPack), in a004820858 ("UploadPack: support
custom packfile-to-URI mapping", 2019-08-20), the method getCheckSum()
was added to PackIndex. However, there is no way to access the PackIndex
from a DfsCachedPack.
Therefore, add an accessor for the DfsPackFile stored in the
DfsCachedPack. Now, a user who has a DfsCachedPack can use
DfsCachedPack#getPackFile then DfsPackFile#getPackIndex then
PackIndex#getCheckSum to obtain the checksum of a pack.
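A sketch of that accessor chain, assuming the caller already has a
DfsReader and that getCheckSum() returns the raw checksum bytes:

  import java.io.IOException;

  import org.eclipse.jgit.internal.storage.dfs.DfsCachedPack;
  import org.eclipse.jgit.internal.storage.dfs.DfsReader;

  class CachedPackChecksum {
    // Follows the chain described above; hedged, not a verified API listing.
    static byte[] checksumOf(DfsCachedPack pack, DfsReader ctx)
        throws IOException {
      return pack.getPackFile()   // new accessor added by this change
          .getPackIndex(ctx)      // DfsPackFile#getPackIndex
          .getCheckSum();         // PackIndex#getCheckSum from a004820858
    }
  }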
Change-Id: Ia010c016f6cac0f058ee20eff4c10f57338bfefc
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>