Content similarity based rename detection is performed only after a linear time detection pass using exact content match on the ObjectIds. Any names which were paired up during that exact match phase are excluded from the inexact similarity based rename detection, which reduces the space that must be considered.

During rename detection two entries cannot be marked as a rename if they are different types of files. This prevents a symlink from being renamed to a regular file, even if their blob content appears to be similar, or is identical.

Two files are compared efficiently by building a hash index for each, hashing lines or short blocks from each file and counting the number of bytes that each line or block represents. Instead of using a standard java.util.HashMap, we use a custom open hashing scheme similar to what we use in ObjectIdSubclassMap. This permits us to have a very light-weight hash, with very little memory overhead per cell stored. As we only need two ints per record in the map (line/block key and number of bytes), we collapse them into a single long inside of a long array, making very efficient use of available memory when we create the index table. We only need object headers for the index structure itself and the index table, but not per cell. This offers a massive space savings over using java.util.HashMap.

The score calculation is done by approximating how many bytes are the same between the two inputs (which for a delta would be how much is copied from the base into the result). The score is derived by dividing the approximate number of bytes in common by the length of the larger of the two input files.

Right now the SimilarityIndex table should average about 1/2 full, which means we waste about 50% of our memory on empty entries after we are done indexing a file and sort the table's contents. If memory becomes an issue we could discard the table and copy all records over to a new array that is properly sized.
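The two-ints-in-one-long record layout described above can be sketched as follows. The constants mirror the encoding in the text (17-bit key shifted high, leaving the sign bit clear, byte count in the low bits); the demo class and method names are illustrative, not part of JGit:

```java
// Sketch: a content key and a byte count packed into one long,
// as described above. Class/method names are illustrative only.
public class PackedEntryDemo {
	static final int MAX_HASH_BITS = 17;
	// Leave the highest bit unset so every packed value stays positive.
	static final int KEY_SHIFT = 64 - 1 - MAX_HASH_BITS;

	static long pack(int key, int cnt) {
		// Key occupies the high bits, byte count the low 32 bits.
		return (((long) key) << KEY_SHIFT) | cnt;
	}

	static int keyOf(long v) {
		return (int) (v >>> KEY_SHIFT);
	}

	static int countOf(long v) {
		return (int) v;
	}

	public static void main(String[] args) {
		long v = pack(12345, 678);
		// Round-trips both fields; 0 is reserved for "empty slot".
		System.out.println(keyOf(v) + " " + countOf(v));
	}
}
```

Because an empty slot is the all-zero long and a real record always has a non-zero count, no separate occupancy bitmap is needed in the open-addressed table.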
Building the index requires O(M + N log N) time, where M is the size of the input file in bytes, and N is the number of unique lines/blocks in the file. The N log N term comes from sorting the index table, which is necessary to perform linear time matching against another SimilarityIndex created for a different file.

To actually perform the rename detection, an S x D matrix is created, placing the sources (aka deletions) along one dimension and the destinations (aka additions) along the other. A simple O(S x D) loop examines every cell in this matrix. A SimilarityIndex is built along the row and reused for each column compare along that row, avoiding the costly index rebuild at the row level. A future improvement would be to load a smaller square matrix into SimilarityIndexes and process everything in that sub-matrix before discarding the column dimension and moving down to the next sub-matrix block along that same grid of rows.

An optional ProgressMonitor is permitted to be passed in, allowing applications to see the progress of the detector as it works through the matrix cells. This provides some indication of current status for very long running renames.

The default line/block hash function used by the SimilarityIndex may not be optimal, and may produce too many collisions. It is borrowed from RawText's hash, which is used to quickly skip out of a longer equality test if two lines have different hash codes. We may need to refine this hash in the future, in order to minimize the number of collisions we get on common source files.

Based on a handful of test commits in JGit (especially my own recent rename repository refactoring series), this rename detector produces output that is very close to C Git. The content similarity scores are sometimes off by 1%, which is most probably caused by our SimilarityIndex type using a different hash function than C Git uses when it computes the delta size between any two objects in the rename matrix.
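The score formula used in each matrix cell (approximate common bytes divided by the larger of the two file sizes, scaled to a 0-100 percentage) can be sketched in isolation; the method name here is illustrative:

```java
// Sketch of the per-pair similarity score described above:
// common bytes over the larger file size, as a 0..100 percentage.
public class ScoreDemo {
	static int score(long commonBytes, long srcSize, long dstSize) {
		long max = Math.max(srcSize, dstSize);
		return (int) ((commonBytes * 100L) / max);
	}

	public static void main(String[] args) {
		// Two 8-byte files sharing 6 bytes score 75, which would pass
		// the default rename threshold of 60.
		System.out.println(score(6, 8, 8));
	}
}
```

A pair is only recorded in the matrix when this score reaches the configured rename threshold, so most low-scoring cells never consume a matrix slot.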
Bug: 318504
Change-Id: I11dff969e8a2e4cf252636d857d2113053bdd9dc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>

stable-0.9
Shawn O. Pearce
15 years ago
11 changed files with 1362 additions and 258 deletions
@@ -0,0 +1,137 @@
/*
 * Copyright (C) 2010, Google Inc.
 * and other copyright owners as documented in the project's IP log.
 *
 * This program and the accompanying materials are made available
 * under the terms of the Eclipse Distribution License v1.0 which
 * accompanies this distribution, is reproduced below, and is
 * available at http://www.eclipse.org/org/documents/edl-v10.php
 *
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or
 * without modification, are permitted provided that the following
 * conditions are met:
 *
 * - Redistributions of source code must retain the above copyright
 *   notice, this list of conditions and the following disclaimer.
 *
 * - Redistributions in binary form must reproduce the above
 *   copyright notice, this list of conditions and the following
 *   disclaimer in the documentation and/or other materials provided
 *   with the distribution.
 *
 * - Neither the name of the Eclipse Foundation, Inc. nor the
 *   names of its contributors may be used to endorse or promote
 *   products derived from this software without specific prior
 *   written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
 * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
 * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
 * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
 * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

package org.eclipse.jgit.diff;

import junit.framework.TestCase;

import org.eclipse.jgit.lib.Constants;

public class SimilarityIndexTest extends TestCase {
	public void testIndexing() {
		SimilarityIndex si = hash("" //
				+ "A\n" //
				+ "B\n" //
				+ "D\n" //
				+ "B\n" //
		);

		int key_A = keyFor("A\n");
		int key_B = keyFor("B\n");
		int key_D = keyFor("D\n");
		assertTrue(key_A != key_B && key_A != key_D && key_B != key_D);

		assertEquals(3, si.size());
		assertEquals(2, si.count(si.findIndex(key_A)));
		assertEquals(4, si.count(si.findIndex(key_B)));
		assertEquals(2, si.count(si.findIndex(key_D)));
	}

	public void testCommonScore_SameFiles() {
		String text = "" //
				+ "A\n" //
				+ "B\n" //
				+ "D\n" //
				+ "B\n";
		SimilarityIndex src = hash(text);
		SimilarityIndex dst = hash(text);
		assertEquals(8, src.common(dst));
		assertEquals(8, dst.common(src));

		assertEquals(100, src.score(dst));
		assertEquals(100, dst.score(src));
	}

	public void testCommonScore_EmptyFiles() {
		SimilarityIndex src = hash("");
		SimilarityIndex dst = hash("");
		assertEquals(0, src.common(dst));
		assertEquals(0, dst.common(src));
	}

	public void testCommonScore_TotallyDifferentFiles() {
		SimilarityIndex src = hash("A\n");
		SimilarityIndex dst = hash("D\n");
		assertEquals(0, src.common(dst));
		assertEquals(0, dst.common(src));
	}

	public void testCommonScore_SimiliarBy75() {
		SimilarityIndex src = hash("A\nB\nC\nD\n");
		SimilarityIndex dst = hash("A\nB\nC\nQ\n");
		assertEquals(6, src.common(dst));
		assertEquals(6, dst.common(src));

		assertEquals(75, src.score(dst));
		assertEquals(75, dst.score(src));
	}

	private static SimilarityIndex hash(String text) {
		SimilarityIndex src = new SimilarityIndex() {
			@Override
			void hash(byte[] raw, int ptr, final int end) {
				while (ptr < end) {
					int hash = raw[ptr] & 0xff;
					int start = ptr;
					do {
						int c = raw[ptr++] & 0xff;
						if (c == '\n')
							break;
					} while (ptr < end && ptr - start < 64);
					add(hash, ptr - start);
				}
			}
		};
		byte[] raw = Constants.encode(text);
		src.setFileSize(raw.length);
		src.hash(raw, 0, raw.length);
		src.sort();
		return src;
	}

	private static int keyFor(String line) {
		SimilarityIndex si = hash(line);
		assertEquals("single line scored", 1, si.size());
		return si.key(0);
	}
}
@@ -0,0 +1,295 @@
/*
 * Copyright (C) 2010, Google Inc.
 * and other copyright owners as documented in the project's IP log.
 *
 * This program and the accompanying materials are made available
 * under the terms of the Eclipse Distribution License v1.0 which
 * accompanies this distribution, is reproduced below, and is
 * available at http://www.eclipse.org/org/documents/edl-v10.php
 *
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or
 * without modification, are permitted provided that the following
 * conditions are met:
 *
 * - Redistributions of source code must retain the above copyright
 *   notice, this list of conditions and the following disclaimer.
 *
 * - Redistributions in binary form must reproduce the above
 *   copyright notice, this list of conditions and the following
 *   disclaimer in the documentation and/or other materials provided
 *   with the distribution.
 *
 * - Neither the name of the Eclipse Foundation, Inc. nor the
 *   names of its contributors may be used to endorse or promote
 *   products derived from this software without specific prior
 *   written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
 * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
 * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
 * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
 * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

package org.eclipse.jgit.diff;

import java.util.Arrays;

import org.eclipse.jgit.lib.ObjectLoader;

/**
 * Index structure of lines/blocks in one file.
 * <p>
 * This structure can be used to compute an approximation of the similarity
 * between two files. The index is used by {@link SimilarityRenameDetector} to
 * compute scores between files.
 * <p>
 * To save space in memory, this index uses a space efficient encoding which
 * will not exceed 1 MiB per instance. The index starts out at a smaller size
 * (closer to 2 KiB), but may grow as more distinct blocks within the scanned
 * file are discovered.
 */
class SimilarityIndex {
	/** The {@link #idHash} table stops growing at {@code 1 << MAX_HASH_BITS}. */
	private static final int MAX_HASH_BITS = 17;

	/** The {@link #idHash} table will not grow bigger than this, ever. */
	private static final int MAX_HASH_SIZE = 1 << MAX_HASH_BITS;

	/** Prime just before {@link #MAX_HASH_SIZE}. */
	private static final int P = 131071;

	/**
	 * Shift to apply before storing a key.
	 * <p>
	 * Within the 64 bit table record space, we leave the highest bit unset so
	 * all values are positive, and we need {@link #MAX_HASH_BITS} bits for the
	 * keys. The lower 32 bits are used to count bytes impacted.
	 */
	private static final int KEY_SHIFT = 64 - 1 - MAX_HASH_BITS;

	/** Total size of the file we hashed into the structure. */
	private long fileSize;

	/** Number of non-zero entries in {@link #idHash}. */
	private int idSize;

	/**
	 * Pairings of content keys and counters.
	 * <p>
	 * Slots in the table are actually two ints wedged into a single long. The
	 * upper {@link #MAX_HASH_BITS} bits stores the content key, and the
	 * remaining lower bits stores the number of bytes associated with that key.
	 * Empty slots are denoted by 0, which cannot occur because the count cannot
	 * be 0. Values can only be positive, which we enforce during key addition.
	 */
	private long[] idHash;

	SimilarityIndex() {
		idHash = new long[256];
	}

	long getFileSize() {
		return fileSize;
	}

	void setFileSize(long size) {
		fileSize = size;
	}

	void hash(ObjectLoader obj) {
		byte[] raw = obj.getCachedBytes();
		setFileSize(raw.length);
		hash(raw, 0, raw.length);
	}

	void hash(byte[] raw, int ptr, final int end) {
		while (ptr < end) {
			int hash = 5381;
			int start = ptr;

			// Hash one line, or one block, whichever occurs first.
			do {
				int c = raw[ptr++] & 0xff;
				if (c == '\n')
					break;
				hash = (hash << 5) ^ c;
			} while (ptr < end && ptr - start < 64);
			add(hash, ptr - start);
		}
	}

	/**
	 * Sort the internal table so it can be used for efficient scoring.
	 * <p>
	 * Once sorted, additional lines/blocks cannot be added to the index.
	 */
	void sort() {
		// Sort the array. All of the empty space will wind up at the front,
		// because we forced all of the keys to always be positive. Later
		// we only work with the back half of the array.
		//
		Arrays.sort(idHash);
	}

	int score(SimilarityIndex dst) {
		long max = Math.max(fileSize, dst.fileSize);
		return (int) ((common(dst) * 100L) / max);
	}

	int common(SimilarityIndex dst) {
		return common(this, dst);
	}

	private static int common(SimilarityIndex src, SimilarityIndex dst) {
		int srcIdx = src.packedIndex(0);
		int dstIdx = dst.packedIndex(0);
		long[] srcHash = src.idHash;
		long[] dstHash = dst.idHash;
		return common(srcHash, srcIdx, dstHash, dstIdx);
	}

	private static int common(long[] srcHash, int srcIdx, //
			long[] dstHash, int dstIdx) {
		if (srcIdx == srcHash.length || dstIdx == dstHash.length)
			return 0;

		int common = 0;
		int srcKey = keyOf(srcHash[srcIdx]);
		int dstKey = keyOf(dstHash[dstIdx]);

		for (;;) {
			if (srcKey == dstKey) {
				common += countOf(dstHash[dstIdx]);

				if (++srcIdx == srcHash.length)
					break;
				srcKey = keyOf(srcHash[srcIdx]);

				if (++dstIdx == dstHash.length)
					break;
				dstKey = keyOf(dstHash[dstIdx]);

			} else if (srcKey < dstKey) {
				// Regions of src which do not appear in dst.
				if (++srcIdx == srcHash.length)
					break;
				srcKey = keyOf(srcHash[srcIdx]);

			} else /* if (srcKey > dstKey) */{
				// Regions of dst which do not appear in src.
				if (++dstIdx == dstHash.length)
					break;
				dstKey = keyOf(dstHash[dstIdx]);
			}
		}

		return common;
	}

	// Testing only
	int size() {
		return idSize;
	}

	// Testing only
	int key(int idx) {
		return keyOf(idHash[packedIndex(idx)]);
	}

	// Testing only
	long count(int idx) {
		return countOf(idHash[packedIndex(idx)]);
	}

	// Brute force approach only for testing.
	int findIndex(int key) {
		for (int i = 0; i < idSize; i++)
			if (key(i) == key)
				return i;
		return -1;
	}

	private int packedIndex(int idx) {
		return (idHash.length - idSize) + idx;
	}

	void add(int key, int cnt) {
		key = hash(key);
		int j = slot(key);
		for (;;) {
			long v = idHash[j];
			if (v == 0) {
				// Empty slot in the table, store here.
				if (shouldGrow()) {
					grow();
					j = slot(key);
					continue;
				}
				idHash[j] = (((long) key) << KEY_SHIFT) | cnt;
				idSize++;
				return;

			} else if (keyOf(v) == key) {
				// Same key, increment the counter.
				idHash[j] = v + cnt;
				return;

			} else if (++j >= idHash.length) {
				j = 0;
			}
		}
	}

	private static int hash(int key) {
		// Make the key fit into our table. Since we have a maximum size
		// that we cap the table at, all keys get squashed before going
		// into the table. This prevents overflow.
		//
		return (key >>> 1) % P;
	}

	private int slot(int key) {
		return key % idHash.length;
	}

	private boolean shouldGrow() {
		int n = idHash.length;
		return n < MAX_HASH_SIZE && n <= idSize * 2;
	}

	private void grow() {
		long[] oldHash = idHash;
		int oldSize = idHash.length;

		idHash = new long[2 * oldSize];
		for (int i = 0; i < oldSize; i++) {
			long v = oldHash[i];
			if (v != 0) {
				int j = slot(keyOf(v));
				while (idHash[j] != 0)
					if (++j >= idHash.length)
						j = 0;
				idHash[j] = v;
			}
		}
	}

	private static int keyOf(long v) {
		return (int) (v >>> KEY_SHIFT);
	}

	private static int countOf(long v) {
		return (int) v;
	}
}
@@ -0,0 +1,295 @@
/*
 * Copyright (C) 2010, Google Inc.
 * and other copyright owners as documented in the project's IP log.
 *
 * This program and the accompanying materials are made available
 * under the terms of the Eclipse Distribution License v1.0 which
 * accompanies this distribution, is reproduced below, and is
 * available at http://www.eclipse.org/org/documents/edl-v10.php
 *
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or
 * without modification, are permitted provided that the following
 * conditions are met:
 *
 * - Redistributions of source code must retain the above copyright
 *   notice, this list of conditions and the following disclaimer.
 *
 * - Redistributions in binary form must reproduce the above
 *   copyright notice, this list of conditions and the following
 *   disclaimer in the documentation and/or other materials provided
 *   with the distribution.
 *
 * - Neither the name of the Eclipse Foundation, Inc. nor the
 *   names of its contributors may be used to endorse or promote
 *   products derived from this software without specific prior
 *   written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
 * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
 * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
 * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
 * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

package org.eclipse.jgit.diff;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.eclipse.jgit.JGitText;
import org.eclipse.jgit.diff.DiffEntry.ChangeType;
import org.eclipse.jgit.lib.FileMode;
import org.eclipse.jgit.lib.NullProgressMonitor;
import org.eclipse.jgit.lib.ObjectId;
import org.eclipse.jgit.lib.ProgressMonitor;
import org.eclipse.jgit.lib.Repository;

class SimilarityRenameDetector {
	/**
	 * Number of bits we need to express an index into src or dst list.
	 * <p>
	 * This must be 28, giving us a limit of 2^28 entries in either list, which
	 * is an insane limit of 536,870,912 file names being considered in a single
	 * rename pass. The other 8 bits are used to store the score, while staying
	 * under 127 so the long doesn't go negative.
	 */
	private static final int BITS_PER_INDEX = 28;

	private static final int INDEX_MASK = (1 << BITS_PER_INDEX) - 1;

	private static final int SCORE_SHIFT = 2 * BITS_PER_INDEX;

	private final Repository repo;

	/**
	 * All sources to consider for copies or renames.
	 * <p>
	 * A source is typically a {@link ChangeType#DELETE} change, but could be
	 * another type when trying to perform copy detection concurrently with
	 * rename detection.
	 */
	private List<DiffEntry> srcs;

	/**
	 * All destinations to consider looking for a rename.
	 * <p>
	 * A destination is typically an {@link ChangeType#ADD}, as the name has
	 * just come into existence, and we want to discover where its initial
	 * content came from.
	 */
	private List<DiffEntry> dsts;

	/**
	 * Matrix of all examined file pairs, and their scores.
	 * <p>
	 * The upper 8 bits of each long stores the score, but the score is bounded
	 * to be in the range (0, 128] so that the highest bit is never set, and all
	 * entries are therefore positive.
	 * <p>
	 * List indexes to an element of {@link #srcs} and {@link #dsts} are encoded
	 * as the lower two groups of 28 bits, respectively, but the encoding is
	 * inverted, so that 0 is expressed as {@code (1 << 28) - 1}. This sorts
	 * lower list indices later in the matrix, giving precedence to files whose
	 * names sort earlier in the tree.
	 */
	private long[] matrix;

	/** Score a pair must exceed to be considered a rename. */
	private int renameScore = 60;

	private List<DiffEntry> out;

	SimilarityRenameDetector(Repository repo, List<DiffEntry> srcs,
			List<DiffEntry> dsts) {
		this.repo = repo;
		this.srcs = srcs;
		this.dsts = dsts;
	}

	void setRenameScore(int score) {
		renameScore = score;
	}

	void compute(ProgressMonitor pm) throws IOException {
		if (pm == null)
			pm = NullProgressMonitor.INSTANCE;

		pm.beginTask(JGitText.get().renamesFindingByContent, //
				2 * srcs.size() * dsts.size());

		int mNext = buildMatrix(pm);
		out = new ArrayList<DiffEntry>(Math.min(mNext, dsts.size()));

		// Match rename pairs on a first come, first serve basis until
		// we have looked at everything that is above our minimum score.
		//
		for (--mNext; mNext >= 0; mNext--) {
			long ent = matrix[mNext];
			int sIdx = srcFile(ent);
			int dIdx = dstFile(ent);
			DiffEntry s = srcs.get(sIdx);
			DiffEntry d = dsts.get(dIdx);

			if (d == null) {
				pm.update(1);
				continue; // was already matched earlier
			}

			ChangeType type;
			if (s.changeType == ChangeType.DELETE) {
				// First use of this source file. Tag it as a rename so we
				// later know it has already been used as a rename; other
				// matches (if any) will claim themselves as copies instead.
				//
				s.changeType = ChangeType.RENAME;
				type = ChangeType.RENAME;
			} else {
				type = ChangeType.COPY;
			}

			out.add(DiffEntry.pair(type, s, d, score(ent)));
			dsts.set(dIdx, null); // Claim the destination was matched.
			pm.update(1);
		}

		srcs = compactSrcList(srcs);
		dsts = compactDstList(dsts);
		pm.endTask();
	}

	List<DiffEntry> getMatches() {
		return out;
	}

	List<DiffEntry> getLeftOverSources() {
		return srcs;
	}

	List<DiffEntry> getLeftOverDestinations() {
		return dsts;
	}

	private static List<DiffEntry> compactSrcList(List<DiffEntry> in) {
		ArrayList<DiffEntry> r = new ArrayList<DiffEntry>(in.size());
		for (DiffEntry e : in) {
			if (e.changeType == ChangeType.DELETE)
				r.add(e);
		}
		return r;
	}

	private static List<DiffEntry> compactDstList(List<DiffEntry> in) {
		ArrayList<DiffEntry> r = new ArrayList<DiffEntry>(in.size());
		for (DiffEntry e : in) {
			if (e != null)
				r.add(e);
		}
		return r;
	}

	private int buildMatrix(ProgressMonitor pm) throws IOException {
		// Allocate for the worst-case scenario where every pair has a
		// score that we need to consider. We might not need that many.
		//
		matrix = new long[srcs.size() * dsts.size()];

		// Consider each pair of files; if the score is above the minimum
		// threshold we need to record that scoring in the matrix so we can
		// later find the best matches.
		//
		int mNext = 0;
		for (int srcIdx = 0; srcIdx < srcs.size(); srcIdx++) {
			DiffEntry srcEnt = srcs.get(srcIdx);
			if (!isFile(srcEnt.oldMode)) {
				pm.update(dsts.size());
				continue;
			}

			SimilarityIndex s = hash(srcEnt.oldId.toObjectId());
			for (int dstIdx = 0; dstIdx < dsts.size(); dstIdx++) {
				DiffEntry dstEnt = dsts.get(dstIdx);

				if (!isFile(dstEnt.newMode)) {
					pm.update(1);
					continue;
				}

				if (!RenameDetector.sameType(srcEnt.oldMode, dstEnt.newMode)) {
					pm.update(1);
					continue;
				}

				SimilarityIndex d = hash(dstEnt.newId.toObjectId());
				int score = s.score(d);

				if (score < renameScore) {
					pm.update(1);
					continue;
				}

				matrix[mNext++] = encode(score, srcIdx, dstIdx);
				pm.update(1);
			}
		}

		// Sort everything in the range we populated, which might be the
		// entire matrix, or just a smaller slice if we had some bad low
		// scoring pairs.
		//
		Arrays.sort(matrix, 0, mNext);
		return mNext;
	}

	private SimilarityIndex hash(ObjectId objectId) throws IOException {
		SimilarityIndex r = new SimilarityIndex();
		r.hash(repo.openObject(objectId));
		r.sort();
		return r;
	}

	private static int score(long value) {
		return (int) (value >>> SCORE_SHIFT);
	}

	private static int srcFile(long value) {
		return decodeFile(((int) (value >>> BITS_PER_INDEX)) & INDEX_MASK);
	}

	private static int dstFile(long value) {
		return decodeFile(((int) value) & INDEX_MASK);
	}

	private static long encode(int score, int srcIdx, int dstIdx) {
		return (((long) score) << SCORE_SHIFT) //
				| (encodeFile(srcIdx) << BITS_PER_INDEX) //
				| encodeFile(dstIdx);
	}

	private static long encodeFile(int idx) {
		// We invert the index so that the first file in the list sorts
		// later in the table. This permits us to break ties favoring
		// earlier names over later ones.
		//
		return INDEX_MASK - idx;
	}

	private static int decodeFile(int v) {
		return INDEX_MASK - v;
	}

	private static boolean isFile(FileMode mode) {
		return (mode.getBits() & FileMode.TYPE_MASK) == FileMode.TYPE_FILE;
	}
}