Fix deterministic partitioning with dataset containing Double.NaN (#5996)

The functions featureValueOfSparseVector and featureValueOfDenseVector could return Float.NaN if the input vector contained any missing values. Since NaN propagates through the addition, the partition key computation then hashed the same value for every affected row, so most of the vectors ended up in the same partition. We fix this by never returning a NaN and simply using the row hash code in that case.
We added a test ensuring that the repartitioning is now uniform on an input dataset containing missing values, by checking that the variance of the partition sizes stays below a threshold.
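A minimal standalone sketch (not the project's actual DataUtils code; object and method names are made up for illustration) of why a NaN feature value collapsed the partitioning and how the fix restores spread: adding NaN to any Long yields NaN, whose string form hashes identically for every row.

```scala
object NaNPartitionDemo {
  // Before the fix: NaN propagates through the sum, so
  // (rowHashCode.toLong + NaN).toString is always "NaN" and every
  // affected row gets the same partition key.
  def partitionKey(rowHashCode: Int, featureValue: Float, numPartitions: Int): Int =
    math.abs((rowHashCode.toLong + featureValue).toString.hashCode % numPartitions)

  // After the fix: a NaN feature value is replaced by 0.0f, so the key
  // depends only on the row hash code and varies across rows.
  def fixedPartitionKey(rowHashCode: Int, featureValue: Float, numPartitions: Int): Int = {
    val nonNaN = if (featureValue.isNaN) 0.0f else featureValue
    math.abs((rowHashCode.toLong + nonNaN).toString.hashCode % numPartitions)
  }

  def main(args: Array[String]): Unit = {
    val broken = (1 to 5).map(h => partitionKey(h, Float.NaN, 10))
    println(broken.distinct.size)   // 1: all rows land in one partition
    val fixed = (1 to 5).map(h => fixedPartitionKey(h, Float.NaN, 10))
    println(fixed.distinct.size)    // 5: keys now vary with the row hash
  }
}
```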

Signed-off-by: Anthony D'Amato <anthony.damato@hotmail.fr>
This commit is contained in:
Anthony D'Amato
2020-08-19 03:55:37 +02:00
committed by GitHub
parent e51cba6195
commit f58e41bad8
2 changed files with 33 additions and 1 deletions


@@ -103,7 +103,8 @@ object DataUtils extends Serializable {
       case sparseVector: SparseVector =>
         featureValueOfSparseVector(rowHashCode, sparseVector)
     }
-    math.abs((rowHashCode.toLong + featureValue).toString.hashCode % numPartitions)
+    val nonNaNFeatureValue = if (featureValue.isNaN) { 0.0f } else { featureValue }
+    math.abs((rowHashCode.toLong + nonNaNFeatureValue).toString.hashCode % numPartitions)
   }

   private def attachPartitionKey(