
Commit cd43dcc

address comments
Signed-off-by: Hongbin Ma (Mahone) <[email protected]>
1 parent a6a3cc0 commit cd43dcc

File tree

3 files changed (+2 −2 lines changed)


docs/additional-functionality/advanced_configs.md

Lines changed: 0 additions & 1 deletion
@@ -50,7 +50,6 @@ Name | Description | Default Value | Applicable at
 <a name="shuffle.ucx.activeMessages.forceRndv"></a>spark.rapids.shuffle.ucx.activeMessages.forceRndv|Set to true to force 'rndv' mode for all UCX Active Messages. This should only be required with UCX 1.10.x. UCX 1.11.x deployments should set to false.|false|Startup
 <a name="shuffle.ucx.managementServerHost"></a>spark.rapids.shuffle.ucx.managementServerHost|The host to be used to start the management server|null|Startup
 <a name="shuffle.ucx.useWakeup"></a>spark.rapids.shuffle.ucx.useWakeup|When set to true, use UCX's event-based progress (epoll) in order to wake up the progress thread when needed, instead of a hot loop.|true|Startup
-<a name="sql.agg.skipAggPassReductionRatio"></a>spark.rapids.sql.agg.skipAggPassReductionRatio|In non-final aggregation stages, if the previous pass has a row reduction ratio greater than this value, the next aggregation pass will be skipped.Setting this to 1 essentially disables this feature.|0.85|Runtime
 <a name="sql.allowMultipleJars"></a>spark.rapids.sql.allowMultipleJars|Allow multiple rapids-4-spark, spark-rapids-jni, and cudf jars on the classpath. Spark will take the first one it finds, so the version may not be expected. Possisble values are ALWAYS: allow all jars, SAME_REVISION: only allow jars with the same revision, NEVER: do not allow multiple jars at all.|SAME_REVISION|Startup
 <a name="sql.castDecimalToFloat.enabled"></a>spark.rapids.sql.castDecimalToFloat.enabled|Casting from decimal to floating point types on the GPU returns results that have tiny difference compared to results returned from CPU.|true|Runtime
 <a name="sql.castFloatToDecimal.enabled"></a>spark.rapids.sql.castFloatToDecimal.enabled|Casting from floating point types to decimal on the GPU returns results that have tiny difference compared to results returned from CPU.|true|Runtime

sql-plugin/src/main/scala/com/nvidia/spark/rapids/GpuAggregateExec.scala

Lines changed: 1 addition & 1 deletion
@@ -983,7 +983,7 @@ class GpuMergeAggregateIterator(
     // It's only based on first batch of first pass agg, so it's an estimate
     val firstPassReductionRatioEstimate = 1.0 * peek.numRows() / localInputRowsCount.value
     if (firstPassReductionRatioEstimate > skipAggPassReductionRatio) {
-      logDebug("Skipping second and third pass aggregation due to " +
+      logInfo("Skipping second and third pass aggregation due to " +
         "too high reduction ratio in first pass: " +
         s"$firstPassReductionRatioEstimate")
       // if so, skip any aggregation, return the origin batch directly
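The skip decision changed above can be sketched standalone. This is a hedged illustration, not the plugin's actual class: the object and method names below are hypothetical, but the ratio arithmetic and threshold comparison mirror the diff (output rows of the first pass divided by input rows, compared against `skipAggPassReductionRatio`, default 0.85).

```scala
// Standalone sketch (hypothetical names, not GpuMergeAggregateIterator):
// if the first aggregation pass barely reduced the row count, later merge
// passes are unlikely to help, so they can be skipped.
object SkipAggPassSketch {
  // Mirrors the config's default of 0.85.
  val skipAggPassReductionRatio: Double = 0.85

  def shouldSkipRemainingPasses(outputRows: Long, inputRows: Long): Boolean = {
    // A ratio close to 1.0 means almost no rows were combined in the first pass.
    val firstPassReductionRatioEstimate = 1.0 * outputRows / inputRows
    firstPassReductionRatioEstimate > skipAggPassReductionRatio
  }

  def main(args: Array[String]): Unit = {
    println(shouldSkipRemainingPasses(950, 1000)) // 0.95 > 0.85 -> true, skip
    println(shouldSkipRemainingPasses(100, 1000)) // 0.10 -> false, keep aggregating
  }
}
```

Note the estimate is based only on the first batch of the first pass (per the comment in the diff), so a slightly conservative threshold like 0.85 leaves headroom for estimation error.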

sql-plugin/src/main/scala/com/nvidia/spark/rapids/RapidsConf.scala

Lines changed: 1 addition & 0 deletions
@@ -1604,6 +1604,7 @@ val GPU_COREDUMP_PIPE_PATTERN = conf("spark.rapids.gpu.coreDump.pipePattern")
     .doc("In non-final aggregation stages, if the previous pass has a row reduction ratio " +
       "greater than this value, the next aggregation pass will be skipped." +
       "Setting this to 1 essentially disables this feature.")
+    .internal()
     .doubleConf
     .checkValue(v => v >= 0 && v <= 1, "The ratio value must be in [0, 1].")
     .createWithDefault(0.85)
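The `.checkValue` guard in this hunk can be illustrated with a minimal standalone sketch. The object and method names below are hypothetical (the real builder lives in RapidsConf), but the bounds check and error message match the diff: ratios outside [0, 1] are rejected eagerly at config-parse time rather than failing mid-query.

```scala
// Hypothetical sketch of the [0, 1] bounds check applied via .checkValue.
object RatioCheckSketch {
  def checkRatio(v: Double): Double = {
    require(v >= 0 && v <= 1, "The ratio value must be in [0, 1].")
    v
  }
}
```

The default of 0.85 passes the check; 1.0 is also valid and, per the doc string, effectively disables the skip-pass feature.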

0 commit comments
