[Bug] NPE when pushing down min/max to a partitioned table where one partition is all null #6610

@xieshuaihu

Description

Search before asking

  • I searched in the issues and found nothing similar.

Paimon version

1.4-SNAPSHOT

Compute Engine

Spark

Minimal reproduce step

spark.sql("CREATE TABLE T (c1 INT, c2 LONG) PARTITIONED BY(day STRING)")

spark.sql("INSERT INTO T VALUES (1, 2, '2025-11-10')")
spark.sql("INSERT INTO T VALUES (null, 2, '2025-11-11')")

spark.sql("SELECT MIN(c1) FROM T").collect()
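The failing comparison can be reproduced in isolation. The `2025-11-11` partition contains only a NULL `c1`, so its partition-level min statistic is null, and `Integer.compareTo` throws when handed a null argument (it unboxes it). The class and variable names below are illustrative, not Paimon code:

```java
// Isolated reproduction of the failing comparison from the stack trace:
// Integer.compareTo throws an NPE when its argument is null.
public class CompareToNpe {
    public static void main(String[] args) {
        Integer stored = 1;       // min statistic of the 2025-11-10 partition
        Integer incoming = null;  // min statistic of the all-null 2025-11-11 partition
        try {
            stored.compareTo(incoming); // unboxes `incoming` -> NullPointerException
        } catch (NullPointerException e) {
            System.out.println("NPE, matching the reported stack trace");
        }
    }
}
```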

What doesn't meet your expectations?

The query should return 1 (the minimum of the non-null values), but instead it fails with a NullPointerException:

[INTERNAL_ERROR] The Spark SQL phase optimization failed with an internal error. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace.
org.apache.spark.SparkException: [INTERNAL_ERROR] The Spark SQL phase optimization failed with an internal error. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace.
at org.apache.spark.SparkException$.internalError(SparkException.scala:107)
at org.apache.spark.sql.execution.QueryExecution$.toInternalError(QueryExecution.scala:536)
at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:548)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:219)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:218)
at org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:148)
...
Caused by: java.lang.NullPointerException
at java.lang.Integer.compareTo(Integer.java:1216)
at java.lang.Integer.compareTo(Integer.java:52)
at org.apache.paimon.predicate.CompareUtils.compareLiteral(CompareUtils.java:31)
at org.apache.paimon.spark.aggregate.MinEvaluator.update(AggFuncEvaluator.scala:63)
at org.apache.paimon.spark.aggregate.LocalAggregator.$anonfun$update$2(LocalAggregator.scala:98)
...
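A minimal sketch of a possible guard: skip null partition statistics during the min update, so an all-null partition never reaches `compareTo`. This is a hypothetical Java accumulator for illustration only; the actual fix would land in `MinEvaluator.update` (`AggFuncEvaluator.scala`) or `CompareUtils.compareLiteral`:

```java
// Hypothetical null-safe min accumulator sketching the guard; not Paimon code.
public class NullSafeMin {
    private Integer acc = null;

    // Skip null statistics so an all-null partition cannot trigger the NPE.
    public void update(Integer value) {
        if (value == null) {
            return;
        }
        if (acc == null || value.compareTo(acc) < 0) {
            acc = value;
        }
    }

    public Integer result() {
        return acc;
    }

    public static void main(String[] args) {
        NullSafeMin min = new NullSafeMin();
        min.update(null); // all-null partition, as in the repro
        min.update(1);
        System.out.println(min.result()); // prints 1
    }
}
```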

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!
