Description
There are currently 319 failing tests, based on commit `f5360e761ef161f7e04526b59a4baf53f1cf8cd5`.
Run completed in 1 hour, 20 minutes, 25 seconds.
Total number of tests run: 8485
Suites: completed 357, aborted 0
Tests: succeeded 8166, failed 319, canceled 1, ignored 52, pending 0
*** 319 TESTS FAILED ***
293 of the failures come from the TPCDS_XXX_PlanStabilitySuite and TPCDS_XXX_PlanStabilityWithStatsSuite suites:
- TPCDSV2_7_PlanStabilitySuite (33 FAILED)
- TPCDSV1_4_PlanStabilityWithStatsSuite (94 FAILED)
- TPCDSModifiedPlanStabilityWithStatsSuite (21 FAILED)
- TPCDSV1_4_PlanStabilitySuite (92 FAILED)
- TPCDSModifiedPlanStabilitySuite (21 FAILED)
- TPCDSV2_7_PlanStabilityWithStatsSuite (32 FAILED)
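For reference, the plan-stability suites above compare generated query plans against golden files checked into the repository, so a sketch of how they are typically re-run and regenerated (commands assume a Spark source checkout; the `SPARK_GENERATE_GOLDEN_FILES` flag is the mechanism documented in the suites themselves):

```shell
# Re-run only the TPC-DS plan stability suites to reproduce the failures
build/sbt "sql/testOnly *PlanStability*Suite"

# Regenerate the golden plan files if the new plans are the expected ones
SPARK_GENERATE_GOLDEN_FILES=1 build/sbt "sql/testOnly *PlanStability*Suite"
```

Regenerating is only appropriate once the Scala 2.13 plan differences are understood to be benign; otherwise it would mask the underlying ordering/hashing discrepancy.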
The other 26 failed cases are as follows:
- StreamingAggregationSuite
  - count distinct - state format version 1
  - count distinct - state format version 2
- GeneratorFunctionSuite
  - explode and other columns
  - explode_outer and other columns
- UDFSuite
  - SPARK-26308: udf with complex types of decimal
  - SPARK-32459: UDF should not fail on WrappedArray
- SQLQueryTestSuite
  - decimalArithmeticOperations.sql
  - postgreSQL/aggregates_part2.sql
  - ansi/decimalArithmeticOperations.sql
  - udf/postgreSQL/udf-aggregates_part2.sql - Scala UDF
  - udf/postgreSQL/udf-aggregates_part2.sql - Regular Python UDF
- WholeStageCodegenSuite
  - SPARK-26680: Stream in groupBy does not cause StackOverflowError
- DataFrameSuite
  - explode
  - SPARK-28067: Aggregate sum should not return wrong results for decimal overflow
  - Star Expansion - ds.explode should fail with a meaningful message if it takes a star
- DataStreamReaderWriterSuite
  - SPARK-18510: use user specified types for partition columns in file sources
- OrcV1QuerySuite / OrcV2QuerySuite
  - Simple selection form ORC table (×2, one per suite)
- ExpressionsSchemaSuite
  - Check schemas for expression examples
- DataFrameStatSuite
  - SPARK-28818: Respect original column nullability in `freqItems`
- JsonV1Suite / JsonV2Suite / JsonLegacyTimeParserSuite
  - SPARK-4228 DataFrame to JSON (×3, one per suite)
  - backward compatibility (×3, one per suite)
Issue Links
- is duplicated by
  - SPARK-32848 CostBasedJoinReorder should produce same result in Scala 2.12 and 2.13 with same input (Resolved)
- relates to
  - SPARK-33524 Change `InMemoryTable` not to use Tuple.hashCode for `BucketTransform` (Resolved)