Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 2.2.0
- Fix Version/s: None
- Component/s: None
Description
This issue tracks fixing a StackOverflowError in branch-2.2. The Apache master branch does not throw the error. Reproduction in the spark-shell:

scala> spark.version
res0: String = 2.2.0

scala> sql("CREATE TABLE t_1000 (a INT, p INT) USING PARQUET PARTITIONED BY (p)")
res1: org.apache.spark.sql.DataFrame = []

scala> (1 to 1000).foreach(p => sql(s"ALTER TABLE t_1000 ADD PARTITION (p=$p)"))

scala> sql("SELECT COUNT(DISTINCT p) FROM t_1000").collect
java.lang.StackOverflowError
  at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1522)
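The failure mode behind the stack trace is generic: default Java serialization recurses once per object link, so a sufficiently deep object graph overflows the stack inside `ObjectOutputStream`, and marking the offending field `transient` (as the duplicated issue's title suggests) stops the recursion. A minimal sketch in plain Java, using hypothetical `Node`/`TransientNode` classes (illustrative only, not Spark's code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class TransientDemo {
    // Hypothetical node with a non-transient link: serialization
    // recurses once per link, so a long chain blows the stack in
    // ObjectOutputStream, as in the reported trace.
    static class Node implements Serializable {
        Node next;
    }

    // Same shape, but the link is transient, so serialization stops
    // at the first node instead of walking the whole chain.
    static class TransientNode implements Serializable {
        transient TransientNode next;
    }

    static Node buildChain(int length) {
        Node head = new Node();
        Node cur = head;
        for (int i = 0; i < length; i++) {
            cur.next = new Node();
            cur = cur.next;
        }
        return head;
    }

    static TransientNode buildTransientChain(int length) {
        TransientNode head = new TransientNode();
        TransientNode cur = head;
        for (int i = 0; i < length; i++) {
            cur.next = new TransientNode();
            cur = cur.next;
        }
        return head;
    }

    // Returns true if serializing the object overflows the stack.
    static boolean overflows(Object o) {
        try (ObjectOutputStream oos =
                 new ObjectOutputStream(new ByteArrayOutputStream())) {
            oos.writeObject(o);
            return false;
        } catch (StackOverflowError e) {
            return true;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("deep chain overflows: "
            + overflows(buildChain(1_000_000)));
        System.out.println("transient chain overflows: "
            + overflows(buildTransientChain(1_000_000)));
    }
}
```

With default JVM stack sizes, the non-transient chain overflows while the transient one serializes fine; the fix referenced below (mark `LocalTableScanExec`'s input data transient) applies the same idea.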
Issue Links
- duplicates SPARK-21477 "Mark LocalTableScanExec's input data transient" (Resolved)