Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 2.1.0
- Labels: None
Description
It looks like a bug was introduced in Spark 2.1.0 that prevents reading data from a Parquet table (with Hive support enabled) whose name starts with an underscore. CREATE and INSERT statements on the same table seem to work as expected.
The problem can be reproduced from spark-shell through the following steps:
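A plausible explanation (an assumption on my part, not confirmed in this report) is that Spark's file listing treats paths whose names start with an underscore or a dot as hidden metadata files and skips them, which would also hide a table directory named `_a`. A minimal sketch of that kind of filter:

```scala
// Sketch of a hidden-path filter of the kind Spark applies when listing
// data files. The helper name is hypothetical; the real logic lives in
// Spark's file-index code and has extra exceptions (e.g. _metadata files).
def isLikelyFilteredOut(name: String): Boolean =
  name.startsWith("_") || name.startsWith(".")
```

Under this assumption, `isLikelyFilteredOut("_a")` is true while `isLikelyFilteredOut("a")` is false, which would match the observed behavior: writes succeed, but the read path finds no files.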
1) Create a table with some values
scala> spark.sql("CREATE TABLE `_a`(i INT) USING parquet").show
scala> spark.sql("INSERT INTO `_a` VALUES (1), (2), (3)").show
2) Select data from the just-created and populated table --> no results:
scala> spark.sql("SELECT * FROM `_a`").show
+---+
|  i|
+---+
+---+
3) Rename the table so that the leading underscore disappears
scala> spark.sql("ALTER TABLE `_a` RENAME TO `a`").show
4) Select data from the just-renamed table --> results are shown:
scala> spark.sql("SELECT * FROM `a`").show
+---+
|  i|
+---+
|  1|
|  2|
|  3|
+---+
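To confirm that the rows in step 1 were in fact written despite the empty SELECT, the table's directory can be listed on disk directly. A small sketch (the helper name and the default warehouse path are assumptions; adjust to your setup):

```scala
import java.nio.file.{Files, Paths}
import scala.collection.JavaConverters._

// List the file names under a table directory (hypothetical helper).
def listTableFiles(dir: String): Seq[String] = {
  val p = Paths.get(dir)
  if (!Files.exists(p)) Seq.empty
  else Files.list(p).iterator().asScala.map(_.getFileName.toString).toList
}

// With the default warehouse location (an assumption), the Parquet part
// files for `_a` should be present even while SELECT returns no rows,
// pointing at a read-side listing problem rather than a write failure:
// listTableFiles("spark-warehouse/_a")
```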