Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 1.6.1
- Fix Version/s: None
- Component/s: None
- Labels: Dev
Description
Hi Team,
We are using the Java API to submit a Spark job that inserts data into a Hive table. When we perform an Append operation, Spark inserts the data into Hive assuming the Hive columns are in alphabetical order. We do not see this issue when using PySpark. We are using Spark 1.6.1 and Hive 1.0 on EMR 4.6.0.
Researching the issue further, I found that a similar issue has been reported against Spark 2.2.0. Could you please advise whether this issue is present in Spark 1.6.1 as well?
https://issues.apache.org/jira/browse/SPARK-14543
As per our understanding, both PySpark and Java-invoked Spark use the same Spark APIs in the backend for this operation. Please advise if this is not the case.
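For context on the behavior described above: `insertInto` resolves DataFrame columns against the target table by position, not by name, so a DataFrame whose columns arrive in a different order (e.g. alphabetical) than the Hive table's writes values into the wrong columns. The following is a minimal stdlib sketch of that pitfall, not Spark itself; the table and column names are hypothetical.

```python
# Hypothetical Hive table column order vs. a DataFrame whose columns
# ended up in alphabetical order.
table_cols = ["name", "id"]           # Hive table: (name, id)
df_cols = ["id", "name"]              # DataFrame: alphabetical order
df_rows = [(1, "alice"), (2, "bob")]  # row tuples follow df_cols order

def insert_positional(rows):
    """Mimics insertInto: values are matched to table columns by position."""
    return [dict(zip(table_cols, row)) for row in rows]

def insert_by_name(rows):
    """Workaround sketch: reorder each row into the table's column order
    first, analogous to df.select("name", "id").write.insertInto(table)."""
    idx = [df_cols.index(c) for c in table_cols]
    return [dict(zip(table_cols, tuple(row[i] for i in idx))) for row in rows]

# Positional matching puts the id value into the "name" column;
# reordering by name first keeps the columns aligned.
misaligned = insert_positional(df_rows)
aligned = insert_by_name(df_rows)
```

The practical workaround in Spark (any API language) is to `select` the DataFrame columns in the target table's order before calling `insertInto`.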
Issue Links
- duplicates: SPARK-14543 SQL/Hive insertInto has unexpected results (Resolved)