Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version: connectors-6.0.0
- Labels: None
Description
For Spark 2, it was possible to omit some columns from the DataFrame, just as it is not mandatory to specify every column when upserting via SQL.
Spark 3 has added new checks that require EVERY SQL column to be specified in the DataFrame.
Consequently, when using the current API, writing fails unless all columns are specified.
This is a loss of functionality with respect to Phoenix (and other SQL datastores) compared to Spark 2.
I don't think we can do anything from the Phoenix side; I'm just documenting the regression here.
Maybe future Spark versions will make this configurable.
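To illustrate the regression, a minimal sketch of a partial-column write. The table name, columns, and ZooKeeper URL are assumptions for illustration, not taken from the issue; option names follow the connector's documented usage.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().getOrCreate()

// Assume a hypothetical Phoenix table PHOENIX_TABLE with columns (ID, COL1, COL2).
// The DataFrame deliberately omits COL2, just as SQL UPSERT would allow:
//   UPSERT INTO PHOENIX_TABLE (ID, COL1) VALUES (1, 'a')
val df = spark.createDataFrame(Seq((1, "a"))).toDF("ID", "COL1")

df.write
  .format("phoenix")
  .mode(SaveMode.Overwrite)
  .option("table", "PHOENIX_TABLE")
  .option("zkUrl", "localhost:2181")
  .save()

// Spark 2 connector: succeeds; the omitted column is simply not upserted.
// Spark 3 connector: fails schema validation because COL2 is missing
// from the DataFrame.
```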
Issue Links
- fixes
  - PHOENIX-6668 Spark3 connector cannot distinguish column name cases (Resolved)
- is broken by
  - PHOENIX-6632 Migrate connectors to Spark-3 (Resolved)
- links to