Details
- Type: Bug
- Status: Resolved
- Priority: Normal
- Resolution: Fixed
- Labels: None
- Bug Category: Correctness - Consistency
- Severity: Normal
- Complexity: Normal
- Discovered By: User Report
- Platform: All
- Impacts: None
Description
When doing bulk reads with the analytics library, a user can request the last modified timestamp column as an option, and the bulk reader adds that column to the resulting data frame. If the user then persists the bulk-read data frame with the bulk writer, using the WriterOptions.TIMESTAMP feature pointed at the last modified column, the bulk write fails with a data type mapping error:
Caused by: java.lang.RuntimeException: Unsupported conversion for LONG from java.sql.Timestamp
at org.apache.cassandra.spark.bulkwriter.SqlToCqlTypeConverter$LongConverter.convertInternal(SqlToCqlTypeConverter.java:245)
at org.apache.cassandra.spark.bulkwriter.SqlToCqlTypeConverter$LongConverter.convertInternal(SqlToCqlTypeConverter.java:231)
at org.apache.cassandra.spark.bulkwriter.SqlToCqlTypeConverter$Converter.convert(SqlToCqlTypeConverter.java:203)
at org.apache.cassandra.spark.bulkwriter.SqlToCqlTypeConverter$NullableConverter.convert(SqlToCqlTypeConverter.java:212)
at org.apache.cassandra.spark.bulkwriter.TableSchema.normalize(TableSchema.java:91)
at org.apache.spark.api.java.JavaPairRDD$.$anonfun$toScalaFunction$1(JavaPairRDD.scala:1070)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
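The failure is consistent with the types involved: a CQL write timestamp is a long with microsecond precision, while the last modified column materializes in the data frame as java.sql.Timestamp, which LongConverter rejects. Below is a minimal sketch, not the project's actual patch, of the kind of Timestamp-to-microseconds conversion the writer would need to accept such a column; the class and method names are hypothetical.

import java.sql.Timestamp;
import java.util.concurrent.TimeUnit;

public final class TimestampToLongSketch
{
    // Hypothetical helper: normalize a data frame value into the
    // microsecond-precision long that a CQL write timestamp expects.
    public static long toCqlTimestampMicros(Object value)
    {
        if (value instanceof Long)
        {
            // Already a long; pass it through unchanged.
            return (Long) value;
        }
        if (value instanceof Timestamp)
        {
            Timestamp ts = (Timestamp) value;
            // getTime() carries millisecond precision; add the sub-millisecond
            // part of the nanos field to recover full microsecond precision.
            return TimeUnit.MILLISECONDS.toMicros(ts.getTime())
                   + (ts.getNanos() % 1_000_000) / 1_000;
        }
        throw new RuntimeException("Unsupported conversion for LONG from "
                                   + value.getClass().getName());
    }
}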