Details
- Type: Improvement
- Status: In Progress
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 2.0.0, 2.0.1
- Fix Version/s: None
- Component/s: None
Description
This was discussed in SPARK-17878.
For other data sources, a single string/long/boolean/double value per option seems to be enough, but it is not sufficient for a data source such as CSV. Since this is an interface exposed to external data sources, it would likely affect several of them.
I took a first look, but it seems this would be difficult to support (it would require a lot of changes).
One suggestion is to support this as a JSON array.
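The JSON-array suggestion above could be sketched as follows. This is a minimal, Spark-independent illustration; the helper names `encode_option` and `decode_option` are hypothetical and are not Spark APIs. The idea is that an option value remains a plain string for the common single-value case, while a JSON array string carries multiple values:

```python
import json


def encode_option(values):
    """Encode multiple option values as a single JSON-array string,
    as suggested in the description."""
    return json.dumps(values)


def decode_option(raw):
    """Decode an option value: a JSON array string yields a list of
    values; any other string is treated as a single value."""
    try:
        parsed = json.loads(raw)
        if isinstance(parsed, list):
            return [str(v) for v in parsed]
    except json.JSONDecodeError:
        pass
    return [raw]


# A data source could then accept either form for, e.g., null values:
print(decode_option(encode_option(["NULL", "N/A", "-"])))  # ['NULL', 'N/A', '-']
print(decode_option("NULL"))                               # ['NULL']
```

This keeps the option interface string-typed (so existing data sources are unaffected) while letting a data source such as CSV opt in to multiple values per key.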
Attachments
Issue Links
- blocks
  - SPARK-24540 Support for multiple character delimiter in Spark CSV read (Resolved)
- is related to
  - SPARK-17878 Support for multiple null values when reading CSV data (Resolved)