Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 4.0.0
Description
Currently, the list of charsets supported by encode() is not stable and depends entirely on the JDK version in use. As a result, user code can break when an operator changes the Java version on a Spark cluster. This ticket aims to restrict the list of supported charsets to:
'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16'
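The allow-list approach described above can be sketched as follows. This is a minimal, self-contained illustration, not the actual Spark implementation; the class and method names are hypothetical:

```java
import java.nio.charset.Charset;
import java.util.Locale;
import java.util.Set;

// Hypothetical sketch: encode() validates the charset against a fixed
// allow-list, so behavior no longer depends on which charsets the
// running JDK happens to support.
class EncodeCharsets {
    static final Set<String> SUPPORTED = Set.of(
        "US-ASCII", "ISO-8859-1", "UTF-8", "UTF-16BE", "UTF-16LE", "UTF-16");

    static byte[] encode(String input, String charsetName) {
        // Charset names are case-insensitive, so normalize before checking.
        String normalized = charsetName.toUpperCase(Locale.ROOT);
        if (!SUPPORTED.contains(normalized)) {
            throw new IllegalArgumentException(
                "Unsupported charset: " + charsetName);
        }
        return input.getBytes(Charset.forName(normalized));
    }
}
```

With this check, a charset like KOI8-R is rejected up front even on JDKs that can encode it, giving consistent behavior across cluster Java versions.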
Attachments
Issue Links
- is cloned by: SPARK-46220 Restrict charsets in decode() (Resolved)
- links to