SPARK-21365: Deduplicate logic for parsing DDL-like type definitions


    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.2.0
    • Fix Version/s: 2.3.0
    • Component/s: PySpark
    • Labels:
      None

      Description

      It looks like we duplicate the logic at https://github.com/apache/spark/blob/d492cc5a21cd67b3999b85d97f5c41c3734b1ba3/python/pyspark/sql/types.py#L823-L845 for parsing DDL-like type definitions.

      There are also two more problems with the current Python-side parser:

      • It only supports the "field: type" form, not "field type".
      • It does not support nested schemas.

      Both can be seen below:
      >>> spark.createDataFrame([[[1]]], "struct<a: struct<b: int>>").show()
      ...
      ValueError: The strcut field string format is: 'field_name:field_type', but got: a: struct<b: int>
      
      >>> spark.createDataFrame([[[1]]], "a: struct<b: int>").show()
      ...
      ValueError: The strcut field string format is: 'field_name:field_type', but got: a: struct<b: int>
      
      >>> spark.createDataFrame([[[1]]], "a int").show()
      ...
      ValueError: Could not parse datatype: a int
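
      One way to remove the duplication could be to have the Python side delegate DDL parsing to the JVM parser instead of re-implementing it. The sketch below is illustrative only: it assumes an active SparkContext, reaches the Scala StructType.fromDDL and CatalystSqlParser.parseDataType entry points through the Py4J gateway, and the name parse_ddl plus the fallback order are assumptions rather than the actual change.

      # Illustrative sketch: delegate DDL-like type parsing to the JVM parser
      # so the Python side does not keep its own copy of the parsing logic.
      from pyspark import SparkContext
      from pyspark.sql.types import _parse_datatype_json_string

      def parse_ddl(type_str):
          jvm = SparkContext._active_spark_context._jvm
          try:
              # Schema strings such as "a INT, b STRING".
              jdt = jvm.org.apache.spark.sql.types.StructType.fromDDL(type_str)
          except Exception:
              # Bare data types, including nested ones such as
              # "struct<a: struct<b: int>>". Reaching the Scala object through
              # Py4J like this is an assumption about how it would be exposed.
              parser = jvm.org.apache.spark.sql.catalyst.parser.CatalystSqlParser
              jdt = parser.parseDataType(type_str)
          # Round-trip through JSON to rebuild the Python-side DataType.
          return _parse_datatype_json_string(jdt.json())

      With something along these lines, all three strings above would be parsed by the same grammar the Scala side uses, so both the "field type" and nested struct forms would be accepted.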
      

              People

              • Assignee: hyukjin.kwon (Hyukjin Kwon)
              • Reporter: hyukjin.kwon (Hyukjin Kwon)
              • Votes: 0
              • Watchers: 3
