SPARK-43389

spark.read.csv throws NullPointerException when lineSep is set to None


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Trivial
    • Resolution: Fixed
    • Affects Version/s: 3.3.1
    • Fix Version/s: 3.5.0
    • Component/s: PySpark, SQL
    • Labels: None

    Description

      lineSep is defined as Optional[str], yet I'm unable to explicitly set it to None:

      reader = spark.read.format("csv")
      read_options = {
          'inferSchema': False,
          'header': True,
          'mode': 'DROPMALFORMED',
          'sep': '\t',
          'escape': '\\',
          'multiLine': False,
          'lineSep': None,
      }
      for option, option_value in read_options.items():
          reader = reader.option(option, option_value)
      df = reader.load("s3://<path-to-csv-data>")

      This raises the following exception:

      py4j.protocol.Py4JJavaError: An error occurred while calling o126.load.
      : java.lang.NullPointerException
      at scala.collection.immutable.StringOps$.length$extension(StringOps.scala:51)
      at scala.collection.immutable.StringOps.length(StringOps.scala:51)
      at scala.collection.IndexedSeqOptimized.isEmpty(IndexedSeqOptimized.scala:30)
      at scala.collection.IndexedSeqOptimized.isEmpty$(IndexedSeqOptimized.scala:30)
      at scala.collection.immutable.StringOps.isEmpty(StringOps.scala:33)
      at scala.collection.TraversableOnce.nonEmpty(TraversableOnce.scala:143)
      at scala.collection.TraversableOnce.nonEmpty$(TraversableOnce.scala:143)
      at scala.collection.immutable.StringOps.nonEmpty(StringOps.scala:33)
      at org.apache.spark.sql.catalyst.csv.CSVOptions.$anonfun$lineSeparator$1(CSVOptions.scala:216)
      at scala.Option.map(Option.scala:230)
      at org.apache.spark.sql.catalyst.csv.CSVOptions.<init>(CSVOptions.scala:215)
      at org.apache.spark.sql.catalyst.csv.CSVOptions.<init>(CSVOptions.scala:47)
      at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.inferSchema(CSVFileFormat.scala:60)
      at org.apache.spark.sql.execution.datasources.DataSource.$anonfun$getOrInferFileFormatSchema$11(DataSource.scala:210)
      at scala.Option.orElse(Option.scala:447)
      at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:207)
      at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:411)
      at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:228)
      at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:210)
      at scala.Option.getOrElse(Option.scala:189)
      at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:210)
      at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:185)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:498)
      at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
      at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
      at py4j.Gateway.invoke(Gateway.java:282)
      at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
      at py4j.commands.CallCommand.execute(CallCommand.java:79)
      at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
      at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
      at java.lang.Thread.run(Thread.java:750)
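
      A possible workaround, assuming a None value is only meant to fall back to Spark's default line separator, is to skip unset options rather than forwarding them: per the trace above, the option value reaches the JVM as null, and CSVOptions then maps over lineSep and calls nonEmpty on the null String (CSVOptions.scala:216). A minimal sketch (option names mirror the snippet above; the S3 path is a placeholder):

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.getOrCreate()

      read_options = {'inferSchema': False, 'header': True, 'sep': '\t',
                      'multiLine': False, 'lineSep': None}

      reader = spark.read.format("csv")
      for option, option_value in read_options.items():
          # Skip unset options instead of forwarding a Python None, which
          # would reach the JVM as null and trip the nonEmpty check above.
          if option_value is not None:
              reader = reader.option(option, option_value)
      df = reader.load("s3://<path-to-csv-data>")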

          People

            Assignee: gdhuper (Gurpreet Singh)
            Reporter: zach liu (Zach Liu)