Spark / SPARK-20599

ConsoleSink should work with write (batch)

      Description

      I think the following should just work.

      spark.
        read.  // <-- it's a batch query not streaming query if that matters
        format("kafka").
        option("subscribe", "topic1").
        option("kafka.bootstrap.servers", "localhost:9092").
        load.
        write.
        format("console").  // <-- that's not supported currently
        save
      

      The above combination of Kafka source and console sink leads to the following exception:

      java.lang.RuntimeException: org.apache.spark.sql.execution.streaming.ConsoleSinkProvider does not allow create table as select.
        at scala.sys.package$.error(package.scala:27)
        at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:479)
        at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:93)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:93)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:610)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:233)
        ... 48 elided
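Until `ConsoleSinkProvider` supports batch writes, the batch-side equivalent of the console sink is `DataFrame.show()`. A minimal sketch of the workaround, assuming a local Spark session and the same Kafka options as in the snippet above (the object name and app name are hypothetical, and a Kafka broker must be reachable at `localhost:9092`):

```scala
import org.apache.spark.sql.SparkSession

object ConsoleWorkaround {
  def main(args: Array[String]): Unit = {
    // Assumption: local mode, purely for illustration.
    val spark = SparkSession.builder()
      .appName("console-workaround")
      .master("local[*]")
      .getOrCreate()

    // Same batch Kafka read as in the report above.
    val df = spark.read
      .format("kafka")
      .option("subscribe", "topic1")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .load()

    // Instead of write.format("console").save, print the rows directly.
    df.show(truncate = false)

    spark.stop()
  }
}
```

`show()` prints to stdout on the driver, which is what the console sink does for streaming queries, so the behavior is equivalent for a batch query.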
      


            People

            • Assignee: Lubo Zhang
            • Reporter: Jacek Laskowski (jlaskowski)
            • Shepherd: Shixiong Zhu
            • Votes: 0
            • Watchers: 8
