Apache Hudi / HUDI-1535

Hudi spark datasource fails w/ NoClassDefFoundError: org/apache/hudi/client/common/HoodieEngineContext



    Description

      I tried the Quick Start Guide with the latest master.

       

      // first insert
      scala> df.write.format("hudi").
           |   options(getQuickstartWriteConfigs).
           |   option(PRECOMBINE_FIELD_OPT_KEY, "ts").
           |   option(RECORDKEY_FIELD_OPT_KEY, "uuid").
           |   option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
           |   option(TABLE_NAME, tableName).
           |   mode(Overwrite).
           |   save(basePath)
      java.lang.NoClassDefFoundError: org/apache/hudi/client/common/HoodieEngineContext
        at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:120)
        at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:134)
        at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
        at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
        at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
        ... 68 elided
      Caused by: java.lang.ClassNotFoundException: org.apache.hudi.client.common.HoodieEngineContext
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 91 more
      

      Command used to launch spark-shell:

      ./bin/spark-shell   --packages org.apache.spark:spark-avro_2.11:2.4.4   --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'   --jars hudi-utilities-bundle_2.11-0.7.0-SNAPSHOT.jar
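One way to narrow this down is to check whether the missing class is actually packaged inside the bundle jar passed via `--jars`. A diagnostic sketch (assumes the jar file from the command above is in the current directory):

```shell
# List the bundle's contents and search for the class named in the
# NoClassDefFoundError. No output means the class is not packaged in
# the jar, which would explain the failure at runtime.
jar tf hudi-utilities-bundle_2.11-0.7.0-SNAPSHOT.jar \
  | grep 'org/apache/hudi/client/common/HoodieEngineContext'
```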
      

      Steps to repro:

      Follow the Quick Start Guide, launching spark-shell with the command above.

      People

        Assignee: sivabalan narayanan (shivnarayan)
        Reporter: sivabalan narayanan (shivnarayan)