SPARK-37913

Null Pointer Exception when Loading ML Pipeline Model with Custom Transformer


Details

    • Type: Bug
    • Status: Open
    • Priority: Critical
    • Resolution: Unresolved
    • Affects Version/s: 3.1.2
    • Fix Version/s: None
    • Component/s: Spark Core
    • Environment: Spark 3.1.2, Scala 2.12, Java 11

    Description

      I am trying to create and persist an ML pipeline model using a custom transformer that I created based on the Unary Transformer example provided by Spark. I am able to save and load the transformer on its own. However, when I include the custom transformer as a stage in a pipeline model, I can save the model but am unable to load it. Here is the stack trace of the exception:

       

      01-14-2022 03:49:52 PM ERROR Instrumentation: java.lang.NullPointerException
        at java.base/java.lang.reflect.Method.invoke(Method.java:559)
        at org.apache.spark.ml.util.DefaultParamsReader$.loadParamsInstanceReader(ReadWrite.scala:631)
        at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$load$4(Pipeline.scala:276)
        at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
        at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
        at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
        at scala.collection.TraversableLike.map(TraversableLike.scala:238)
        at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
        at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)
        at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$load$3(Pipeline.scala:274)
        at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
        at scala.util.Try$.apply(Try.scala:213)
        at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
        at org.apache.spark.ml.Pipeline$SharedReadWrite$.load(Pipeline.scala:268)
        at org.apache.spark.ml.PipelineModel$PipelineModelReader.$anonfun$load$7(Pipeline.scala:356)
        at org.apache.spark.ml.MLEvents.withLoadInstanceEvent(events.scala:160)
        at org.apache.spark.ml.MLEvents.withLoadInstanceEvent$(events.scala:155)
        at org.apache.spark.ml.util.Instrumentation.withLoadInstanceEvent(Instrumentation.scala:42)
        at org.apache.spark.ml.PipelineModel$PipelineModelReader.$anonfun$load$6(Pipeline.scala:355)
        at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
        at scala.util.Try$.apply(Try.scala:213)
        at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
        at org.apache.spark.ml.PipelineModel$PipelineModelReader.load(Pipeline.scala:355)
        at org.apache.spark.ml.PipelineModel$PipelineModelReader.load(Pipeline.scala:349)
        at org.apache.spark.ml.util.MLReadable.load(ReadWrite.scala:355)
        at org.apache.spark.ml.util.MLReadable.load$(ReadWrite.scala:355)
        at org.apache.spark.ml.PipelineModel$.load(Pipeline.scala:337)
        at com.dtech.scala.pipeline.PipelineProcess.process(PipelineProcess.scala:122)
        at com.dtech.scala.pipeline.PipelineProcess$.main(PipelineProcess.scala:448)
        at com.dtech.scala.pipeline.PipelineProcess.main(PipelineProcess.scala)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:65)
        at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)

       

      Source Code

      Unary Transformer
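
      The attached source is not reproduced here. As a point of reference only, a minimal sketch of a custom transformer along the lines of Spark's documented UnaryTransformer example is shown below; the class name MyShiftTransformer and the shift param are illustrative assumptions, not the reporter's actual code.

      import org.apache.spark.ml.UnaryTransformer
      import org.apache.spark.ml.param.DoubleParam
      import org.apache.spark.ml.util.{DefaultParamsReadable, DefaultParamsWritable, Identifiable}
      import org.apache.spark.sql.types.{DataType, DataTypes}

      // Minimal custom transformer modeled on Spark's UnaryTransformer example:
      // adds a configurable shift to a Double input column.
      class MyShiftTransformer(override val uid: String)
        extends UnaryTransformer[Double, Double, MyShiftTransformer] with DefaultParamsWritable {

        def this() = this(Identifiable.randomUID("myShift"))

        // Param with getter/setter so it round-trips through DefaultParamsWriter/Reader.
        final val shift: DoubleParam = new DoubleParam(this, "shift", "value added to the input")
        def getShift: Double = $(shift)
        def setShift(value: Double): this.type = set(shift, value)

        override protected def createTransformFunc: Double => Double =
          (input: Double) => input + $(shift)

        override protected def validateInputType(inputType: DataType): Unit =
          require(inputType == DataTypes.DoubleType, s"Input type must be DoubleType but got $inputType")

        override protected def outputDataType: DataType = DataTypes.DoubleType
      }

      // Companion object providing the read/load entry point used when a saved
      // transformer or pipeline containing this stage is read back.
      object MyShiftTransformer extends DefaultParamsReadable[MyShiftTransformer]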

      Persist Unary Transformer & Pipeline Model
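
      The reporter's persistence code is likewise not reproduced here. A minimal sketch of the scenario described above is shown below, assuming the MyShiftTransformer sketch from the previous section; the object name, /tmp paths, and toy data are illustrative. The final PipelineModel.load call is the step at which the reported NullPointerException is thrown.

      import org.apache.spark.ml.{Pipeline, PipelineModel}
      import org.apache.spark.sql.SparkSession

      object PersistPipelineExample {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder().appName("PersistPipelineExample").getOrCreate()

          val df = spark.createDataFrame(Seq((0, 1.0), (1, 2.0))).toDF("id", "value")

          val myTransformer = new MyShiftTransformer()
            .setShift(0.5)
            .setInputCol("value")
            .setOutputCol("shifted")

          // Saving and reloading the transformer on its own works.
          myTransformer.write.overwrite().save("/tmp/my-transformer")
          val reloaded = MyShiftTransformer.load("/tmp/my-transformer")

          // Saving a PipelineModel that contains the custom stage also works...
          val model: PipelineModel = new Pipeline().setStages(Array(myTransformer)).fit(df)
          model.write.overwrite().save("/tmp/my-pipeline-model")

          // ...but loading it back is where the NullPointerException in the
          // stack trace above is reported (PipelineModel.load -> DefaultParamsReader).
          val loadedModel = PipelineModel.load("/tmp/my-pipeline-model")
          loadedModel.transform(df).show()

          spark.stop()
        }
      }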


          People

            Assignee: Unassigned
            Reporter: Alana Young (ally1221)
            Votes: 0
            Watchers: 4
