Crunch (Retired) / CRUNCH-507

Potential NPE in SparkPipeline constructor and additional constructor


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.11.0
    • Fix Version/s: 0.12.0
    • Component/s: Core
    • Labels: None

    Description

      I was looking at the SparkPipeline constructor API and trying to maximize the number of settings inherited when a Spark job is submitted with "spark-submit". Submitting that way should populate the SparkContext (and JavaSparkContext) with values like the Spark master. If you want to:

      • specify a driver class,
      • provide a Hadoop Configuration (instead of picking up the defaults), and
      • inherit a pre-populated SparkContext,

      you currently have to use a constructor call like:

      JavaSparkContext sc = new JavaSparkContext(new SparkConf());
      new SparkPipeline(sc.master(), sc.appName(), Driver.class, conf);
      

      Just for convenience, we could add a constructor like the following:

      public SparkPipeline(JavaSparkContext sc, String appName, Class<?> driver, Configuration conf)
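
      For comparison, with such a constructor the caller could hand the context over directly instead of unpacking its master and app name. A hypothetical usage sketch (MyDriver is a placeholder driver class, not part of Crunch):

      import org.apache.crunch.Pipeline;
      import org.apache.crunch.impl.spark.SparkPipeline;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.spark.SparkConf;
      import org.apache.spark.api.java.JavaSparkContext;

      JavaSparkContext sc = new JavaSparkContext(new SparkConf());
      Configuration conf = new Configuration();
      // The master and app name come from the context itself.
      Pipeline pipeline = new SparkPipeline(sc, sc.appName(), MyDriver.class, conf);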
      

      We could drop the appName parameter and read it from the context instead, but since the Spark context is not guaranteed to be non-null, we might get an NPE. For the same reason, on this line [1] we could throw an NPE when trying to pull hadoopConfiguration() off that object; a defensive sketch follows the link below.

      [1] - https://github.com/apache/crunch/blob/3ab0b078c47f23b3ba893fdfb05fd723f663d02b/crunch-spark/src/main/java/org/apache/crunch/impl/spark/SparkPipeline.java#L73
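
      A minimal sketch of the kind of null guard that would avoid that NPE. SparkConfHelper and resolveConf are hypothetical names for illustration only; they are not part of Crunch or of the attached patch:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.spark.api.java.JavaSparkContext;

      final class SparkConfHelper {
        // Only touch hadoopConfiguration() when the context is actually present;
        // otherwise prefer the caller-supplied conf or fall back to a fresh Configuration.
        static Configuration resolveConf(JavaSparkContext sc, Configuration conf) {
          if (conf != null) {
            return conf;
          }
          return (sc != null) ? sc.hadoopConfiguration() : new Configuration();
        }
      }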

      Attachments

        1. CRUNCH-507.patch (1 kB) by Micah Whitacre


          People

            Assignee: Micah Whitacre (mkwhitacre)
            Reporter: Micah Whitacre (mkwhitacre)
            Votes: 0
            Watchers: 2
