CRUNCH-597

Unable to process parquet files using Hadoop


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.13.0
    • Fix Version/s: 0.14.0
    • Component/s: Core, IO
    • Labels:
      None

      Description

      The current version of parquet-hadoop results in the following stack trace when attempting to read from a parquet file.

      java.lang.Exception: java.lang.ClassCastException: org.apache.hadoop.mapreduce.lib.input.FileSplit cannot be cast to parquet.hadoop.ParquetInputSplit
      	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:406)
      Caused by: java.lang.ClassCastException: org.apache.hadoop.mapreduce.lib.input.FileSplit cannot be cast to parquet.hadoop.ParquetInputSplit
      	at parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:107)
      	at org.apache.crunch.impl.mr.run.CrunchRecordReader.initialize(CrunchRecordReader.java:140)
      	at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:478)
      	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:671)
      	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
      	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:268)
      	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      	at java.lang.Thread.run(Thread.java:745)
      

      Here is the relevant code snippet, which yields the above stack trace when executed locally:

      Pipeline pipeline = new MRPipeline(Crunch.class, conf);
      PCollection<Pair<String, Observation>> observations =
          pipeline.read(AvroParquetFileSource.builder(record).build(new Path(args[0])))
                  .parallelDo(new TranslateFn(),
                              Avros.tableOf(Avros.strings(), Avros.specifics(Observation.class)));
      for (Pair<String, Observation> pair : observations.materialize()) {
        System.out.println(pair.second());
      }
      PipelineResult result = pipeline.done();
      
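
      For context on the failure mode: judging from the stack trace, ParquetRecordReader.initialize downcasts the InputSplit it receives to parquet.hadoop.ParquetInputSplit, while the split actually handed over by the framework is a plain org.apache.hadoop.mapreduce.lib.input.FileSplit. The following minimal, self-contained Java sketch reproduces the mechanism; the nested classes are stand-ins mirroring the inheritance relationship of the real types, not the Hadoop/Parquet classes themselves.

      ```java
      public class CastSketch {
          // Stand-ins, NOT the real classes: ParquetInputSplit extends FileSplit,
          // mirroring parquet.hadoop.ParquetInputSplit and
          // org.apache.hadoop.mapreduce.lib.input.FileSplit.
          static class FileSplit {}
          static class ParquetInputSplit extends FileSplit {}

          public static void main(String[] args) {
              // The framework supplies a plain FileSplit...
              FileSplit split = new FileSplit();
              try {
                  // ...but the record reader downcasts it. The cast compiles,
                  // yet the runtime object is not a ParquetInputSplit, so the
                  // JVM throws ClassCastException -- the same failure seen in
                  // ParquetRecordReader.initialize.
                  ParquetInputSplit p = (ParquetInputSplit) split;
                  System.out.println("cast succeeded");
              } catch (ClassCastException e) {
                  System.out.println("ClassCastException: FileSplit cannot be cast to ParquetInputSplit");
              }
          }
      }
      ```

      A downcast only succeeds when the runtime object really is an instance of the target subclass, which is why wrapping or converting the split (rather than casting it) is the kind of fix a patch would need.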

        Attachments

        1. CRUNCH-597.patch
          10 kB
          Josh Wills


            People

            • Assignee:
              jwills Josh Wills
            • Reporter:
              srosenberg Stan Rosenberg
            • Votes:
              0
            • Watchers:
              2
