PARQUET-284: Should use ConcurrentHashMap instead of HashMap in ParquetMetadataConverter


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.6.1
    • Fix Version/s: 1.8.0
    • Component/s: parquet-mr
    • Labels: None
    • Environment: Spark 1.2.1, CentOS 6.4

    Description

      When using Parquet in a Spark environment, tasks sometimes hang with the following thread dump:
      "Executor task launch worker-0" daemon prio=10 tid=0x000000004073d000 nid=0xd6c5 runnable [0x00007ff3fda40000]
      java.lang.Thread.State: RUNNABLE
      at java.util.HashMap.get(HashMap.java:303)
      at parquet.format.converter.ParquetMetadataConverter.fromFormatEncodings(ParquetMetadataConverter.java:218)
      at parquet.format.converter.ParquetMetadataConverter.fromParquetMetadata(ParquetMetadataConverter.java:543)
      at parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:520)
      at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:426)
      at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:381)
      at parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:161)
      at parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:138)
      at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:135)
      at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:107)
      at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:69)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
      at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
      at org.apache.spark.rdd.FlatMappedRDD.compute(FlatMappedRDD.scala:33)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
      at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
      at org.apache.spark.scheduler.Task.run(Task.scala:56)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
      at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
      at java.lang.Thread.run(Thread.java:662)

      From the source code of ParquetMetadataConverter:

      private Map<List<Encoding>, Set<parquet.column.Encoding>> encodingLists = new HashMap<List<Encoding>, Set<parquet.column.Encoding>>();

      It uses a HashMap instead of a ConcurrentHashMap. HashMap is not thread-safe: concurrent puts can corrupt its internal bucket structure, and a subsequent get() can then loop forever, which matches the RUNNABLE thread stuck in HashMap.get above. It should be changed to a ConcurrentHashMap.
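      A minimal sketch of the proposed change, assuming the cache maps the Thrift encoding lists to the converted encoding sets inside fromFormatEncodings (the exact generic types and the getEncoding helper are reconstructed from the stack trace, not copied from the source):

      import java.util.Collections;
      import java.util.HashSet;
      import java.util.List;
      import java.util.Set;
      import java.util.concurrent.ConcurrentHashMap;

      // ConcurrentHashMap keeps the table consistent under concurrent access,
      // so readers can no longer spin forever inside get().
      private final ConcurrentHashMap<List<Encoding>, Set<parquet.column.Encoding>> encodingLists =
          new ConcurrentHashMap<List<Encoding>, Set<parquet.column.Encoding>>();

      Set<parquet.column.Encoding> fromFormatEncodings(List<Encoding> encodings) {
        Set<parquet.column.Encoding> converted = encodingLists.get(encodings);
        if (converted == null) {
          converted = new HashSet<parquet.column.Encoding>();
          for (Encoding encoding : encodings) {
            converted.add(getEncoding(encoding)); // getEncoding: assumed converter helper
          }
          converted = Collections.unmodifiableSet(converted);
          // putIfAbsent resolves the race when two threads compute the set at once:
          // whichever entry lands first becomes the canonical cached value.
          Set<parquet.column.Encoding> existing = encodingLists.putIfAbsent(encodings, converted);
          if (existing != null) {
            converted = existing;
          }
        }
        return converted;
      }

      Synchronizing fromFormatEncodings would also remove the hang, but ConcurrentHashMap avoids serializing all readers on a single lock.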

    People

    • Assignee: Alex Levenson
    • Reporter: Tony Yan
    • Votes: 0
    • Watchers: 4
