Hadoop HDFS / HDFS-17024

Potential data race introduced by HDFS-15865


Details

    • Reviewed

    Description

      After HDFS-15865, we found a client aborted due to an NPE.

      2023-04-10 16:07:43,409 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: ***** ABORTING region server kqhdp36,16020,1678077077562: Replay of WAL required. Forcing server shutdown *****
      org.apache.hadoop.hbase.DroppedSnapshotException: region: WAFER_ALL,16|CM RIE.MA1|CP1114561.18|PROC|0000,1625899466315.0fbdf0f1810efa9e68af831247e6555f.
              at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2870)
              at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2539)
              at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2511)
              at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2401)
              at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:613)
              at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:582)
              at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$1000(MemStoreFlusher.java:69)
              at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:362)
              at java.lang.Thread.run(Thread.java:748)
      Caused by: java.lang.NullPointerException
              at org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:880)
              at org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:781)
              at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:898)
              at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:850)
              at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:76)
              at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:105)
              at org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.finishClose(HFileWriterImpl.java:859)
              at org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.close(HFileWriterImpl.java:687)
              at org.apache.hadoop.hbase.regionserver.StoreFileWriter.close(StoreFileWriter.java:393)
              at org.apache.hadoop.hbase.regionserver.StoreFlusher.finalizeWriter(StoreFlusher.java:69)
              at org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:78)
              at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:1047)
              at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2349)
              at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2806)
      

      This is only possible if a data race happened. Filing this jira to fix the code and eliminate the data race.
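The NPE inside DataStreamer.waitForAckedSeqno is consistent with one thread clearing a shared reference while another thread is still between a null check and the dereference. The sketch below illustrates that check-then-act race pattern and a common defensive fix (read the field once into a local); all class and field names here are hypothetical and are not the actual DataStreamer code.

```java
// Hypothetical sketch of a check-then-act data race, not actual DataStreamer code.
public class RaceSketch {
    // Shared mutable reference; a closing/aborting thread may null it out
    // concurrently with a reader.
    private volatile Exception lastException = new Exception("pending ack failure");

    // Racy: the field is read twice. If another thread nulls it between the
    // null check and getMessage(), this throws NullPointerException.
    String racyMessage() {
        if (lastException != null) {
            return lastException.getMessage();
        }
        return "no error";
    }

    // Defensive: read the shared field exactly once into a local variable,
    // then only use the local. The local cannot change underneath us.
    String safeMessage() {
        Exception e = lastException;
        return (e != null) ? e.getMessage() : "no error";
    }

    // Simulates the concurrent writer (e.g. a stream being closed/aborted).
    void clear() {
        lastException = null;
    }

    public static void main(String[] args) {
        RaceSketch s = new RaceSketch();
        System.out.println(s.safeMessage()); // prints "pending ack failure"
        s.clear();
        System.out.println(s.safeMessage()); // prints "no error"
    }
}
```

The single-read local-copy idiom only removes the NPE window for this one access; if the invariant spans several fields, proper synchronization (a lock or a single immutable state object swapped atomically) is needed instead.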


            People

              stonedot Segawa Hiroaki
              weichiu Wei-Chiu Chuang
              Votes: 0
              Watchers: 4
