Apache Ozone / HDDS-7593 Supporting HSync and lease recovery / HDDS-10361

[hsync] Output stream should support direct byte buffer


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Parent: HDDS-7593

    Description

      I'm trying to cherry-pick HDDS-9843, "Ozone client high memory (heap) utilization" (#6153), from master to the HDDS-7593 dev branch, but it fails with this error:

      Failed to flush. error: null
      java.lang.UnsupportedOperationException
      	at java.nio.ByteBuffer.array(ByteBuffer.java:994)
      	at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.appendLastChunkBuffer(BlockOutputStream.java:858)
      	at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.updateBlockDataForWriteChunk(BlockOutputStream.java:814)
      	at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunkToContainer(BlockOutputStream.java:769)
      	at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunk(BlockOutputStream.java:565)
      	at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFlushInternal(BlockOutputStream.java:598)
      	at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFlush(BlockOutputStream.java:573)
      	at org.apache.hadoop.hdds.scm.storage.RatisBlockOutputStream.hsync(RatisBlockOutputStream.java:139)
      	at org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.hsync(BlockOutputStreamEntry.java:158)
      	at org.apache.hadoop.ozone.client.io.KeyOutputStream.handleStreamAction(KeyOutputStream.java:551)
      	at org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:514)
      	at org.apache.hadoop.ozone.client.io.KeyOutputStream.hsync(KeyOutputStream.java:484)
      	at org.apache.hadoop.ozone.client.io.OzoneOutputStream.hsync(OzoneOutputStream.java:118)
      	at org.apache.hadoop.fs.ozone.OzoneFSOutputStream.hsync(OzoneFSOutputStream.java:70)
      	at org.apache.hadoop.fs.ozone.OzoneFSOutputStream.hflush(OzoneFSOutputStream.java:65)
      	at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:136)
      	at org.apache.hadoop.hbase.io.asyncfs.WrapperAsyncFSOutput.flush0(WrapperAsyncFSOutput.java:92)
      	at org.apache.hadoop.hbase.io.asyncfs.WrapperAsyncFSOutput.lambda$flush$0(WrapperAsyncFSOutput.java:113)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      	at java.lang.Thread.run(Thread.java:748)
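      For context, a minimal standalone illustration (not code from this ticket) of why the trace ends in UnsupportedOperationException: ByteBuffer.array() only exposes a backing byte[] for heap buffers, while direct buffers report hasArray() == false and throw when array() is called.

      import java.nio.ByteBuffer;

      // Illustrative only: heap vs. direct buffer behavior of array().
      public class DirectBufferArrayDemo {
        public static void main(String[] args) {
          ByteBuffer heap = ByteBuffer.allocate(16);
          System.out.println(heap.hasArray());      // true; heap.array() is safe

          ByteBuffer direct = ByteBuffer.allocateDirect(16);
          System.out.println(direct.hasArray());    // false
          direct.array();                           // throws UnsupportedOperationException
        }
      }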
      

      The incremental chunk list feature assumes the data sits in a heap byte buffer (appendLastChunkBuffer calls ByteBuffer.array()), but HDDS-9843 switches the client to direct byte buffers, on which array() throws UnsupportedOperationException. The output stream should support direct byte buffers as well.
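      Below is a sketch of the kind of buffer-agnostic copy that avoids the failure; the class and method names here are illustrative, not the actual BlockOutputStream code. The idea is to check hasArray() before touching the backing array and fall back to a bulk get() for direct buffers.

      import java.nio.ByteBuffer;

      final class ByteBufferCopy {
        // Copies the remaining bytes of buf without assuming a heap buffer.
        static byte[] readBytes(ByteBuffer buf) {
          byte[] bytes = new byte[buf.remaining()];
          if (buf.hasArray()) {
            // Heap buffer: copy straight from the backing array.
            System.arraycopy(buf.array(), buf.arrayOffset() + buf.position(),
                bytes, 0, bytes.length);
          } else {
            // Direct buffer: read through a duplicate so the caller's position is untouched.
            buf.duplicate().get(bytes);
          }
          return bytes;
        }
      }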


          People

            Assignee: Wei-Chiu Chuang (weichiu)
            Reporter: Wei-Chiu Chuang (weichiu)
            Votes: 0
            Watchers: 1

