HBASE-8143: HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 0.98.0, 0.94.7, 0.95.0
    • Fix Version/s: 0.98.0, 0.96.1
    • Component/s: hadoop2
    • Labels:
      None
    • Hadoop Flags:
      Reviewed
    • Release Note:
      Committed 0.96 and trunk. Thanks for reviews.

      Description

      We've run into an issue with HBase 0.94 on Hadoop 2, with SSR turned on, where the memory usage of the HBase process grows to 7g on an -Xmx3g heap after some time; this causes OOMs for the RSs.

      Upon further investigation, I've found that we end up with 200 regions, each having 3-4 store files open. Under Hadoop 2 SSR, BlockReaderLocal allocates DirectBuffers, unlike HDFS 1 where there is no direct buffer allocation.

      It seems that there are no guards against the memory used by local buffers in HDFS 2, and having a large number of open files causes multiple GB of memory to be consumed by the RS process.

      This issue is to investigate further what is going on: whether we can limit the memory usage in HDFS or HBase, and/or document the setup.

      Possible mitigation scenarios are (a minimal config sketch follows this list):

      • Turn off SSR for Hadoop 2
      • Ensure that there is enough unallocated memory for the RS based on the expected number of store files
      • Ensure that there is a lower number of regions per region server (hence fewer open files)
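
      For illustration only, a minimal hbase-site.xml sketch of the first mitigation (disabling short circuit reads). The property name is the standard HDFS client key discussed in the comments below; the alternative of shrinking dfs.client.read.shortcircuit.buffer.size is also covered there:

        <property>
          <name>dfs.client.read.shortcircuit</name>
          <value>false</value>
        </property>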

      Stack trace:

      org.apache.hadoop.hbase.DroppedSnapshotException: region: IntegrationTestLoadAndVerify,yC^P\xD7\x945\xD4,1363388517630.24655343d8d356ef708732f34cfe8946.
              at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1560)
              at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1439)
              at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1380)
              at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:449)
              at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushOneForGlobalPressure(MemStoreFlusher.java:215)
              at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$500(MemStoreFlusher.java:63)
              at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:237)
              at java.lang.Thread.run(Thread.java:662)
      Caused by: java.lang.OutOfMemoryError: Direct buffer memory
              at java.nio.Bits.reserveMemory(Bits.java:632)
              at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:97)
              at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
              at org.apache.hadoop.hdfs.util.DirectBufferPool.getBuffer(DirectBufferPool.java:70)
              at org.apache.hadoop.hdfs.BlockReaderLocal.<init>(BlockReaderLocal.java:315)
              at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:208)
              at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790)
              at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888)
              at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455)
              at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
              at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689)
              at java.io.DataInputStream.readFully(DataInputStream.java:178)
              at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:312)
              at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:543)
              at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:589)
              at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1261)
              at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:512)
              at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:603)
              at org.apache.hadoop.hbase.regionserver.Store.validateStoreFile(Store.java:1568)
              at org.apache.hadoop.hbase.regionserver.Store.commitFile(Store.java:845)
              at org.apache.hadoop.hbase.regionserver.Store.access$500(Store.java:109)
              at org.apache.hadoop.hbase.regionserver.Store$StoreFlusherImpl.commit(Store.java:2209)
              at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1541)
      
      Attachments

      1. OpenFileTest.java (4 kB) - Enis Soztutar
      2. 8143v2.txt (8 kB) - stack
      3. 8143v2.094.txt (7 kB) - stack
      4. 8143doc.txt (5 kB) - stack
      5. 8143.hbase-default.xml.txt (2 kB) - stack

          Activity

          stack added a comment -

          Released in 0.96.1. Issue closed.

          stack added a comment -

          Patch for 0.94.

          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK #4700 (See https://builds.apache.org/job/HBase-TRUNK/4700/)
          HBASE-8143 HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM (stack: rev 1545852)

          • /hbase/trunk/hbase-common/src/main/resources/hbase-default.xml
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
          Hudson added a comment -

          SUCCESS: Integrated in hbase-0.96 #205 (See https://builds.apache.org/job/hbase-0.96/205/)
          HBASE-8143 HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM (stack: rev 1545853)

          • /hbase/branches/0.96/hbase-common/src/main/resources/hbase-default.xml
          • /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
          • /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
          • /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #853 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/853/)
          HBASE-8143 HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM (stack: rev 1545852)

          • /hbase/trunk/hbase-common/src/main/resources/hbase-default.xml
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
          Hudson added a comment -

          FAILURE: Integrated in hbase-0.96-hadoop2 #133 (See https://builds.apache.org/job/hbase-0.96-hadoop2/133/)
          HBASE-8143 HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM (stack: rev 1545853)

          • /hbase/branches/0.96/hbase-common/src/main/resources/hbase-default.xml
          • /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
          • /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
          • /hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
          Enis Soztutar added a comment -

          +1. This looks good.

          Elliott Clark added a comment -

          +1 lgtm should be safe and respect the dfs settings.

          stack added a comment -

           Enis Soztutar Any chance of a review on this one? I'd like to commit for 0.96.1. Thanks.

          stack added a comment -

          Enis Soztutar A review boss please.

          Liang Xie Looks like Lars got you over in HDFS-5461

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12613550/8143v2.txt
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          -1 site. The patch appears to cause mvn site goal to fail.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7837//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7837//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7837//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7837//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7837//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7837//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7837//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7837//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7837//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7837//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7837//console

          This message is automatically generated.

          Liang Xie added a comment -

           FYI. I filed https://issues.apache.org/jira/browse/HDFS-5461, which I hope will alleviate this issue as well. Any comments are welcome.

          stack added a comment -

          Implement Enis's suggestion on how to set SSR buffer value.

          Enis Soztutar added a comment -

          For MR especially, deployments usually add the whole hadoop conf dir to the classpath, no? I think bigtop also does this. In this case, we would like to take the value from hdfs-site.

          What if it is not the default and still too large or if the default value changes (we can't read hdfs-side configs)?

          Luckily, dfs.client.read.shortcircuit.buffer.size is not set in hdfs-default.xml. From the FSUtils code, can we call conf.setIfUnset(), would that work?
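
           A minimal sketch of that suggestion (illustrative only; setIfUnset() is the standard Hadoop Configuration method, and 128k is just the value discussed in this issue), so anything already set via an hdfs-site.xml on the classpath would win:

             // Hypothetical HBase setup code (FSUtils-like): only apply our default
             // when the deploy has not already configured the SSR buffer size.
             conf.setIfUnset("dfs.client.read.shortcircuit.buffer.size",
                 Integer.toString(128 * 1024)); // 128k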

          stack added a comment -

          Now thinking about it, will this override the conf set in hdfs-site.xml? If so, maybe we should custom code it in our dfs client layer.

          How would you suggest it work Enis?

          Read the configuration, if it is the default, set it to the hbase value (the hbase value would have to be named something else)? What if it is not the default and still too large or if the default value changes (we can't read hdfs-side configs)? There is no hdfs-site.xml on serverside in most deploys, right?

          Enis Soztutar added a comment -

          Now thinking about it, will this override the conf set in hdfs-site.xml? If so, maybe we should custom code it in our dfs client layer.

          stack added a comment -

          Setting dfs.client.read.shortcircuit.buffer.size in hbase-default.xml.

          How is this? Should backport it too.
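
           For reference, a sketch of what such an hbase-default.xml entry could look like (illustrative only; 131072 = 128k is the value recommended elsewhere in this issue, not necessarily what the committed patch chose):

             <property>
               <name>dfs.client.read.shortcircuit.buffer.size</name>
               <value>131072</value>
               <description>Lowered from the 1MB HDFS default so that many open
                 store files do not exhaust direct memory when SSR is enabled.</description>
             </property>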

          stack added a comment -

           Lars Hofhansl Seems to work for me when I try it by setting the config in hbase-site.xml (previously I was getting OOME). In HBase, we go via an HFileSystem. When we create this in the regionserver, we pass the RS Configuration, which will be laden w/ the content of the hbase*.xml files... so the dfs config read from hbase-default.xml will be present. In DistributedFileSystem, it creates a DFSClient passing its conf on 'initialize'.
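
           A rough sketch of the flow described above (illustrative, not the actual HRegionServer code; HBaseConfiguration.create() and FileSystem.get() are the standard APIs):

             Configuration conf = HBaseConfiguration.create(); // loads hbase-default.xml + hbase-site.xml
             // any dfs.* keys present in those files are now part of this Configuration
             FileSystem fs = FileSystem.get(conf);             // ends up in DistributedFileSystem.initialize(conf)
             // the DFSClient created inside initialize() reads
             // "dfs.client.read.shortcircuit.buffer.size" from that same Configuration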

          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #803 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/803/)
          HBASE-8143 HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM; DOC HOW TO AVOID (stack: rev 1534504)

          • /hbase/trunk/src/main/docbkx/performance.xml
          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK #4635 (See https://builds.apache.org/job/HBase-TRUNK/4635/)
          HBASE-8143 HBase on Hadoop 2 with local short circuit reads (ssr) causes OOM; DOC HOW TO AVOID (stack: rev 1534504)

          • /hbase/trunk/src/main/docbkx/performance.xml
          Lars Hofhansl added a comment -

          In hbase-site.xml we'd have to add a new parameter, and then set it ourselves on the DFSClient that we use, or maybe I am missing something? Can we put arbitrary HDFS params in hbase-site.xml? The DFSClient won't read that, I think.

          Enis Soztutar added a comment -

          Doc looks fine, but we should change the default in hbase-default.xml I think. The problem last time was to perf test this, and find a meaningful default.

          stack added a comment -

          I committed the attached doc. patch; better than nothing.

          stack added a comment -

          Or just put into hbase-site.xml; that'll do.

          Lars Hofhansl added a comment -

           This should be added to hdfs-site.xml as seen at the region server, right?

          stack added a comment -

           It's a client-side config, no? Not for the hdfs side.

          Here is a bit of doc for the reference guide that recommends setting this down from its default size. Does this do? If so, I'll commit (I try to clean up the stale SSR section a little too).

          Lars Hofhansl added a comment -

          Removing from 0.94.
          Can we force HDFS default settings?

          stack added a comment -

           I ran into this issue recently and followed Lars' advice to fix it.

           dfs.client.read.shortcircuit.buffer.size set to 128k all around:

             <name>dfs.client.read.shortcircuit.buffer.size</name>
             <value>131072</value>

          We should add this to our default configs rather than let folks run into OOMEs.

          stack added a comment -

          Just saying we will have to balance this sizing amongst the different needs. 4k or 8k might work for the local block reader but might not be appropriate for something like HBASE-9535 (or any other feature we'd want to do off-heap).

          Lars Hofhansl added a comment -

           With a reasonable buffer size it should be OK; 1mb is clearly counterproductive.
           It's on my (long) list of things to test with a really small buffer size (like 4 or 8k) and see the impact of that.

          At work we have this set to 128k and that has been working well.

          stack added a comment -

           There is no way of getting a direct byte buffer w/o it being counted against the commit charge for the process? It's a pity given we are just doing read-only.

           All of this off-heap allocation will impinge on our being able to use off-heap for other purposes.

          Lars Hofhansl added a comment -

          The complicating factor is that each reader potentially would have a different buffer size (you'll want that larger than the HFile's block size), so it's hard to default this correctly.

          Lars Hofhansl added a comment -

          I like that. We can introduce a hbase.client.direct.buffer.multiplier (or something, please suggest a better name), defaulted to (say) 2. Then we use that to set dfs.client.read.shortcircuit.buffer.size automatically.

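           A minimal sketch of that idea, assuming a hypothetical hbase.client.direct.buffer.multiplier property (neither the name nor the defaults below are real HBase configuration):

             // Hypothetical: derive the SSR buffer size from a typical HFile block size.
             int multiplier = conf.getInt("hbase.client.direct.buffer.multiplier", 2);
             int hfileBlockSize = 64 * 1024; // common HFile block size
             conf.setInt("dfs.client.read.shortcircuit.buffer.size",
                 multiplier * hfileBlockSize);
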
          Enis Soztutar added a comment -

          Turns out it is difficult to get hard numbers.

           Agreed. I think we suggest setting that to a multiple (1-4) of typical block sizes. This param is a client-side parameter, so we can put a meaningful default in hbase-default.xml, wdyt?

          Lars Hofhansl added a comment -

           Anyway, since there is nothing really to do on the HBase side other than documenting a better default (iff short circuit reads are enabled), I'm pushing this to 0.94.9.

          Lars Hofhansl added a comment -

          Turns out it is difficult to get hard numbers.
          We do know that with dfs.client.read.shortcircuit.buffer.size=1m we can run into problems, so we must set it to something smaller in HBase.

          128k seems like a good compromise to document.

          Lars Hofhansl added a comment -

          I will run some tests on our test cluster and report back.

          Enis Soztutar added a comment -

          Raising this to critical. Will get back to this for sure.

          Enis Soztutar added a comment -

           Not yet, but this is on my radar. We know that the issue is with the buffer size. We just have to test with a smaller size to see whether there is any performance impact.

          Lars Hofhansl added a comment -

          Moving out to 0.94.8

          Lars Hofhansl added a comment -

          Hey Enis Soztutar, did you get a chance to repeat your tests with a smaller buffer size?

          ramkrishna.s.vasudevan added a comment -

          test with smaller buffer, and maybe change the default/recommended configuration in HBase.

          +1.

          Enis Soztutar added a comment -

          It is as much an HBase issue as we should have clear recommendations as to how HDFS should be configured.

          Agreed. I intended to keep this issue open, and create a corresponding HDFS issue. After getting some resolution on the hdfs one, depending on the outcome, we can document the setup, test with smaller buffer, and maybe change the default/recommended configuration in HBase.

          Lars Hofhansl added a comment -

          It is as much an HBase issue as we should have clear recommendations as to how HDFS should be configured.
          An interesting test would be how HBase would perform if we made the buffer much smaller (like 64k, which should suffice).

          Liang Xie added a comment -

          nice case, thanks Enis Soztutar and Lars Hofhansl for explaining, Orz

          Enis Soztutar added a comment -

          Should this test file be attached to an HDFS JIRA ?

          I am waiting for our dfs folks to discuss some options. Will create an HDFS issue after.

          Ted Yu added a comment -

          Apart from HBaseConfiguration, I don't see code specific to HBase.

          Should this test file be attached to an HDFS JIRA ?

          Enis Soztutar added a comment -

           Attaching simple test code. If you run this against Hadoop-2.0.3-alpha, with SSR on, and -Xmx=1g, -XX:MaxDirectMemorySize=1g, you will see:

          numFiles: 940
          numFiles: 950
          numFiles: 960
          numFiles: 970
          numFiles: 980
          Exception in thread "pool-2-thread-14" java.lang.OutOfMemoryError: Direct buffer memory
          	at java.nio.Bits.reserveMemory(Bits.java:632)
          	at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:97)
          	at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
          	at org.apache.hadoop.hdfs.util.DirectBufferPool.getBuffer(DirectBufferPool.java:59)
          	at org.apache.hadoop.hdfs.BlockReaderLocal.<init>(BlockReaderLocal.java:315)
          	at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:208)
          	at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790)
          	at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888)
          	at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455)
          	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
          	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689)
          	at java.io.DataInputStream.read(DataInputStream.java:132)
          	at org.apache.hadoop.hbase.OpenFileTest.readFully(OpenFileTest.java:131)
          	at org.apache.hadoop.hbase.OpenFileTest$FileCreater.createAndOpenFile(OpenFileTest.java:74)
          	at org.apache.hadoop.hbase.OpenFileTest$FileCreater.run(OpenFileTest.java:57)
          	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
          	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
          	at java.lang.Thread.run(Thread.java:680)
          
          Enis Soztutar added a comment -

           I was able to repro this using a very simple test. BlockReaderLocal just allocates 1M of direct buffer by default, and thus >3000 open blocks cause 3g of memory allocation (assuming no checksum).

          These are the configurations.

            public static final String DFS_CLIENT_READ_SHORTCIRCUIT_KEY = "dfs.client.read.shortcircuit";
            public static final boolean DFS_CLIENT_READ_SHORTCIRCUIT_DEFAULT = false;
            public static final String DFS_CLIENT_READ_SHORTCIRCUIT_SKIP_CHECKSUM_KEY = "dfs.client.read.shortcircuit.skip.checksum";
            public static final boolean DFS_CLIENT_READ_SHORTCIRCUIT_SKIP_CHECKSUM_DEFAULT = false;
            public static final String DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_KEY = "dfs.client.read.shortcircuit.buffer.size";
            public static final int DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_DEFAULT = 1024 * 1024;
          

          Although the buffers are allocated in a pool with weak references, in HBase, we keep the streams open, and thus cause the inflation. There is no guard against allocating the buffers in DFSClient or BlockReaderLocal.

           Decreasing the size of the buffers (dfs.client.read.shortcircuit.buffer.size) and not having that many open files should help in this case. It is not clear that the extra buffering in Hadoop 2 helps for reads coming from HBase.
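
           (Rough arithmetic for scale, not from the discussion itself: at the 1 MB default, ~3000 open local block readers account for about 3 GB of direct memory, while at 128k the same ~3000 readers come to roughly 375 MB.)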

          Liang Xie added a comment -

           Yes, you can find the following code in the HotSpot source (globals.hpp):

            product(intx, MaxDirectMemorySize, -1,                                    \
                    "Maximum total size of NIO direct-buffer allocations") 
          

           Then, as you know, it will go to this code branch:

                      if (s.equals("-1")) {
                          // -XX:MaxDirectMemorySize not given, take default
                          directMemory = Runtime.getRuntime().maxMemory();
                      }
          

          and maxMemory comes from jvm.cpp:

          JVM_ENTRY_NO_ENV(jlong, JVM_MaxMemory(void))
            JVMWrapper("JVM_MaxMemory");
            size_t n = Universe::heap()->max_capacity();
            return convert_size_t_to_jlong(n);
          JVM_END
          

           and max_capacity == Xmx - one of the survivor spaces (per collectedHeap.hpp)

           Hope it's helpful; sorry for my poor English. It seems this is drifting a little away from the original JIRA...

          Lars Hofhansl added a comment -

           In the JDK I also only see a single spot where VM.directMemory is set, and it will only change its value if explicitly specified (either through the command line or as a system property; one can set this to -1 in order to have it equal the heap size).

           I wonder what the rationale behind 1MB is on the Hadoop side. Typically I would have expected something like 64k or even 8k.
           The default bytes-per-checksum seems to be 512 bytes, so a 64k buffer is plenty.

          Enis Soztutar, is it possible to repeat your test with something as little as 4k and then with 64k to gauge the performance impact?

          Liang Xie added a comment -

           By the way, there is indeed a comment bug in the above JDK 7 snippet:

          otherwise to Runtime.getRuntime.maxDirectMemory().
          

           I was going to file an issue on the OpenJDK mailing list, but found it was already fixed in the JDK 8 branch:

          Runtime.getRuntime().maxMemory()
          
          Liang Xie added a comment -

           If the buffer size gets smaller, then the FileChannel's read() will be invoked more frequently; seems like just a trade-off?

           Ted Yu added a comment -

           BlockReaderLocal always assumes bufferPool.getBuffer() returns ByteBuffer. See https://issues.apache.org/jira/browse/HDFS-4530?focusedCommentId=13606775&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13606775

          Lars Hofhansl added a comment -

          Here's the code from VM.java. (JDK7)

              // A user-settable upper limit on the maximum amount of allocatable direct
              // buffer memory.  This value may be changed during VM initialization if
              // "java" is launched with "-XX:MaxDirectMemorySize=<size>".
              //
              // The initial value of this field is arbitrary; during JRE initialization
              // it will be reset to the value specified on the command line, if any,
              // otherwise to Runtime.getRuntime.maxDirectMemory().
              //
              private static long directMemory = 64 * 1024 * 1024;
          

           600-800 store files, even at 1M each, only add up to 800MB max.

          It seems to me that we should recommend a smaller buffer size. 1MB seems pretty large if many files are open.

          Enis Soztutar added a comment -

          if you did not set MaxDirectMemorySize option explicitly, then it will be equal to -Xmx, that means -XX:MaxDirectMemorySize == -Xmx == 3g for Enis's case.

          In this case, I was not even aware of Hdfs 2 allocating direct memory. I did not spend a lot of time on the hadoop 2 code base.

          In BlockReaderLocal the short circuit buffer size is configurable with "dfs.client.read.shortcircuit.buffer.size". It does default to 1MB, so something does not quite add up

          Let me check that code path to understand who is using that parameter and how.

          Liang Xie added a comment -

          if you did not set MaxDirectMemorySize option explicitly, then it will be equal to -Xmx, that means -XX:MaxDirectMemorySize == -Xmx == 3g for Enis's case.

          Lars Hofhansl added a comment -

          We keep a reader open for every store file. Looks like the default for MaxDirectMemorySize is 64MB.

          Bits.reserveMemory actually triggers a full GC if it cannot reserve enough bytes. While this is pretty terrible (IMHO), in this case it proves that these are not leftover buffers from previous invocations, but that they are actually being actively used.

          600-800 store files should put the direct memory consumption per reader at ~80-100k.

          In BlockReaderLocal the short circuit buffer size is configurable with "dfs.client.read.shortcircuit.buffer.size". It does default to 1MB, so something does not quite add up.

          Liang Xie added a comment -

           To be honest, I am not sure which one is more suitable; I am not a native speaker and cannot distinguish those two exactly...

          Ted Yu added a comment -

          @Liang:
          Thanks for the reference.

           Should HDFS-4530 be a bug instead of an improvement?

          Liang Xie added a comment -

           I submitted a trivial patch in HDFS-4530; both I and Colin Patrick McCabe believe it's probably not the root cause, but it could reduce the risk. Just for your reference.

          Lars Hofhansl added a comment -

           Todd Lipcon FYI.

          Ted Yu added a comment -

           Have you tried the following JVM directive (the actual value may differ from 256M)?

          -XX:MaxDirectMemorySize=256M


            People

            • Assignee: stack
            • Reporter: Enis Soztutar
            • Votes: 0
            • Watchers: 24
