  Apache Trafodion (Retired) / TRAFODION-2043

Bulk load may fail if bucket cache is configured and is large


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0-incubating, 2.1-incubating
    • Fix Version/s: 2.1-incubating
    • Component/s: sql-cmu
    • Labels: None
    • Environment: Potentially all; this particular example was seen on a 10-node cluster

    Description

      Bulk load may fail when HBase is configured to use the bucket cache. An example:

      SQL>LOAD WITH CONTINUE ON ERROR INTO TK.DEVICES SELECT * FROM HIVE.TK.DEVICES ;

      UTIL_OUTPUT
      --------------------------------------------------------------------------------------------------------------------------------
      Task: LOAD Status: Started Object: TRAFODION.TK.DEVICES
      Task: CLEANUP Status: Started Object: TRAFODION.TK.DEVICES
      Task: CLEANUP Status: Ended Object: TRAFODION.TK.DEVICES
      Task: PREPARATION Status: Started Object: TRAFODION.TK.DEVICES

          • ERROR[8448] Unable to access Hbase interface. Call to ExpHbaseInterface::addToHFile returned error HBASE_ADD_TO_HFILE_ERROR(-713). Cause:
            java.lang.OutOfMemoryError: Direct buffer memory
            java.nio.Bits.reserveMemory(Bits.java:658)
            java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
            java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
            org.apache.hadoop.hbase.util.ByteBufferArray.<init>(ByteBufferArray.java:65)
            org.apache.hadoop.hbase.io.hfile.bucket.ByteBufferIOEngine.<init>(ByteBufferIOEngine.java:47)
            org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getIOEngineFromName(BucketCache.java:307)
            org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.<init>(BucketCache.java:217)
            org.apache.hadoop.hbase.io.hfile.CacheConfig.getBucketCache(CacheConfig.java:614)
            org.apache.hadoop.hbase.io.hfile.CacheConfig.getL2(CacheConfig.java:553)
            org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:637)
            org.apache.hadoop.hbase.io.hfile.CacheConfig.<init>(CacheConfig.java:231)
            org.trafodion.sql.HBulkLoadClient.doCreateHFile(HBulkLoadClient.java:209)
            org.trafodion.sql.HBulkLoadClient.addToHFile(HBulkLoadClient.java:245)
            . [2016-06-09 00:31:55]

      The failure occurs because the bulk load client code uses a server-side API that requires a CacheConfig object, and that object configures itself from the settings in hbase-site.xml. In particular, if a large bucket cache is configured there, constructing the CacheConfig tries to instantiate that cache inside the client process, which can exceed the direct buffer memory we specify for Trafodion client servers.
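
      For reference, that kind of bucket cache is typically enabled in hbase-site.xml with settings like the following (these are real HBase property names, but the size shown is only illustrative, not taken from the failing cluster):

        <!-- illustrative hbase-site.xml fragment -->
        <property>
          <name>hbase.bucketcache.ioengine</name>
          <value>offheap</value>   <!-- off-heap cache backed by direct byte buffers -->
        </property>
        <property>
          <name>hbase.bucketcache.size</name>
          <value>20480</value>     <!-- capacity in MB; a CacheConfig built from this conf tries to reserve it up front -->
        </property>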

      The fix is either to avoid using the cache at all, or to unset the bucket cache property before constructing the CacheConfig object; a sketch of the latter approach follows.
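
      A minimal sketch of the second option, assuming we simply drop the bucket cache settings from a private copy of the configuration (the helper class and method names here are made up for illustration; this is not the committed patch):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hbase.io.hfile.CacheConfig;

        public final class ClientCacheConfig {
          private ClientCacheConfig() {}

          // Build a CacheConfig that ignores any bucket cache configured in hbase-site.xml,
          // so the client JVM never tries to reserve a server-sized block of direct memory.
          public static CacheConfig withoutBucketCache(Configuration serverConf) {
            Configuration conf = new Configuration(serverConf); // copy; leave the shared conf untouched
            conf.unset("hbase.bucketcache.ioengine");           // no IO engine => no bucket cache is instantiated
            conf.unset("hbase.bucketcache.size");               // drop the (possibly huge) capacity as well
            return new CacheConfig(conf);                       // only the ordinary on-heap LRU cache is set up
          }
        }

      Passing a CacheConfig built this way into the HFile writer created in HBulkLoadClient.doCreateHFile would leave the HFile generation path unchanged while avoiding the direct buffer allocation that fails above.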


            People

              Assignee: Dave Birdsall
              Reporter: Dave Birdsall
