Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
Description
We are looking at sources of elevated GC pressure in production. During some live heap analysis we noticed that the KeyRange class is an outlier in the number of instances found on the live heap. I'm wondering if there is some opportunity for object/allocation reuse here? Or perhaps it can be converted into something amenable to escape analysis, so we get stack allocations instead of heap allocations?
This is Phoenix 4.13.0 on HBase 0.98.24:
 num     #instances         #bytes  class name
----------------------------------------------
   1:     189592127    20432451240  [B
   2:      77390411     1857369864  org.apache.phoenix.query.KeyRange
   3:      14732411      844013608  [C
   4:      15034590      481106880  java.util.HashMap$Node
   5:       2587783      433834912  [Ljava.lang.Object;
   6:       3336992      400439040  org.apache.hadoop.hbase.regionserver.ScanQueryMatcher
   7:       3336992      373743104  org.apache.hadoop.hbase.regionserver.StoreScanner
   8:      14729747      353513928  java.lang.String
   9:       5605941      269085168  java.util.TreeMap
  10:       6511408      208365056  java.util.concurrent.ConcurrentHashMap$Node
  11:       2250216      180176200  [Ljava.util.HashMap$Node;
  12:       5463124      174819968  org.apache.hadoop.hbase.KeyValue
  13:       5277319      168874208  java.util.Hashtable$Entry
  14:       3336992      160175616  org.apache.hadoop.hbase.regionserver.ScanDeleteTracker
  15:       1734848      138787840  org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$FastDiffSeekerState
  16:       1668418      120126096  org.apache.hadoop.hbase.client.Scan
  17:       2142254      119966224  org.apache.hadoop.hbase.regionserver.ScanWildcardColumnTracker

 num     #instances         #bytes  class name
----------------------------------------------
   1:     189774274    20451239624  [B
   2:      77446115     1858706760  org.apache.phoenix.query.KeyRange
   3:      14741777      844584352  [C
   4:      15043664      481397248  java.util.HashMap$Node
   5:       2591421      434232680  [Ljava.lang.Object;
   6:       3339244      400709280  org.apache.hadoop.hbase.regionserver.ScanQueryMatcher
   7:       3339244      373995328  org.apache.hadoop.hbase.regionserver.StoreScanner
   8:      14739076      353737824  java.lang.String
   9:       5609708      269265984  java.util.TreeMap
  10:       6513671      208437472  java.util.concurrent.ConcurrentHashMap$Node
  11:       2251693      180293648  [Ljava.util.HashMap$Node;
  12:       5477024      175264768  org.apache.hadoop.hbase.KeyValue
  13:       5277320      168874240  java.util.Hashtable$Entry
  14:       3339244      160283712  org.apache.hadoop.hbase.regionserver.ScanDeleteTracker
  15:       1759096      140727680  org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$FastDiffSeekerState
  16:       1669544      120207168  org.apache.hadoop.hbase.client.Scan
  17:       2143728      120048768  org.apache.hadoop.hbase.regionserver.ScanWildcardColumnTracker

 num     #instances         #bytes  class name
----------------------------------------------
   1:     189920309    20464274472  [B
   2:      77499190     1859980560  org.apache.phoenix.query.KeyRange
   3:      14748627      845142696  [C
   4:      15049176      481573632  java.util.HashMap$Node
   5:       2593838      434563512  [Ljava.lang.Object;
   6:       3340548      400865760  org.apache.hadoop.hbase.regionserver.ScanQueryMatcher
   7:       3340548      374141376  org.apache.hadoop.hbase.regionserver.StoreScanner
   8:      14745909      353901816  java.lang.String
   9:       5611921      269372208  java.util.TreeMap
  10:       6545786      209465152  java.util.concurrent.ConcurrentHashMap$Node
  11:       2252716      180374216  [Ljava.util.HashMap$Node;
  12:       5484841      175514912  org.apache.hadoop.hbase.KeyValue
  13:       5338662      170837184  java.util.Hashtable$Entry
  14:       3340548      160346304  org.apache.hadoop.hbase.regionserver.ScanDeleteTracker
  15:       1771616      141729280  org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$FastDiffSeekerState
  16:       1670196      120254112  org.apache.hadoop.hbase.client.Scan
  17:       2144590      120097040  org.apache.hadoop.hbase.regionserver.ScanWildcardColumnTracker
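One possible shape for the object-reuse idea above, as a minimal sketch only: a factory that interns immutable ranges in a ConcurrentHashMap, so repeated requests for the same bounds return one shared long-lived instance instead of retaining millions of equal duplicates. The Range class here is hypothetical and simplified, not Phoenix's actual KeyRange API.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of interning immutable key ranges. Equal bounds map
// to a single cached instance, shrinking the retained heap footprint.
final class Range {
    private static final Map<Range, Range> CACHE = new ConcurrentHashMap<>();

    private final byte[] lower;
    private final byte[] upper;

    private Range(byte[] lower, byte[] upper) {
        this.lower = lower;
        this.upper = upper;
    }

    // Factory: return the cached instance if an equal range already exists,
    // otherwise publish and return the new one.
    static Range of(byte[] lower, byte[] upper) {
        Range candidate = new Range(lower.clone(), upper.clone());
        Range cached = CACHE.putIfAbsent(candidate, candidate);
        return cached != null ? cached : candidate;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Range)) return false;
        Range other = (Range) o;
        return Arrays.equals(lower, other.lower)
            && Arrays.equals(upper, other.upper);
    }

    @Override
    public int hashCode() {
        return 31 * Arrays.hashCode(lower) + Arrays.hashCode(upper);
    }
}
```

Caveats: a cache hit still allocates a short-lived candidate object (cheap for the young-gen collector, but not free), and a real implementation would need a bounded or weakly-referenced cache so rarely-used ranges don't accumulate forever.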
Attachments
Issue Links
- is duplicated by PHOENIX-4460: High GC / RS shutdown when we use select query with "IN" clause using 4.10 phoenix client on 4.13 phoenix server (Resolved)