Apache HAWQ / HAWQ-947

set work_mem cannot work


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.1.0.0-incubating
    • Fix Version/s: 2.3.0.0-incubating
    • Component/s: Core
    • Labels: None

    Description

      HAWQ version is 2.0.1.0, build dev.
      EXPLAIN ANALYZE reports:
      Work_mem: 9554K bytes max, 63834K bytes wanted.
      I then set work_mem to '512MB', but it does not take effect.

      test=# EXPLAIN ANALYZE SELECT count(DISTINCT item_sku_id)
      test-# FROM gdm_m03_item_sku_da
      test-# WHERE item_origin ='中国大陆';
                                                                   QUERY PLAN
      ----------------------------------------------------------------------------------------------------------------------------------
       Aggregate  (cost=54177150.69..54177150.70 rows=1 width=8)
         Rows out:  Avg 1.0 rows x 1 workers.  Max/Last(seg-1:BJHC-HEBE-9014.hadoop.jd.local/seg-1:BJHC-HEBE-9014.hadoop.jd.local) 1/1 rows with 532498/532498 ms to end, start offset by 201/201 ms.
         ->  Gather Motion 306:1  (slice2; segments: 306)  (cost=54177147.60..54177150.68 rows=1 width=8)
               Rows out:  Avg 306.0 rows x 1 workers at destination.  Max/Last(seg-1:BJHC-HEBE-9014.hadoop.jd.local/seg-1:BJHC-HEBE-9014.hadoop.jd.local) 306/306 rows with 529394/529394 ms to first row, 532498/532498 ms to end, start offset by 201/201 ms.
               ->  Aggregate  (cost=54177147.60..54177147.61 rows=1 width=8)
                     Rows out:  Avg 1.0 rows x 306 workers.  Max/Last(seg305:BJHC-HEBE-9031.hadoop.jd.local/seg258:BJHC-HEBE-9029.hadoop.jd.local) 1/1 rows with 530367/532274 ms to end, start offset by 396/246 ms.
                     Executor memory:  9554K bytes avg, 9554K bytes max (seg305:BJHC-HEBE-9031.hadoop.jd.local).
                     Work_mem used:  9554K bytes avg, 9554K bytes max (seg305:BJHC-HEBE-9031.hadoop.jd.local).
                     Work_mem wanted: 63695K bytes avg, 63834K bytes max (seg296:BJHC-HEBE-9031.hadoop.jd.local) to lessen workfile I/O affecting 306 workers.
                     ->  Redistribute Motion 306:306  (slice1; segments: 306)  (cost=0.00..53550018.97 rows=819776 width=11)
                           Hash Key: gdm_m03_item_sku_da.item_sku_id
                            Rows out:  Avg 820083.0 rows x 306 workers at destination.  Max/Last(seg296:BJHC-HEBE-9031.hadoop.jd.local/seg20:BJHC-HEBE-9016.hadoop.jd.local) 821880/818660 rows with 769/771 ms to first row, 524681/525063 ms to end, start offset by 352/307 ms.
                           ->  Append-only Scan on gdm_m03_item_sku_da  (cost=0.00..48532990.00 rows=819776 width=11)
                                 Filter: item_origin::text = '中国大陆'::text
                                  Rows out:  Avg 820083.0 rows x 306 workers.  Max/Last(seg46:BJHC-HEBE-9017.hadoop.jd.local/seg5:BJHC-HEBE-9015.hadoop.jd.local) 893390/810582 rows with 28/127 ms to first row, 73062/526318 ms to end, start offset by 354/458 ms.
       Slice statistics:
         (slice0)    Executor memory: 1670K bytes.
         (slice1)    Executor memory: 3578K bytes avg x 306 workers, 4711K bytes max (seg172:BJHC-HEBE-9024.hadoop.jd.local).
         (slice2)  * Executor memory: 10056K bytes avg x 306 workers, 10056K bytes max (seg305:BJHC-HEBE-9031.hadoop.jd.local).  Work_mem: 9554K bytes max, 63834K bytes wanted.
       Statement statistics:
         Memory used: 262144K bytes
         Memory wanted: 64233K bytes
       Settings:  default_hash_table_bucket_number=6
       Dispatcher statistics:
         executors used(total/cached/new connection): (612/0/612); dispatcher time(total/connection/dispatch data): (489.036 ms/192.741 ms/293.357 ms).
         dispatch data time(max/min/avg): (37.798 ms/0.011 ms/3.504 ms); consume executor data time(max/min/avg): (0.016 ms/0.002 ms/0.005 ms); free executor time(max/min/avg): (0.000 ms/0.000 ms/0.000 ms).
       Data locality statistics:
         data locality ratio: 0.864; virtual segment number: 306; different host number: 17; virtual segment number per host(avg/min/max): (18/18/18); segment size(avg/min/max): (3435087582.693 B/3391891296 B/3489660928 B); segment size with penalty(avg/min/max): (3439751300.235 B/3422552064 B/3489660928 B); continuity(avg/min/max): (0.630/0.118/1.000); DFS metadatacache: 21.704 ms; resource allocation: 1.773 ms; datalocality calculation: 51.252 ms.
       Total runtime: 532774.799 ms
      (29 rows)
      
      Time: 532783.403 ms
      test=# show Work_mem;
       work_mem
      ----------
       512MB
      (1 row)
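
      A possible workaround (my assumption, not yet verified on this cluster): HAWQ 2.x appears to size statement memory from the resource manager's per-virtual-segment quota, hawq_rm_stmt_vseg_memory (set to 256mb in the attached hawq-site.xml), rather than from work_mem. Raising that quota for the session may be what actually changes the memory grant:

      ```sql
      -- Hypothetical workaround: raise the per-vseg statement quota instead of
      -- work_mem. hawq_rm_stmt_vseg_memory is the GUC already present in the
      -- attached hawq-site.xml; '1gb' is assumed to be an accepted value.
      SET hawq_rm_stmt_vseg_memory TO '1gb';
      EXPLAIN ANALYZE SELECT count(DISTINCT item_sku_id)
      FROM gdm_m03_item_sku_da
      WHERE item_origin = '中国大陆';
      ```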
      
      hawq-site.xml
      <configuration>
              <property>
                      <name>hawq_master_address_host</name>
                      <value>BJHC-HEBE-9014.hadoop.jd.local</value>
                      <description>The host name of hawq master.</description>
              </property>
      
              <property>
                      <name>hawq_master_address_port</name>
                      <value>5432</value>
                      <description>The port of hawq master.</description>
              </property>
      
              <property>
                      <name>hawq_standby_address_host</name>
                      <value>none</value>
                      <description>The host name of hawq standby master.</description>
              </property>
      
              <property>
                      <name>hawq_segment_address_port</name>
                      <value>40000</value>
                      <description>The port of hawq segment.</description>
              </property>
      
              <property>
                      <name>hawq_dfs_url</name>
                      <value>adhoc/user/hawq_default</value>
                      <description>URL for accessing HDFS.</description>
              </property>
      
              <property>
                      <name>hawq_master_directory</name>
                      <value>/data0/hawq/hawq-data-directory/masterdd</value>
                      <description>The directory of hawq master.</description>
              </property>
      
              <property>
                      <name>hawq_segment_directory</name>
                      <value>/data0/hawq/hawq-data-directory/segmentdd</value>
                      <description>The directory of hawq segment.</description>
              </property>
      
              <property>
                      <name>hawq_master_temp_directory</name>
                      <value>/data0/hawq/tmp</value>
                      <description>The temporary directory reserved for hawq master.</description>
              </property>
      
              <property>
                      <name>hawq_segment_temp_directory</name>
                      <value>/data0/hawq/tmp</value>
                      <description>The temporary directory reserved for hawq segment.</description>
              </property>
      
              <property>
                      <name>hawq_global_rm_type</name>
                      <value>none</value>
                      <description>The resource manager type to start for allocating resource.
      					 'none' means hawq resource manager exclusively uses whole
      					 cluster; 'yarn' means hawq resource manager contacts YARN
      					 resource manager to negotiate resource.
      		</description>
              </property>
      
              <property>
                      <name>hawq_rm_memory_limit_perseg</name>
                      <value>48GB</value>
                      <description>The limit of memory usage in a hawq segment when
      					 hawq_global_rm_type is set 'none'.
      		</description>
              </property>
      
              <property>
                      <name>hawq_rm_nvcore_limit_perseg</name>
                      <value>16</value>
                      <description>The limit of virtual core usage in a hawq segment when
      					 hawq_global_rm_type is set 'none'.
      		</description>
              </property>
      
              <property>
                      <name>hawq_rm_stmt_vseg_memory</name>
                      <value>256mb</value>
              </property>
      
              <property>
                      <name>hawq_re_cpu_enable</name>
                      <value>false</value>
                      <description>The control to enable/disable CPU resource enforcement.</description>
              </property>
      
              <property>
                      <name>hawq_re_cgroup_mount_point</name>
                      <value>/sys/fs/cgroup</value>
                      <description>The mount point of CGroup file system for resource enforcement.
      					 For example, /sys/fs/cgroup/cpu/hawq for CPU sub-system.
      		</description>
              </property>
      
              <property>
                      <name>hawq_re_cgroup_hierarchy_name</name>
                      <value>hawq</value>
                      <description>The name of the hierarchy to accomodate CGroup directories/files for resource enforcement.
      					 For example, /sys/fs/cgroup/cpu/hawq for CPU sub-system.
      		</description>
              </property>
      
              <property>
                      <name>default_hash_table_bucket_number</name>
                      <value>6</value>
              </property>
      
              <property>
                      <name>log_min_error_statement</name>
                      <value>DEBUG5</value>
              </property>
      
              <property>
                      <name>hawq_rm_nvseg_perquery_limit</name>
                      <value>512</value>
              </property>
      
              <property>
                      <name>hawq_rm_nvseg_perquery_perseg_limit</name>
                      <value>18</value>
              </property>
      
              <property>
                      <name>shared_buffers</name>
                      <value>256MB</value>
              </property>
      
      </configuration>
      

      Total segment instance count from config file = 17
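
      The figures above are consistent with the per-vseg quota, not work_mem, determining the grant; a quick cross-check using values copied from the plan and hawq-site.xml (assuming 1MB = 1024K):

      ```sql
      -- "Memory used: 262144K bytes" equals hawq_rm_stmt_vseg_memory (256mb)
      -- exactly, and 306 virtual segments over 17 hosts gives 18 per host,
      -- matching hawq_rm_nvseg_perquery_perseg_limit = 18.
      SELECT 256 * 1024 AS stmt_quota_kb,   -- 262144
             306 / 17   AS vsegs_per_host;  -- 18
      ```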

      Thanks


          People

            Assignee: Lei Chang (lei_chang)
            Reporter: Biao Wu (bill)
