Apache IoTDB / IOTDB-5597

[Query] Error when creating FragmentInstanceExecution: "The system can't allow more query tasks."


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: Core/Query, mpp-cluster
    • Labels: None
    • Sprint: 2023-2-Query

    Description

      master_0228_fc8d05b
      1C1D, 30,000 devices, 10 data regions, 1 time partition (seq/unseq data)
      One query:
      select min_time(s_0),max_time(s_0),count(s_0) from root.** align by device;

      2023-02-28 17:40:47,808 [pool-47-IoTDB-ClientRPC-Processor-2$20230228_094037_00001_1.6.0] WARN o.a.i.d.m.e.f.FragmentInstanceManager:154 - error when create FragmentInstanceExecution.
      java.lang.IllegalStateException: The system can't allow more query tasks.
      at com.google.common.base.Preconditions.checkState(Preconditions.java:502)
      at org.apache.iotdb.db.mpp.execution.schedule.queue.IndexedBlockingReserveQueue.push(IndexedBlockingReserveQueue.java:57)
      at org.apache.iotdb.db.mpp.execution.schedule.DriverScheduler.submitTaskToReadyQueue(DriverScheduler.java:246)
      at org.apache.iotdb.db.mpp.execution.schedule.DriverScheduler.submitDrivers(DriverScheduler.java:220)
      at org.apache.iotdb.db.mpp.execution.fragment.FragmentInstanceExecution.createFragmentInstanceExecution(FragmentInstanceExecution.java:65)
      at org.apache.iotdb.db.mpp.execution.fragment.FragmentInstanceManager.lambda$execDataQueryFragmentInstance$2(FragmentInstanceManager.java:144)
      at java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1705)
      at org.apache.iotdb.db.mpp.execution.fragment.FragmentInstanceManager.execDataQueryFragmentInstance(FragmentInstanceManager.java:115)
      at org.apache.iotdb.db.consensus.statemachine.DataRegionStateMachine.read(DataRegionStateMachine.java:269)
      at org.apache.iotdb.consensus.iot.IoTConsensusServerImpl.read(IoTConsensusServerImpl.java:288)
      at org.apache.iotdb.consensus.iot.IoTConsensus.read(IoTConsensus.java:184)
      at org.apache.iotdb.db.mpp.execution.executor.RegionReadExecutor.execute(RegionReadExecutor.java:46)
      at org.apache.iotdb.db.mpp.plan.scheduler.FragmentInstanceDispatcherImpl.dispatchLocally(FragmentInstanceDispatcherImpl.java:311)
      at org.apache.iotdb.db.mpp.plan.scheduler.FragmentInstanceDispatcherImpl.dispatchOneInstance(FragmentInstanceDispatcherImpl.java:211)
      at org.apache.iotdb.db.mpp.plan.scheduler.FragmentInstanceDispatcherImpl.dispatchRead(FragmentInstanceDispatcherImpl.java:112)
      at org.apache.iotdb.db.mpp.plan.scheduler.FragmentInstanceDispatcherImpl.dispatch(FragmentInstanceDispatcherImpl.java:99)
      at org.apache.iotdb.db.mpp.plan.scheduler.ClusterScheduler.start(ClusterScheduler.java:116)
      at org.apache.iotdb.db.mpp.plan.execution.QueryExecution.schedule(QueryExecution.java:306)
      at org.apache.iotdb.db.mpp.plan.execution.QueryExecution.start(QueryExecution.java:221)
      at org.apache.iotdb.db.mpp.plan.Coordinator.execute(Coordinator.java:161)
      at org.apache.iotdb.db.service.thrift.impl.ClientRPCServiceImpl.executeStatementInternal(ClientRPCServiceImpl.java:218)
      at org.apache.iotdb.db.service.thrift.impl.ClientRPCServiceImpl.executeStatementV2(ClientRPCServiceImpl.java:480)
      at org.apache.iotdb.service.rpc.thrift.IClientRPCService$Processor$executeStatementV2.getResult(IClientRPCService.java:3629)
      at org.apache.iotdb.service.rpc.thrift.IClientRPCService$Processor$executeStatementV2.getResult(IClientRPCService.java:3609)
      at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:38)
      at org.apache.iotdb.db.service.thrift.ProcessorWithMetrics.process(ProcessorWithMetrics.java:64)
      at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:248)
      at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
      at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
      at java.base/java.lang.Thread.run(Thread.java:834)
      2023-02-28 17:40:47,816 [pool-47-IoTDB-ClientRPC-Processor-2$20230228_094037_00001_1.6.0] WARN o.a.i.d.m.p.s.FragmentInstanceDispatcherImpl:313 - The system can't allow more query tasks.
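
      For context, the stack trace shows the failure originating from a Guava Preconditions.checkState call inside IndexedBlockingReserveQueue.push: the driver scheduler's ready queue has a fixed capacity, and once it is full, newly submitted driver tasks are rejected with this IllegalStateException. A minimal sketch of that fail-fast pattern (an illustration only, not the actual IoTDB source; the class name and capacity field here are assumptions):

      import com.google.common.base.Preconditions;
      import java.util.ArrayDeque;
      import java.util.Queue;

      // Sketch of a bounded "reserve" queue whose push fails fast when full,
      // mirroring the checkState call visible in the stack trace above.
      public class BoundedReserveQueueSketch<E> {
        private final int maxCapacity; // hypothetical hard limit on queued tasks
        private final Queue<E> queue = new ArrayDeque<>();

        public BoundedReserveQueueSketch(int maxCapacity) {
          this.maxCapacity = maxCapacity;
        }

        public synchronized void push(E element) {
          // Surfaces as IllegalStateException: "The system can't allow more query tasks."
          Preconditions.checkState(
              queue.size() < maxCapacity, "The system can't allow more query tasks.");
          queue.add(element);
        }
      }

      Per the trace, DriverScheduler.submitDrivers pushes each fragment instance's drivers into this shared ready queue, so a single align-by-device query fanned out across 10 data regions can exhaust the capacity on its own.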

      TEST ENV
      1. 192.168.10.73 (48 CPUs)
      /data/mpp_test/m_0228_2_fc8d05b

      ConfigNode env
      MAX_HEAP_SIZE="8G"
      DataNode env
      MAX_HEAP_SIZE="32G"
      MAX_DIRECT_MEMORY_SIZE="16G"
      COMMON prop:
      data_region_group_extension_policy=CUSTOM
      default_data_region_group_num_per_database=10
      time_partition_interval=86400000
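
      A quick sanity check on these values (plain arithmetic, not IoTDB code): time_partition_interval is specified in milliseconds, and 86400000 ms is exactly one day, which matches the single time partition described above.

      // Hypothetical standalone check; the class name is ours, not IoTDB's.
      public class TimePartitionIntervalCheck {
        public static void main(String[] args) {
          long intervalMs = 86_400_000L;          // value from the config above
          long oneDayMs = 24L * 60 * 60 * 1000;   // 86,400,000 ms
          System.out.println(intervalMs == oneDayMs); // prints: true
        }
      }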

      1C1D (start-standalone.sh)

      2. benchmark writes data (part1.conf, part2.conf):
      Run benchmark with the part1.conf configuration.
      In the CLI, execute: flush;
      Run benchmark with the part2.conf configuration.
      In the CLI, execute: flush;

      3. cli -e "select min_time(s_0),max_time(s_0),count(s_0) from root.** align by device;"
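
      For reference, the same query can be issued programmatically through the IoTDB Java Session API rather than the CLI (a sketch; the port and credentials are the IoTDB defaults used as placeholders, and the SessionDataSet package location varies across IoTDB versions):

      import org.apache.iotdb.session.Session;
      import org.apache.iotdb.session.SessionDataSet;

      // Sketch: reproduce step 3 via the Session API instead of the CLI.
      public class ReproQuery {
        public static void main(String[] args) throws Exception {
          Session session = new Session("192.168.10.73", 6667, "root", "root");
          session.open();
          SessionDataSet dataSet = session.executeQueryStatement(
              "select min_time(s_0),max_time(s_0),count(s_0) from root.** align by device");
          while (dataSet.hasNext()) {
            System.out.println(dataSet.next()); // one RowRecord per device
          }
          dataSet.closeOperationHandle();
          session.close();
        }
      }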

      Attachments

        1. image-2023-02-28-17-50-41-215.png (8 kB, 刘珍)
        2. ip73_logs.tar.gz (38 kB, 刘珍)
        3. part2.conf (14 kB, 刘珍)
        4. part1.conf (14 kB, 刘珍)


          People

            Assignee: Yuan Tian (jackietien)
            Reporter: 刘珍
            Votes: 0
            Watchers: 1


              Agile

                Completed Sprint: 2023-2-Query (ended 06/Mar/23)
