Flink / FLINK-25801

Add a CPU processor count metric for the TaskManager


Details

    Description

      Flink already exposes CPU load metrics for its processes; if users know the CPU environment (the number of processors), they can determine whether their job is IO-bound or CPU-bound. However, Flink does not expose the number of CPU processors the container can access, so when TaskManagers run in different CPU environments (different core counts) it is hard to calculate how much CPU Flink actually uses.

       

      // CPU metrics currently registered by Flink
      metrics.<Double, Gauge<Double>>gauge("Load", mxBean::getProcessCpuLoad);
      metrics.<Long, Gauge<Long>>gauge("Time", mxBean::getProcessCpuTime);
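
      A sketch of the proposed metric, in the same registration style as the gauges above. Flink's Gauge interface is replaced by a minimal stand-in so the snippet is self-contained, and the metric name "Cores" is an assumption, not an agreed name:

      //代码占位符
      import java.lang.management.ManagementFactory;
      import java.lang.management.OperatingSystemMXBean;

      public class CpuCoresMetricSketch {
          // Minimal stand-in for Flink's Gauge interface, so this sketch compiles on its own.
          interface Gauge<T> {
              T getValue();
          }

          // Returns a gauge reporting the number of processors visible to the JVM,
          // analogous to registering: metrics.gauge("Cores", mxBean::getAvailableProcessors)
          static Gauge<Integer> availableProcessorsGauge() {
              OperatingSystemMXBean mxBean = ManagementFactory.getOperatingSystemMXBean();
              return mxBean::getAvailableProcessors;
          }

          public static void main(String[] args) {
              System.out.println("Cores: " + availableProcessorsGauge().getValue());
          }
      }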

      For comparison, Spark exposes totalCores in ExecutorSummary to report the number of cores available to each executor.

      https://spark.apache.org/docs/3.1.1/monitoring.html#:~:text=totalCores,in%20this%20executor.

      // From Spark's Prometheus metrics endpoint, which emits per-executor metrics including totalCores
      val sb = new StringBuilder
      sb.append(s"""spark_info{version="$SPARK_VERSION_SHORT", revision="$SPARK_REVISION"} 1.0\n""")
      val store = uiRoot.asInstanceOf[SparkUI].store
      store.executorList(true).foreach { executor =>
        val prefix = "metrics_executor_"
        val labels = Seq(
          "application_id" -> store.applicationInfo.id,
          "application_name" -> store.applicationInfo.name,
          "executor_id" -> executor.id
        ).map { case (k, v) => s"""$k="$v"""" }.mkString("{", ", ", "}")
        sb.append(s"${prefix}rddBlocks$labels ${executor.rddBlocks}\n")
        sb.append(s"${prefix}memoryUsed_bytes$labels ${executor.memoryUsed}\n")
        sb.append(s"${prefix}diskUsed_bytes$labels ${executor.diskUsed}\n")
        sb.append(s"${prefix}totalCores$labels ${executor.totalCores}\n") 
      }
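
      With a cores metric available alongside the existing normalized Load gauge, absolute CPU consumption can be derived. A worked example (the helper name is hypothetical; ProcessCpuLoad is a value in [0.0, 1.0] relative to all cores):

      //代码占位符
      public class CpuUsageExample {
          // Hypothetical helper: ProcessCpuLoad is a fraction of total CPU capacity,
          // so multiplying by the processor count yields the number of cores in use.
          static double usedCores(double processCpuLoad, int availableProcessors) {
              return processCpuLoad * availableProcessors;
          }

          public static void main(String[] args) {
              // A TaskManager reporting Load = 0.25 in an 8-core container uses 2.0 cores.
              System.out.println(usedCores(0.25, 8));  // prints 2.0
          }
      }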


People

    Assignee: Unassigned
    Reporter: kwafor 王俊博
    Votes: 0
    Watchers: 2
