SPARK-11373

Add metrics to the History Server and providers


Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Incomplete
    • Affects Version/s: 1.6.0
    • Fix Version/s: None
    • Component/s: Spark Core

    Description

      The History Server doesn't publish metrics about JVM load or anything from the history provider plugins. This means that performance problems caused by massive job histories aren't visible to management tools, nor are any provider-generated metrics such as the time to load histories, failed history loads, or the number of connectivity failures when talking to remote services.

      If the History Server set up a metrics registry and offered the option to publish its metrics, then management tools could view this data.

      1. The metrics registry would need to be passed down to the instantiated ApplicationHistoryProvider, in order for it to register its metrics (sketched below).
      2. If the Codahale metrics servlet were registered under a path such as /metrics, the values would be visible as HTML and JSON, without the need for dedicated management tools.
      3. Integration tests could also retrieve the JSON-formatted data and use it as part of the test suites.
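
      A minimal sketch of how point 1 might look, assuming a Codahale (Dropwizard) MetricRegistry is created by the History Server and handed to the provider's constructor. The provider class, method, and metric names below are illustrative, not the existing Spark API:

      {code:scala}
      import com.codahale.metrics.{MetricRegistry, Timer}

      // Hypothetical provider showing how provider-generated metrics could be
      // registered against a registry owned by the History Server.
      class ExampleHistoryProvider(registry: MetricRegistry) {

        // Time taken to replay an event log into an application UI.
        private val loadTimer: Timer =
          registry.timer(MetricRegistry.name("history.provider", "appui.load"))

        // Histories that could not be loaded (corrupt logs, IO/connectivity failures).
        private val loadFailures =
          registry.counter(MetricRegistry.name("history.provider", "appui.load.failures"))

        def loadAppUI(appId: String): Unit = {
          val timerContext = loadTimer.time()
          try {
            // ... replay the event log and build the UI for appId ...
          } catch {
            case e: Exception =>
              loadFailures.inc()
              throw e
          } finally {
            timerContext.stop()
          }
        }
      }
      {code}

      For point 2, the same registry could then be exposed over HTTP by mounting com.codahale.metrics.servlets.MetricsServlet (which renders the registry as JSON) under a path such as /metrics on the History Server's web UI, giving integration tests a JSON endpoint to poll.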


          People

            Assignee: Unassigned
            Reporter: Steve Loughran (stevel@apache.org)
            Votes: 0
            Watchers: 7

