Hadoop YARN / YARN-2082

Support for alternative log aggregation mechanism


Details

    • Type: New Feature
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved

    Description

      I will post a more detailed design later. Here is a brief summary; I would like to get early feedback.

      Problem Statement:

      The current implementation of log aggregation creates one HDFS file for each {application, nodemanager} pair. These files are relatively small, in the range of 1-2 MB. In a large cluster with many applications and many nodemanagers, this ends up creating lots of small files in HDFS, which puts pressure on the HDFS NN in the following ways.

      1. It increases NN memory usage. This is mitigated by having the history server delete old log files in HDFS.
      2. Runtime RPC load on HDFS. Each log aggregation file introduces several NN RPCs such as create, getAdditionalBlock, complete, and rename. When the cluster is busy, this RPC load has an impact on NN performance.

      In addition, to support non-MR applications on YARN, we might need to support aggregation for long-running applications.

      Design choices:

      1. Don't aggregate all the logs, as in YARN-221.
      2. Create a dedicated HDFS namespace used only for log aggregation.
      3. Write logs to some key-value store like HBase. HBase's RPC load on the NN will be much lower.
      4. Decentralize the application-level log aggregation to NMs. All logs for a given application are aggregated first by a dedicated NM before they are pushed to HDFS.
      5. Have the NM aggregate logs on a regular basis; each of these log files will have data from different applications, so there needs to be some index for quick lookup (see the index sketch after this list).
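
      To make design choice 5 concrete, here is a minimal sketch of the kind of index such a rolled-up log file would need; the class and field names are made up for illustration and are not an existing YARN API. Each entry maps one (application, container) pair to a byte range inside the NM's combined log file, so a reader can seek straight to the logs it needs.

      // Hypothetical index record for an NM-side rolled-up log file (design choice 5).
      // Names are illustrative only; they do not come from YARN.
      public class AggregatedLogIndexEntry {
        private final String applicationId; // owning application
        private final String containerId;   // container whose logs this entry points to
        private final long offset;          // byte offset of this container's logs in the rolled-up file
        private final long length;          // number of bytes belonging to this container

        public AggregatedLogIndexEntry(String applicationId, String containerId,
                                       long offset, long length) {
          this.applicationId = applicationId;
          this.containerId = containerId;
          this.offset = offset;
          this.length = length;
        }

        public String getApplicationId() { return applicationId; }
        public String getContainerId()   { return containerId; }
        public long getOffset()          { return offset; }
        public long getLength()          { return length; }
      }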

      Proposal:

      1. Make YARN log aggregation pluggable for both the read and the write path (see the interface sketch below). Note that Hadoop FileSystem provides an abstraction and we could ask an alternative log aggregator to implement a compatible FileSystem, but that seems to be overkill.

      2. Provide a log aggregation plugin that writes to HBase (see the row key sketch below). The schema design needs to support efficient reads on a per-application as well as a per-application+container basis; in addition, it shouldn't create hotspots in a cluster where certain users might create more jobs than others. For example, we can use hash($user + $applicationId) + containerId as the row key.
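
      For proposal item 1, here is a minimal sketch of what the pluggable contract could look like, assuming a simple store abstraction with separate write and read entry points; the interface name and method signatures are hypothetical, not an existing YARN API.

      import java.io.Closeable;
      import java.io.IOException;
      import java.io.InputStream;

      // Hypothetical plugin contract for proposal item 1. It only illustrates the idea
      // of covering both the write path (used by the NM at aggregation time) and the
      // read path (used by the log CLI and web UIs).
      public interface LogAggregationStore extends Closeable {

        // Write path: called by the NM for each finished container.
        void writeContainerLogs(String applicationId, String containerId,
                                String nodeId, InputStream logs) throws IOException;

        // Read path: fetch all aggregated logs for one application.
        InputStream readApplicationLogs(String applicationId) throws IOException;

        // Read path: fetch the aggregated logs of one container of an application.
        InputStream readContainerLogs(String applicationId, String containerId)
            throws IOException;
      }

      The concrete store would then be selected through configuration, the same way other pluggable YARN components are wired in.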
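
      For proposal item 2, here is a sketch of the suggested row key, hash($user + $applicationId) + containerId, built with the HBase client's Bytes utility; the use of MD5 for the hash is an assumption made only for illustration. The fixed-width hash prefix spreads different users' applications across regions to avoid hotspots, while all containers of one application share the prefix, so per-application reads can use a prefix scan (e.g. a Scan with a PrefixFilter) and per-container reads can use the full key.

      import java.nio.charset.StandardCharsets;
      import java.security.MessageDigest;
      import java.security.NoSuchAlgorithmException;

      import org.apache.hadoop.hbase.util.Bytes;

      // Sketch of the proposed row key: hash(user + applicationId) + containerId.
      public final class LogRowKey {

        private LogRowKey() {}

        // Full row key for one container's logs.
        public static byte[] of(String user, String applicationId, String containerId) {
          return Bytes.add(applicationPrefix(user, applicationId), Bytes.toBytes(containerId));
        }

        // Common prefix shared by every container row of one application;
        // a prefix scan over it returns the whole application's logs.
        public static byte[] applicationPrefix(String user, String applicationId) {
          return md5(user + applicationId);
        }

        // MD5 is just one possible fixed-width hash; any uniform hash would do.
        private static byte[] md5(String s) {
          try {
            return MessageDigest.getInstance("MD5")
                .digest(s.getBytes(StandardCharsets.UTF_8));
          } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
          }
        }
      }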

            People

              Assignee: Unassigned
              Reporter: Ming Ma (mingma)