Hadoop YARN
YARN-6289

Fails to achieve data locality when running MapReduce and Spark on HDFS


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: distributed-scheduling
    • Labels: None
    • Environment:
    • Target Version/s:

      Description

      When running a simple wordcount experiment on YARN, I noticed that the task failed to achieve data locality, even though no other job was running on the cluster at the same time. The experiment was done on a 7-node cluster (1 master, 6 data nodes/node managers), and the input of the wordcount job (both Spark and MapReduce) was a single-block file in HDFS with two-way replication (replication factor = 2). I ran wordcount on YARN 10 times. The results show that only 30% of tasks achieved data locality, which looks like the result of random task placement. The experiment details are in the attachment; feel free to reproduce the experiments.
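
      The observed rate is consistent with purely random placement: with 2 replicas spread over 6 worker nodes, a task placed on a uniformly random node lands on a replica holder with probability 2/6 ≈ 33%, close to the reported 30%. A small sketch (not part of the issue; node count and replication factor taken from the description above) that checks this both analytically and by simulation:

```python
import random

# Assumed setup from the report: 6 worker nodes, a single HDFS block
# replicated on 2 of them (replication factor = 2).
NUM_NODES = 6
REPLICAS = 2

# Analytical probability that a randomly placed task is node-local.
p_random_locality = REPLICAS / NUM_NODES  # 2/6 ~ 0.33

# Monte Carlo check over many hypothetical job runs.
random.seed(42)
nodes = list(range(NUM_NODES))
trials = 100_000
hits = 0
for _ in range(trials):
    replica_nodes = set(random.sample(nodes, REPLICAS))  # where HDFS put the block
    task_node = random.choice(nodes)                     # locality-blind placement
    if task_node in replica_nodes:
        hits += 1

print(f"analytical: {p_random_locality:.2f}, simulated: {hits / trials:.2f}")
```

      Both numbers come out near 0.33, so a ~30% locality rate over 10 runs is what locality-blind scheduling would produce.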

        Attachments

        1. YARN-RackAwareness.docx
          118 kB
          Huangkaixuan
        2. Hadoop_Spark_Conf.zip
          38 kB
          Huangkaixuan
        3. YARN-DataLocality.docx
          197 kB
          Huangkaixuan


              People

              • Assignee:
                Unassigned
              • Reporter:
                Huangkx6810 Huangkaixuan
              • Votes:
                0
              • Watchers:
                5

                Dates

                • Created:
                • Updated:
                • Resolved: