HDFS-1445 (sub-task of HDFS-1443: Improve Datanode startup time)

Batch the calls in DataStorage to FileUtil.createHardLink(), so we call it once per directory instead of once per file

    Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.20.2
    • Fix Version/s: 0.20.204.0, 0.23.0
    • Component/s: datanode
    • Labels:
      None
    • Hadoop Flags:
      Reviewed
    • Release Note:
      Batch hardlinking during "upgrade" snapshots, cutting time from approximately 8 minutes per volume to approximately 8 seconds. Validated on both Linux and Windows. Depends on prior integration with the patch for HADOOP-7133.
    • Tags:
      hard links, upgrade, snapshot

      Description

      It was a bit of a puzzle why we can do a full scan of a disk in about 30 seconds during FSDir() or getVolumeMap(), yet the same disk took 11 minutes to do "upgrade" replication via hardlinks. It turns out that the org.apache.hadoop.fs.FileUtil.createHardLink() method makes an outcall to Runtime.getRuntime().exec() to use the native filesystem's hardlink capability. So it forks a full-weight external process, and we invoke it once for each individual file to be replicated.
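
      As a sketch of that per-file pattern (the class name and paths here are hypothetical; the real logic lives in FileUtil.createHardLink()), every file pays the full cost of forking /bin/ln:

          import java.io.File;
          import java.io.IOException;

          public class PerFileHardLink {
              // One full fork+exec of /bin/ln for every file -- the costly
              // pattern described above.
              static void linkOnePerFile(File srcDir, File dstDir)
                      throws IOException, InterruptedException {
                  String[] names = srcDir.list();
                  if (names == null) {
                      throw new IOException("cannot list " + srcDir);
                  }
                  for (String name : names) {
                      Process p = Runtime.getRuntime().exec(new String[] {
                          "ln",
                          new File(srcDir, name).getPath(),
                          new File(dstDir, name).getPath() });
                      if (p.waitFor() != 0) {
                          throw new IOException("ln failed for " + name);
                      }
                  }
              }

              public static void main(String[] args) throws Exception {
                  // Hypothetical DataNode-style paths, for illustration only.
                  linkOnePerFile(new File("/data/dfs/current/subdir0"),
                                 new File("/data/dfs/previous/subdir0"));
              }
          }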

      As a simple check on the possible cost of this approach, I built a Perl test script (under Linux on a production-class datanode). Perl also uses a compiled and optimized p-code engine, and it has both native support for hardlinks and the ability to do "exec".

      • A simple script to create 256,000 files in a directory tree organized like the Datanode, took 10 seconds to run.
      • Replicating that directory tree using hardlinks, the same way as the Datanode, took 12 seconds using native hardlink support.
      • The same replication using outcalls to exec, one per file, took 256 seconds!
      • Batching the calls, and doing 'exec' once per directory instead of once per file, took 16 seconds.
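
      For comparison, here is a minimal Java sketch of the batched approach from the last bullet: a single ln invocation per directory, passing every source file at once, so the fork/exec cost is paid once per directory rather than once per file. (The helper is illustrative only; the batched API that HDFS actually uses was added to Hadoop Common by HADOOP-7133.)

          import java.io.File;
          import java.io.IOException;
          import java.util.ArrayList;
          import java.util.List;

          public class BatchedHardLink {
              // One fork+exec of /bin/ln for the whole directory:
              //     ln src/file1 src/file2 ... dstDir
              // POSIX ln links each named file into dstDir under its own base name.
              static void linkOnePerDir(File srcDir, File dstDir)
                      throws IOException, InterruptedException {
                  String[] names = srcDir.list();
                  if (names == null || names.length == 0) {
                      return;  // nothing to link
                  }
                  List<String> cmd = new ArrayList<>();
                  cmd.add("ln");
                  for (String name : names) {
                      cmd.add(new File(srcDir, name).getPath());
                  }
                  cmd.add(dstDir.getPath());
                  Process p = new ProcessBuilder(cmd).inheritIO().start();
                  if (p.waitFor() != 0) {
                      throw new IOException("ln failed for directory " + srcDir);
                  }
              }

              public static void main(String[] args) throws Exception {
                  // Hypothetical DataNode-style paths, for illustration only.
                  linkOnePerDir(new File("/data/dfs/current/subdir0"),
                                new File("/data/dfs/previous/subdir0"));
              }
          }

      One caveat for the batched form: the argument list has to stay within the operating system's command-line length limit, so a very large directory may need to be linked in several chunks.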

      Obviously, your mileage will vary with the number of blocks per volume. A volume with fewer than about 4,000 blocks will have only 65 directories, while a volume with more than 4K and fewer than about 250K blocks will have roughly 4,200 directories. There are two files per block (the data file and the .meta file), so the average number of files per directory may vary from about 2:1 to 500:1. For example, a node with 50K blocks and four volumes has 100K files, or 25K files per volume, for an average of about 6 files per directory. So this change may be expected to cut upgrade time from, say, 12 minutes per volume down to about 2.
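
      To make that arithmetic concrete, a quick back-of-the-envelope in Java, using only the example figures from the paragraph above:

          public class FilesPerDirEstimate {
              public static void main(String[] args) {
                  // Constants are the example figures cited above, not values
                  // read from the DataNode source.
                  long blocks = 50_000;                    // blocks on the node
                  int volumes = 4;
                  long files = blocks * 2;                 // data file + .meta file per block
                  long filesPerVolume = files / volumes;   // 100,000 / 4 = 25,000
                  int dirsPerVolume = 4200;                // directories for this block range
                  System.out.printf("~%.0f files per directory%n",
                      (double) filesPerVolume / dirsPerVolume);   // prints ~6
              }
          }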

          Activity

          Harsh J made changes -
          Fix Version/s: 0.20.204.0
          Owen O'Malley made changes -
          Status: Resolved → Closed
          Matt Foley made changes -
          Link: This issue blocks HDFS-2126
          Matt Foley made changes -
          Fix Version/s: 0.20.204.0
          Jakob Homan made changes -
          Status: Patch Available → Resolved
          Hadoop Flags: Reviewed
          Resolution: Fixed
          Matt Foley made changes -
          Link: This issue blocks HADOOP-7182
          Matt Foley made changes -
          Status: Open → Patch Available
          Release Note: "... Requires coordinated change in both COMMON and HDFS." → "... Depends on prior integration with patch for HADOOP-7133."
          Fix Version/s: 0.23.0
          Fix Version/s: 0.22.0
          Matt Foley made changes -
          Attachment: HDFS-1445-trunk.v22_common_1-of-2.patch
          Matt Foley made changes -
          Status: Patch Available → Open
          Matt Foley made changes -
          Link: This issue is blocked by HADOOP-7133
          Matt Foley made changes -
          Link: This issue is cloned as HDFS-1617
          Matt Foley made changes -
          Attachment: HDFS-1445-trunk.v22_common_1-of-2.patch
          Attachment: HDFS-1445-trunk.v22_hdfs_2-of-2.patch
          Matt Foley made changes -
          Status: Open → Patch Available
          Release Note: Batch hardlinking during "upgrade" snapshots, cutting time from approximately 8 minutes per volume to approximately 8 seconds. Validated on both Linux and Windows. Requires coordinated change in both COMMON and HDFS.
          Fix Version/s: 0.22.0
          Nigel Daley made changes -
          Fix Version/s: 0.22.0
          Matt Foley made changes -
          Assignee: Matt Foley
          Fix Version/s: 0.22.0
          Affects Version/s: 0.20.2
          Component/s: data-node
          Matt Foley created issue -

            People

            • Assignee:
              Matt Foley
            • Reporter:
              Matt Foley
            • Votes:
              0
            • Watchers:
              10
