Hadoop Common / HADOOP-9629

Support Windows Azure Storage - Blob as a file system in Hadoop

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.7.0
    • Component/s: tools
    • Labels: None
    • Hadoop Flags: Reviewed
    • Release Note:
      Hadoop now supports integration with Azure Storage as an alternative Hadoop Compatible File System.

      Description

      This JIRA incorporates adding a new file system implementation for accessing Windows Azure Storage - Blob from within Hadoop, such as using blobs as input to MR jobs or configuring MR jobs to put their output directly into blob storage.
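A hedged sketch of the kind of configuration this enables (the property naming convention, account name, and container name below are illustrative assumptions, not taken from the patch): the storage account key goes in core-site.xml, after which jobs can name blob paths directly.

```xml
<!-- core-site.xml (hypothetical sketch): "myaccount" and the key value are placeholders -->
<configuration>
  <property>
    <!-- assumed naming convention for the storage account's access key -->
    <name>fs.azure.account.key.myaccount.blob.core.windows.net</name>
    <value>BASE64_ACCOUNT_KEY</value>
  </property>
</configuration>
```

With that in place, an MR job could, for example, read its input from wasb://mycontainer@myaccount/input and write its output to wasb://mycontainer@myaccount/output.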

      High level design

      At a high level, the code here extends the FileSystem class to provide an implementation for accessing blob storage; the scheme wasb is used for accessing it over HTTP, and wasbs for accessing over HTTPS. We use the URI scheme:

      wasb[s]://<container>@<account>/path/to/file

      to address individual blobs. We use the standard Azure Java SDK (com.microsoft.windowsazure) to do most of the work. In order to map a hierarchical file system over the flat name-value pair nature of blob storage, we create a specially tagged blob named path/to/dir whenever we create a directory called path/to/dir; files under it are then stored as normal blobs such as path/to/dir/file. Many metrics are implemented for it using the Metrics2 interface. Tests are implemented mostly against a mock implementation of the Azure SDK functionality, with an option to test against real blob storage if configured (instructions are provided in README.txt).
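As an illustration of the directory-mapping idea above (a self-contained sketch, not the patch's actual code; the marker convention and class names here are simplified assumptions), a flat name-value store can emulate a hierarchy by writing a tagged marker entry for each directory and listing children with a prefix scan:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

/** Sketch of mapping a file hierarchy onto a flat blob namespace. */
public class FlatNamespaceSketch {
    // Flat key -> value map standing in for a blob container.
    private final TreeMap<String, byte[]> blobs = new TreeMap<>();
    // Hypothetical marker payload tagging a blob as a directory.
    private static final byte[] DIR_MARKER = new byte[0];

    /** mkdir: store a specially tagged blob named after the directory. */
    public void mkdir(String path) {
        blobs.put(path, DIR_MARKER);
    }

    /** create: store file contents as an ordinary blob under the flat name. */
    public void create(String path, byte[] data) {
        blobs.put(path, data);
    }

    /** list: direct children of a directory are found by a prefix scan. */
    public List<String> list(String dir) {
        String prefix = dir + "/";
        List<String> children = new ArrayList<>();
        for (String key : blobs.tailMap(prefix).keySet()) {
            if (!key.startsWith(prefix)) break;        // past the prefix range
            // Report only direct children, not deeper descendants.
            if (key.indexOf('/', prefix.length()) < 0) children.add(key);
        }
        return children;
    }

    public static void main(String[] args) {
        FlatNamespaceSketch fs = new FlatNamespaceSketch();
        fs.mkdir("path/to/dir");                       // tagged directory blob
        fs.create("path/to/dir/file", "hello".getBytes());
        System.out.println(fs.list("path/to/dir"));    // [path/to/dir/file]
    }
}
```

The marker blob is what lets an empty directory exist at all in a flat namespace; without it, a directory would vanish as soon as its last file was deleted.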

      Credits and history

      This has been ongoing work for a while, and the early version of this work can be seen in HADOOP-8079. This JIRA is a significant revision of that and we'll post the patch here for Hadoop trunk first, then post a patch for branch-1 as well for backporting the functionality if accepted. Credit for this work goes to the early team: Min Wei, David Lao, Lengning Liu and Alexander Stojanovic as well as multiple people who have taken over this work since then (hope I don't forget anyone): Dexter Bradshaw, Johannes Klein, Ivan Mitic, Michael Rys, Mostafa Elhemali, Brian Swan, [~mikelid], Xi Fang, and Chuan Liu.

      Test

      Besides unit tests, we have used WASB as the default file system in our service product. (HDFS is also used, but not as the default file system.) Various customer and test workloads have been run against clusters with this configuration for quite some time. The current patch reflects the version of the code tested and used in our production environment.

        Attachments

        1. HADOOP-9629.patch
          426 kB
          Mostafa Elhemali
        2. HADOOP-9629.2.patch
          427 kB
          Mostafa Elhemali
        3. HADOOP-9629.3.patch
          477 kB
          Chuan Liu
        4. HADOOP-9629.trunk.1.patch
          415 kB
          Mike Liddell
        5. HADOOP-9629 - Azure Filesystem - Information for developers.docx
          32 kB
          Mike Liddell
        6. HADOOP-9629 - Azure Filesystem - Information for developers.pdf
          120 kB
          Mike Liddell
        7. HADOOP-9629.trunk.2.patch
          417 kB
          Chris Nauroth
        8. HADOOP-9629.trunk.3.patch
          414 kB
          Mike Liddell
        9. HADOOP-9629.trunk.4.patch
          414 kB
          Mike Liddell
        10. HADOOP-9629.trunk.5.patch
          414 kB
          Chris Nauroth

              People

              • Assignee: Chris Nauroth (cnauroth)
              • Reporter: Mostafa Elhemali (mostafae)
              • Votes: 1
              • Watchers: 22
