Details

    • Release Note:
      Introduced FUSE module for HDFS. Module allows mount of HDFS as a Unix filesystem, and optionally the export of that mount point to other machines. Writes are disabled. rmdir, mv, mkdir, rm are supported, but not cp, touch, and the like. Usage information is attached to the Jira record.

      Description

      This is a FUSE module for Hadoop's HDFS.

      It allows one to mount HDFS as a Unix filesystem and optionally export
      that mount point to other machines.

      rmdir, mv, mkdir, and rm are all supported, just not cp, touch, and the like; actual writes require: https://issues.apache.org/jira/browse/HADOOP-3485

      For the most up-to-date documentation, see: http://wiki.apache.org/hadoop/MountableHDFS

      BUILDING:

      Requirements:

      1. a Linux kernel > 2.6.9, or the kernel module from FUSE - i.e., you
      compile it yourself and then modprobe it. You are better off with the
      former option if possible. (Note that, for now, a kernel with fuse
      built in does not allow you to export the mount through NFS, so be
      warned. See the FUSE mailing list for more about this.)

      2. FUSE installed in /usr/local, or its location given via the
      FUSE_HOME ant environment variable (see the sketch below).
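
      For requirement 2, a minimal sketch, assuming FUSE's usual
      autoconf-style source build; the tarball version and the prefixes are
      illustrative:

         # build and install FUSE from source (version is illustrative)
         tar xzf fuse-2.7.4.tar.gz && cd fuse-2.7.4
         ./configure --prefix=/usr/local
         make && make install
         modprobe fuse                 # load the kernel module

         # or, if FUSE lives somewhere non-standard:
         export FUSE_HOME=/opt/fuse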

      To build:

      1. in HADOOP_HOME: ant compile-contrib -Dcompile.c++=1 -Dfusedfs=1 -Dlibhdfs=1

      NOTE: for amd64 architecture, libhdfs will not compile unless you edit
      the Makefile in src/c++/libhdfs/Makefile and set OS_ARCH=amd64
      (probably the same for others too).
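
      The edit itself is a one-liner; the exact position of the line within
      the Makefile may vary:

         # src/c++/libhdfs/Makefile
         OS_ARCH=amd64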

      --------------------------------------------------------------------------------

      CONFIGURING:

      Look at all the paths in fuse_dfs_wrapper.sh and either correct them
      or set them in your environment before running. (Note: for automount
      and mounting as root, you probably cannot control the environment, so
      it is best to set them in the wrapper; see the sketch below.)
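
      A hedged sketch of such an environment; the paths are illustrative,
      and the authoritative variable names are whatever fuse_dfs_wrapper.sh
      actually references:

         # illustrative values only - check fuse_dfs_wrapper.sh for the real names
         export HADOOP_HOME=/usr/local/hadoop
         export JAVA_HOME=/usr/lib/jvm/java
         export LD_LIBRARY_PATH=$HADOOP_HOME/build/libhdfs:$JAVA_HOME/jre/lib/amd64/server:/usr/local/lib
         export CLASSPATH=$(echo $HADOOP_HOME/*.jar $HADOOP_HOME/lib/*.jar | tr ' ' ':')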

      INSTALLING:

      1. mkdir /mnt/dfs (or wherever you want to mount it)

      2. fuse_dfs_wrapper.sh dfs://hadoop_server1.foo.com:9000 /mnt/dfs -d
      and, from another terminal, try ls /mnt/dfs

      If step 2 works, try again without debug mode, i.e., drop -d (see the sketch below).

      (Note: common problems are that libhdfs.so, libjvm.so, or libfuse.so
      is missing from your LD_LIBRARY_PATH, or that your CLASSPATH does not
      contain the hadoop and other required jars.)
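
      Putting the steps together, a minimal verification pass (server name
      and port are placeholders; fusermount is the standard FUSE unmount
      utility):

         mkdir /mnt/dfs
         ./fuse_dfs_wrapper.sh dfs://hadoop_server1.foo.com:9000 /mnt/dfs -d
         ls /mnt/dfs                   # from another terminal
         fusermount -u /mnt/dfs        # unmount the debug instance
         ./fuse_dfs_wrapper.sh dfs://hadoop_server1.foo.com:9000 /mnt/dfs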

      --------------------------------------------------------------------------------

      DEPLOYING:

      In a root shell, do the following:

      1. add the following to /etc/fstab -
      fuse_dfs#dfs://hadoop_server.foo.com:9000 /mnt/dfs fuse
      allow_other,rw 0 0

      2. mount /mnt/dfs. Expect problems with fuse_dfs not being found; you
      will probably need to copy it to /sbin. Then expect problems finding
      the three libraries above; make them resolvable with ldconfig (see
      the sketch below).
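
      A sketch under those assumptions (the library directory is
      illustrative; use wherever libhdfs.so, libjvm.so, and libfuse.so
      actually live):

         cp fuse_dfs /sbin/                        # so mount can find the fuse_dfs helper
         echo /usr/local/lib >> /etc/ld.so.conf    # register the library directory
         ldconfig                                  # refresh the shared-library cache
         mount /mnt/dfs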

      --------------------------------------------------------------------------------

      EXPORTING:

      Add the following to /etc/exports:

      /mnt/hdfs *.foo.com(no_root_squash,rw,fsid=1,sync)
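
      Then apply and verify the export; the gateway hostname on the client
      side is a placeholder:

         exportfs -ra                  # re-read /etc/exports
         showmount -e localhost        # confirm /mnt/hdfs is listed
         # on a client host in *.foo.com:
         mount -t nfs gateway.foo.com:/mnt/hdfs /mnt/hdfs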

      NOTE - you cannot export this with a FUSE module built into the kernel
      (e.g., kernel 2.6.17). For info on this, refer to the FUSE wiki.

      --------------------------------------------------------------------------------

      ADVANCED:

      You may want to ensure certain directories cannot be deleted from the
      shell until the filesystem has permission support. You can set this
      in src/contrib/fuse-dfs/build.xml.

      Attachments

      1. patch6.txt (94 kB, Pete Wyckoff)
      2. HADOOP-4.patch (65 kB, Doug Cutting)
      3. HADOOP-4.patch (64 kB, Pete Wyckoff)
      4. HADOOP-4.patch (63 kB, Pete Wyckoff)
      5. HADOOP-4.patch (63 kB, Pete Wyckoff)
      6. HADOOP-4.patch (95 kB, Doug Cutting)
      7. patch6.txt (94 kB, Pete Wyckoff)
      8. patch5.txt (94 kB, Pete Wyckoff)
      9. patch4.txt (94 kB, Pete Wyckoff)
      10. HADOOP-4.patch (95 kB, Doug Cutting)
      11. HADOOP-4.patch (95 kB, Doug Cutting)
      12. patch4.txt (94 kB, Pete Wyckoff)
      13. patch4.txt (94 kB, Pete Wyckoff)
      14. patch3.txt (97 kB, Pete Wyckoff)
      15. patch2.txt (61 kB, Pete Wyckoff)
      16. patch.txt (61 kB, Pete Wyckoff)
      17. patch.txt (80 kB, Pete Wyckoff)
      18. fuse_dfs.tar.gz (21 kB, Pete Wyckoff)
      19. fuse_dfs.c (25 kB, Craig Macdonald)
      20. fuse-dfs.tar.gz (172 kB, Pete Wyckoff)
      21. fuse-dfs.tar.gz (112 kB, Pete Wyckoff)
      22. fuse_dfs.c (23 kB, Pete Wyckoff)
      23. fuse_dfs.c (23 kB, Pete Wyckoff)
      24. fuse_dfs.sh (0.6 kB, Craig Macdonald)
      25. fuse-dfs.tar.gz (112 kB, Pete Wyckoff)
      26. fuse-dfs.tar.gz (5 kB, Pete Wyckoff)
      27. Makefile (0.2 kB, Sami Siren)
      28. fuse_dfs.c (23 kB, Pete Wyckoff)
      29. fuse-dfs.tar.gz (5 kB, Pete Wyckoff)
      30. fuse-j-hadoopfs-03.tar.gz (11 kB, Anurag Sharma)
      31. fuse_dfs.c (16 kB, Pete Wyckoff)
      32. fuse-hadoop-0.1.0_fuse-j.2.4_hadoop.0.5.0.tar.gz (27 kB, Nguyen Quoc Mai)
      33. fuse-hadoop-0.1.0_fuse-j.2.2.3_hadoop.0.5.0.tar.gz (27 kB, Nguyen Quoc Mai)
      34. fuse-hadoop-0.1.1.tar.gz (5 kB, John Xing)

      Activity

          John Xing created issue -
          John Xing made changes -
          Field Original Value New Value
          Attachment fuse-hadoop-0.1.1.tar.gz [ 12322643 ]
          Doug Cutting made changes -
          Component/s fs [ 12310689 ]
          Doug Cutting made changes -
          Link This issue is duplicated by HADOOP-17 [ HADOOP-17 ]
          Doug Cutting made changes -
          Workflow jira [ 12346653 ] no reopen closed [ 12372881 ]
          Doug Cutting made changes -
          Workflow no reopen closed [ 12372881 ] no-reopen-closed [ 12373213 ]
          Doug Cutting made changes -
          Workflow no-reopen-closed [ 12373213 ] no-reopen-closed, patch-avail [ 12377419 ]
          Nguyen Quoc Mai made changes -
          Nguyen Quoc Mai made changes -
          Nguyen Quoc Mai made changes -
          Affects Version/s 0.5.0 [ 12311939 ]
          Status Open [ 1 ] Patch Available [ 10002 ]
          Doug Cutting made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          Doug Cutting made changes -
          Assignee Doug Cutting [ cutting ]
          Anurag Sharma made changes -
          Attachment fuse-j-hadoopfs-0.1.zip [ 12370867 ]
          Attachment fuse-j-patch.zip [ 12370868 ]
          Anurag Sharma made changes -
          Attachment fuse-j-patch.zip [ 12370868 ]
          Pete Wyckoff made changes -
          Attachment fuse_dfs.c [ 12371393 ]
          Anurag Sharma made changes -
          Attachment fuse-j-hadoopfs-0.1.zip [ 12370867 ]
          Anurag Sharma made changes -
          Attachment fuse-j-hadoopfs-03.tar.gz [ 12371549 ]
          Pete Wyckoff made changes -
          Attachment fuse-dfs.tar.gz [ 12371630 ]
          Pete Wyckoff made changes -
          Attachment fuse_dfs.c [ 12373953 ]
          Sami Siren made changes -
          Attachment Makefile [ 12374201 ]
          Pete Wyckoff made changes -
          Attachment fuse-dfs.tar.gz [ 12375023 ]
          Pete Wyckoff made changes -
          Attachment fuse-dfs.tar.gz [ 12375113 ]
          Craig Macdonald made changes -
          Attachment fuse_dfs.sh [ 12376027 ]
          Pete Wyckoff made changes -
          Attachment fuse_dfs.c [ 12376068 ]
          Craig Macdonald made changes -
          Comment [ Hi Pete,

          I will try the newer version tomorrow when @work. I note that fi->fh isn't used or set in dfs_read in your latest version. Could we set it in dfs_open for O_RDONLY, and then use it if available?

          I'm not clear on the semantics of hdfsPread - does it assume that offset is after previous offset?
          If so then we need to check that the current read on a file is strictly after the previous read for a previously open FH to be of use - hdfsTell could be of use here.

          Thanks

          Craig ]
          Pete Wyckoff made changes -
          Attachment fuse_dfs.c [ 12376074 ]
          Pete Wyckoff made changes -
          Attachment fuse-dfs.tar.gz [ 12376075 ]
          Pete Wyckoff made changes -
          Attachment fuse-dfs.tar.gz [ 12376076 ]
          Doug Cutting made changes -
          Assignee Doug Cutting [ cutting ] Pete Wyckoff [ wyckoff ]
          Craig Macdonald made changes -
          Attachment fuse_dfs.c [ 12378260 ]
          Pete Wyckoff made changes -
          Attachment fuse_dfs.tar.gz [ 12379896 ]
          Pete Wyckoff made changes -
          Attachment patch.txt [ 12379898 ]
          Pete Wyckoff made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Release Note contrib package for mounting HDFS on any platform that supports FUSE.
          Pete Wyckoff made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          Pete Wyckoff made changes -
          Affects Version/s 0.5.0 [ 12311939 ]
          Status Open [ 1 ] Patch Available [ 10002 ]
          Pete Wyckoff made changes -
          Attachment patch.txt [ 12380203 ]
          Pete Wyckoff made changes -
          Description tool to mount dfs on linux tool to mount dfs on Unix or any OS that supports FUSE
          Environment linux only OSs that support FUSE. Includes Linux, MacOSx, OpenSolaris... http://fuse.sourceforge.net/wiki/index.php/OperatingSystems
          Pete Wyckoff made changes -
          Status Patch Available [ 10002 ] In Progress [ 3 ]
          Pete Wyckoff made changes -
          Status In Progress [ 3 ] Open [ 1 ]
          Pete Wyckoff made changes -
          Status Open [ 1 ] In Progress [ 3 ]
          Pete Wyckoff made changes -
          Status In Progress [ 3 ] Patch Available [ 10002 ]
          Pete Wyckoff made changes -
          Attachment patch2.txt [ 12380454 ]
          Pete Wyckoff made changes -
          Status Patch Available [ 10002 ] In Progress [ 3 ]
          Pete Wyckoff made changes -
          Status In Progress [ 3 ] Patch Available [ 10002 ]
          Pete Wyckoff made changes -
          Attachment patch3.txt [ 12380975 ]
          Pete Wyckoff made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          Pete Wyckoff made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Pete Wyckoff made changes -
          Attachment patch4.txt [ 12381047 ]
          Pete Wyckoff made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          Pete Wyckoff made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Pete Wyckoff made changes -
          Attachment patch4.txt [ 12381125 ]
          Doug Cutting made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          Doug Cutting made changes -
          Attachment HADOOP-4.patch [ 12381139 ]
          Doug Cutting made changes -
          Attachment HADOOP-4.patch [ 12381140 ]
          Doug Cutting made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Pete Wyckoff made changes -
          Attachment patch5.txt [ 12381153 ]
          Pete Wyckoff made changes -
          Attachment patch5.txt [ 12381153 ]
          Pete Wyckoff made changes -
          Attachment patch4.txt [ 12381154 ]
          Pete Wyckoff made changes -
          Assignee Pete Wyckoff [ wyckoff ] Prachi Gupta [ prachi.gpt ]
          Status Patch Available [ 10002 ] Open [ 1 ]
          Pete Wyckoff made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Pete Wyckoff made changes -
          Attachment patch5.txt [ 12381260 ]
          Doug Cutting made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          Pete Wyckoff made changes -
          Assignee Prachi Gupta [ prachi.gpt ] Raghu Angadi [ rangadi ]
          Status Open [ 1 ] Patch Available [ 10002 ]
          Pete Wyckoff made changes -
          Attachment patch6.txt [ 12381278 ]
          Raghu Angadi made changes -
          Assignee Raghu Angadi [ rangadi ]
          Doug Cutting made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          Doug Cutting made changes -
          Attachment HADOOP-4.patch [ 12381349 ]
          Doug Cutting made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Pete Wyckoff made changes -
          Attachment HADOOP-4.patch [ 12381458 ]
          Pete Wyckoff made changes -
          Status Patch Available [ 10002 ] In Progress [ 3 ]
          Pete Wyckoff made changes -
          Attachment HADOOP-4.patch [ 12381459 ]
          Pete Wyckoff made changes -
          Assignee Pete Wyckoff [ wyckoff ]
          Pete Wyckoff made changes -
          Status In Progress [ 3 ] Patch Available [ 10002 ]
          Pete Wyckoff made changes -
          Attachment HADOOP-4.patch [ 12381467 ]
          Doug Cutting made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          Doug Cutting made changes -
          Attachment HADOOP-4.patch [ 12381541 ]
          Doug Cutting made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Pete Wyckoff made changes -
          Attachment patch6.txt [ 12382413 ]
          Doug Cutting made changes -
          Fix Version/s 0.18.0 [ 12312972 ]
          Status Patch Available [ 10002 ] Resolved [ 5 ]
          Resolution Fixed [ 1 ]
          Doug Cutting made changes -
          Component/s contrib/fuse-dfs [ 12312376 ]
          Component/s fs [ 12310689 ]
          Pete Wyckoff made changes -
          Release Note contrib package for mounting HDFS on any platform that supports FUSE [replaced by the full usage documentation, duplicating the Description above]

          Robert Chansler made changes -
          Description tool to mount dfs on Unix or any OS that supports FUSE [replaced by the full usage documentation, as in the Description above]
          Release Note [the full usage documentation replaced by the one-paragraph summary shown under Release Note above]

          Nigel Daley made changes -
          Status Resolved [ 5 ] Closed [ 6 ]
          Pete Wyckoff made changes -
          Description [updated: dropped the note that writes are disabled pending HADOOP-1700 (file appends) and noted instead that actual writes require HADOOP-3485]

          Pete Wyckoff made changes -
          Description [updated: added the pointer to http://wiki.apache.org/hadoop/MountableHDFS and added -Dlibhdfs=1 to the build command]

          Owen O'Malley made changes -
          Component/s contrib/fuse-dfs [ 12312376 ]

            People

            • Assignee:
              Pete Wyckoff
              Reporter:
              John Xing
            • Votes:
              4
              Watchers:
              11