Hadoop Common / HADOOP-3344

libhdfs: always builds 32bit, even when x86_64 Java used

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.20.0
    • Component/s: build
    • Labels:
      None
    • Environment:

      x86_64 linux, x86_64 Java installed

    • Hadoop Flags:
      Incompatible change, Reviewed
    • Release Note:
      Changed build procedure for libhdfs to build correctly for different platforms. Build instructions are in the Jira item.

      Description

      The makefile for libhdfs is hard-coded to compile 32bit libraries. It should instead compile for the architecture of the Java pointed to by JAVA_HOME.

      The relevant lines are:

      LDFLAGS = -L$(JAVA_HOME)/jre/lib/$(OS_ARCH)/server -ljvm -shared -m32 -Wl,-x
      CPPFLAGS = -m32 -I$(JAVA_HOME)/include -I$(JAVA_HOME)/include/$(PLATFORM)

      $OS_ARCH can be, for example, amd64 if you're using a 64bit Java on the x86_64 platform. So while gcc will try to link against the correct libjvm.so, linking fails because libhdfs itself is being built 32bit (because of -m32):

           [exec] /usr/bin/ld: skipping incompatible /usr/java64/latest/jre/lib/amd64/server/libjvm.so when searching for -ljvm
           [exec] /usr/bin/ld: cannot find -ljvm
           [exec] collect2: ld returned 1 exit status
           [exec] make: *** [/root/def/hadoop-0.16.3/build/libhdfs/libhdfs.so.1] Error 1
      

      The solution should be to specify -m32 or -m64 depending on the os.arch detected.

      There are 3 cases to check:

      • 32bit OS, 32bit java => libhdfs should be built 32bit, specify -m32
      • 64bit OS, 32bit java => libhdfs should be built 32bit, specify -m32
      • 64bit OS, 64bit java => libhdfs should be built 64bit, specify -m64
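      The case analysis above can be sketched in shell. This is illustrative only (the committed patch implements the detection in autoconf/m4, and the function name here is made up for the example): any 64bit JVM architecture maps to -m64, everything else falls back to -m32.

```shell
# Illustrative sketch: map the JVM's os.arch value to a -m32/-m64
# compiler flag. The real build does this via autoconf/m4; the
# function name and arch list here are assumptions for the example.
pick_model_flag() {
  case "$1" in
    amd64|x86_64|ia64|sparcv9|ppc64) echo "-m64" ;;
    *)                               echo "-m32" ;;
  esac
}

pick_model_flag amd64   # prints -m64
pick_model_flag i386    # prints -m32
```

Note that the 64bit-OS/32bit-Java case falls out naturally: a 32bit JVM reports a 32bit os.arch (e.g. i386) even on a 64bit kernel, so the default branch picks -m32.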
      1. HADOOP-3344.patch
        1.18 MB
        Giridharan Kesavan
      2. HADOOP-3344.patch
        1.18 MB
        Giridharan Kesavan
      3. HADOOP-3344.patch
        1.18 MB
        Giridharan Kesavan
      4. HADOOP-3344.v0.patch
        15 kB
        Craig Macdonald
      5. HADOOP-3344.v1.patch
        25 kB
        Craig Macdonald
      6. HADOOP-3344.v3.patch
        1.18 MB
        Giridharan Kesavan
      7. HADOOP-3344-v2.patch
        3.00 MB
        Giridharan Kesavan

          Activity

          Doug Cutting added a comment -

          Owen has argued that libhdfs's build should be re-written to use autoconf.

          https://issues.apache.org/jira/browse/HADOOP-1410?focusedCommentId=12497947#action_12497947

          We probably need a separate issue for that. Should we fix this independently?

          Craig Macdonald added a comment - - edited

           This was an issue for me when testing fuse-dfs - FUSE is installed to match the kernel, so on a 64bit kernel I have to use 64bit Java and 64bit libhdfs. If an autoconf build system could be ready for 0.18, then perhaps we should try to move to one?

          Doug Cutting added a comment -

          Craig: it doesn't sound like anyone is yet working on an autoconf build for libhdfs, but, yes, this would be a welcome contribution for Hadoop 0.18.

          Craig Macdonald added a comment - - edited

           Ok, this is a first attempt at an autotools build system.

          Notes/issues:

           • It doesn't yet work. I haven't figured out how to get .so shared libraries built, so building the tests fails.
           • The Java-related autoconf macros came from Apache Commons Daemon. Hope this is OK?
          • I still need to test on other linux architectures. Will need other volunteers for alternative platforms.
          • Requires autoconf 2.61, for the AC_TYPE_INT64_T and similar macros

          To build configure & Makefile, do:

          autoreconf;  libtoolize && aclocal -I ../utils/m4/ &&  automake -a --foreign && autoconf
          

          For normal building:

          ./configure && make
          

          Can anyone provide assistance in completing this?

          Doug Cutting added a comment -

          > Can anyone provide assistance in completing this?

          Sorry, I have no experience with autotools. Anyone else?

          Craig Macdonald added a comment -

           This is an improved patch: libhdfs now builds OK, but not yet as a .so file. Added doc and test targets, and updated build.xml to call configure.

          Will test on 64bit platforms and improve. Work in progress.

          Pete Wyckoff added a comment - - edited

          I'm getting this error —

          relocation against `a local symbol' can not be used when making a shared object; recompile with -fPIC

           This seems to have been a problem for others on amd64 too, but I don't know what the fix is other than adding -fPIC to the library build, and that may have implications on non-amd64 platforms.

          Craig Macdonald added a comment -

           -fPIC rings a bell on amd64. What are the disadvantages to adding -fPIC (a) for amd64 only, (b) for all platforms?

          Pete Wyckoff added a comment -

          What are the disadvantages to adding -fPIC (a) for amd64 only, (b) for all platforms?

           Just the extra indirection required for position independence. I guess on another platform, if you are going static, it could cost something, although not much - especially here.

           I had to remove the -shared from configure.ac, add -fPIC to the CPPFLAGS, and also (weirdly) add hdfs_read_LDADD = libhdfs.la
           for each of read, write and test; I thought it would be added automatically. Other than that, everything works for me on amd64.

          Giridharan Kesavan added a comment - - edited

           Here is the v2 version of the patch, which also requires autoconf-2.61. This is an improved version over v1 submitted by Craig. Many thanks to Craig.

           I've used a small piece of Java code along with the m4 macros to detect the JVM arch.

          getArch.java
          class getArch {
            public static void main(String []args) {
               System.out.println(System.getProperty("sun.arch.data.model", "32"));
            }
          }
          

           If somebody can suggest a better way, I would be more than happy to implement it.

           This patch addresses all three scenarios:

          • 32bit OS, 32bit java => libhdfs should be built 32bit, specify -m32
          • 64bit OS, 32bit java => libhdfs should be built 32bit, specify -m32
          • 64bit OS, 64bit java => libhdfs should be built 64bit, specify -m64
          To Build libhdfs.so use    ant compile-c++-libhdfs -Dcompile.c++=true
          To Test  libhdfs.so use    ant test-c++-libhdfs -Dcompile.c++=true 
          

           I have tested this patch on amd64 with 32bit and 64bit JVMs.
           Please help by testing on other platforms as necessary and let me know your comments.

          Thanks

          Arun C Murthy added a comment -

           > I've used a small piece of Java code along with the m4 macros to detect the JVM arch.

           Giridharan, you can pass the requisite Java properties (sun.arch.data.model etc.) straight from build.xml without adding getArch. Please take a look at the compile-core-native target in hadoop/trunk/build.xml; we use that for compiling the native Hadoop compression libraries (libhadoop.so).
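           As a side note (an assumption for illustration, not part of the patch): the same property can also be read outside Ant, since recent JVMs print system properties via -XshowSettings:properties. A configure script could parse that output along these lines, where the extraction function is a made-up name:

```shell
# Sketch only: pull sun.arch.data.model out of JVM settings output.
# The committed build passes the property from build.xml instead;
# extract_data_model is an illustrative helper, not a real tool.
extract_data_model() {
  sed -n 's/.*sun\.arch\.data\.model *= *//p' | head -n 1
}

# Real invocation would be:
#   java -XshowSettings:properties -version 2>&1 | extract_data_model
echo "    sun.arch.data.model = 64" | extract_data_model   # prints 64
```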

          Giridharan Kesavan added a comment -

           Version v3 has the changes incorporated; it no longer uses the Java snippet to find the JVM arch.
           Thanks to Arun.

           Please help me in testing this v3 patch; I've tested it on amd64 with 32bit and 64bit Java.

          Thanks

          Craig Macdonald added a comment -

           I have tested on Mac OS X 10.4, 32bit PowerPC. Will test on Linux in due course (Mac OS X 10.4 is now an unsupported platform for trunk, as Java 6 is not provided by default).

          Some minor comments:

           1. Mac OS X does not have error.h, so my compile fails. hdfsJniHelper.c includes error.h, added by HADOOP-3549. As far as I can see, it is superfluous: at least on Linux, hdfsJniHelper.c uses no functions from error.h. Should I reopen HADOOP-3549, or file a new issue? Note that error.h should not be confused with the standard errno.h.

           2. Will create-c++-configure be called for every compile? It is customary to include the generated configure/Makefile, as people compiling may not have autoconf/automake.

           3. (Related to 2.) Your patch includes configure/Makefile etc., which is great for testing the patch at the moment, but the version committed should NOT include these.

          Craig

          Giridharan Kesavan added a comment -

          Thanks for your comments Craig.

          2. Will create-c++-configure be called for every compile? It customary to include the generated configure/Makefile, as people compiling need not have autoconf/automake.

           Nope, it doesn't get called.
           During compilation we just call configure and then make install.

          3. (related to 2) Your patch includes configure/Makefile etc, which is great for testing the patch atm, but the version committed should NOT include these.

           I did "make distclean" before creating this patch. I understand this patch has an empty Makefile, which we have to delete after submitting the patch to svn.

          Please correct me if my understanding is wrong anywhere.

          Thanks
          Giri

          Giridharan Kesavan added a comment - - edited

          Craig,
           Any update on testing with Linux? Based on that I can make the patch available.
          thanks,
          Giri

          Giridharan Kesavan added a comment -

          steps to build libhdfs

           ant compile -Dcompile.c++=true -Dlibhdfs=true

           With the libhdfs=true flag, libhdfs is built along with the other C++ components.
          

           This patch addresses all three scenarios:

          • 32bit OS, 32bit java => libhdfs should be built 32bit, specify -m32
          • 64bit OS, 32bit java => libhdfs should be built 32bit, specify -m32
          • 64bit OS, 64bit java => libhdfs should be built 64bit, specify -m64

          Here is the local test-patch result

           [exec] +1 overall.
           [exec] +1 @author. The patch does not contain any @author tags.
           [exec] +1 tests included. The patch appears to include 10 new or modified tests.
           [exec] +1 javadoc. The javadoc tool did not generate any warning messages.
           [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings.
           [exec] +1 findbugs. The patch does not introduce any new Findbugs warnings.
           [exec] +1 Eclipse classpath. The patch retains Eclipse classpath integrity.
           [exec] ======================================================================
           [exec] Finished build.
           [exec] ======================================================================

          Thanks,
          Giri

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12395943/HADOOP-3344.patch
          against trunk revision 726129.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 10 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

          -1 core tests. The patch failed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3734/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3734/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3734/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3734/console

          This message is automatically generated.

          Giridharan Kesavan added a comment -

           The reason for the failure is that autoconf-2.61 is missing on the build server.
           I'm going to resubmit the patch.

          Giridharan Kesavan added a comment -

           With this new patch, the c++-create-configure target is gated by the libhdfs flag. As a result, the build doesn't touch libhdfs's configure, compile, or test targets unless the flags are set,
           i.e. -Dcompile.c++=true and -Dlibhdfs=true.
          -Giri

          Giridharan Kesavan added a comment -

           The build doesn't do anything to the libhdfs targets unless the two flags are set. This is implemented because libhdfs requires autoconf-2.61 and the build server seems to have autoconf-2.59.

          Thanks,
          Giri

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12396056/HADOOP-3344.patch
          against trunk revision 726129.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 10 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

          +1 core tests. The patch passed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3743/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3743/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3743/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3743/console

          This message is automatically generated.

          Giridharan Kesavan added a comment -

           Looks like the patch requires some merging, as there are some conflicts with the Ivy porting.
           Resubmitting the patch.
          -giri

          Giridharan Kesavan added a comment -

          thanks,
          giri

          Nigel Daley added a comment -

          I just committed this. Thanks Giri!

          Nigel Daley added a comment -

          Incompatible due to:

           1) autoconf-2.61 is now required to compile
          2) location of libhdfs is now c++/<os_osarch_jvmdatamodel>/lib
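           The new install location can be sketched as follows. Everything here is an assumption for illustration: the separator between the three components and the exact spelling of each are placeholders matching the <os_osarch_jvmdatamodel> template, while the real values come from the build's autoconf checks.

```shell
# Sketch: compose the libhdfs install dir in the new
# c++/<os_osarch_jvmdatamodel>/lib layout. Inputs and the "-"
# separator are illustrative; the actual build derives them
# from autoconf and the JVM's sun.arch.data.model property.
os=$(uname -s | tr '[:upper:]' '[:lower:]')   # e.g. linux
arch=$(uname -m)                              # e.g. x86_64
model=64                                      # JVM data model: 32 or 64
libdir="c++/${os}-${arch}-${model}/lib"
echo "$libdir"                                # e.g. c++/linux-x86_64-64/lib
```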

          Robert Chansler added a comment -

          Edit release note for publication.

           To build libhdfs, use the following command (make sure you have autoconf-2.61 installed):

           ant compile -Dcompile.c++=true -Dlibhdfs=true

           With the libhdfs=true flag, libhdfs is built along with the other C++ components, and the resulting .so file is installed in the c++/<os_osarch_jvmdatamodel>/lib directory.

           This patch addresses all three scenarios:

           • 32bit OS, 32bit java => libhdfs should be built 32bit, specify -m32
           • 64bit OS, 32bit java => libhdfs should be built 32bit, specify -m32
           • 64bit OS, 64bit java => libhdfs should be built 64bit, specify -m64

          Robert Chansler added a comment -

          Thanks, Craig, for the correction!


            People

            • Assignee:
              Giridharan Kesavan
              Reporter:
              Craig Macdonald
            • Votes:
              0
              Watchers:
              5
