Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: None
    • Labels: None
    • Target Version/s:

      Description

      Rather than have a half-dozen redefinitions, custom extensions, etc., we should move them all to one location so that the CMake environment is consistent between the various native components.

      1. HADOOP-12036.001.patch
        31 kB
        Alan Burlison
      2. HADOOP-12036.002.patch
        31 kB
        Alan Burlison
      3. HADOOP-12036.004.patch
        31 kB
        Alan Burlison
      4. HADOOP-12036.005.patch
        31 kB
        Alan Burlison

        Issue Links

          Activity

          alanburlison Alan Burlison added a comment -

          I agree. As I'm already working on trying to come up with a good solution for this as part of HADOOP-11985, I'm happy to take this issue if that's OK with you.

          aw Allen Wittenauer added a comment -

          Go for it. I've been too wrapped up in fixing test-patch.

          Off the top of my head, I was thinking these should probably all live somewhere in either dev-support or hadoop-common.

          alanburlison Alan Burlison added a comment -

          Prototype refactoring of hadoop-common CMakeLists.txt and JNIFlags.cmake into HadoopCommon.cmake and HadoopJNI.cmake

          alanburlison Alan Burlison added a comment -

          I've attached a prototype refactoring of common code from both hadoop-common and other CMakeLists.txt files that duplicate code from hadoop-common into two new files:

          • HadoopCommon.cmake - this contains common configuration (e.g. compiler flags) as well as common macros and functions. I've prefixed all the macros and functions with "hadoop_" to make them easier to spot in any CMakeLists.txt files that use them
          • HadoopJNI.cmake - this contains the JNI configuration code, with specific sections for Linux and Solaris

          This isn't fully tested so I'm not submitting it as a patch, but I did want to get feedback on the overall approach before going too far down this path. In my updated CMakeLists.txt for hadoop-common I include HadoopCommon.cmake at the start of the file:

          list(APPEND CMAKE_MODULE_PATH ${CMAKE_SOURCE_DIR}/..)
          include(HadoopCommon)

          Then, after setting the compiler flags etc. appropriately, I include HadoopJNI to configure the JNI settings.
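          As a sketch of the kind of helper such a shared module might contain (the function name and body here are illustrative, not taken from the patch):

          ```cmake
          # Illustrative example of a "hadoop_"-prefixed helper that a shared
          # HadoopCommon.cmake could provide; the real macros in the patch may
          # differ in name and behaviour.
          function(hadoop_add_dual_library LIBNAME)
              # Build shared and static variants from the same sources, giving
              # the static archive the same output name as the shared library.
              add_library(${LIBNAME} SHARED ${ARGN})
              add_library(${LIBNAME}_static STATIC ${ARGN})
              set_target_properties(${LIBNAME}_static PROPERTIES OUTPUT_NAME ${LIBNAME})
          endfunction()
          ```

          The prefix makes it obvious at a glance, in any consuming CMakeLists.txt, which functions come from the shared Hadoop modules rather than from CMake itself.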

          cmccabe Colin P. McCabe added a comment -

          I like the idea of having some utility file which all our top-level CMakeLists.txt files can source. Also as I commented on HADOOP-11987, we should get rid of the duplicate FindJNI.cmake file in the mapreduce directory.

          However, we have libraries and programs that don't need or want to link to JNI (hdfs native client, libhadooppipes, etc.). We need to make sure that we don't add anything to CFLAGS, CXXFLAGS, or LDFLAGS that would be disruptive to them or that would force them to link against libjvm.so.

          Alan, why don't you post a patch that has the Solaris support and the cleanup you want, and we can review it here.

          cmccabe Colin P. McCabe added a comment -

          Also, you missed setting the _REENTRANT, _LARGEFILE, and _GNU_SOURCE macros, which are needed on Linux.

          alanburlison Alan Burlison added a comment -

          A prototype showing the suggested approach is attached above.

          I'd figured that the general config and the JNI config probably needed to be separate, so I've split the functionality into two files: HadoopCommon, which is for all the stuff that isn't JNI-related, and HadoopJNI, which is just the JNI configuration.

          However, the split may not be quite right: at present CMAKE_C_FLAGS, CMAKE_CXX_FLAGS and CMAKE_LD_FLAGS are set in HadoopJNI; that bit probably needs moving into HadoopCommon.

          alanburlison Alan Burlison added a comment -
          • _REENTRANT is, I believe, better replaced by -pthread for gcc, as that ensures that both the appropriate preprocessor and compiler flags are set. However, I wasn't sure that all the native code in Hadoop was threaded, so I felt it was better to explicitly specify this in each individual CMakeLists.txt file
          • -D_LARGEFILE_SOURCE seems to be deprecated according to http://man7.org/linux/man-pages/man7/feature_test_macros.7.html: "New programs should not employ this macro; defining _XOPEN_SOURCE as just described or defining _FILE_OFFSET_BITS with the value 64 is the preferred mechanism to achieve the same result". _FILE_OFFSET_BITS=64 is already in the current CFLAGS but I believe even that can't be made a global option, as noted in the CMakeLists.txt for hadoop-yarn-project: "note: can't enable -D_LARGEFILE: see MAPREDUCE-4258"
          • _GNU_SOURCE isn't appropriate as a compiler command-line flag, as discussed in HADOOP-11997
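          The _FILE_OFFSET_BITS=64 mechanism referred to above can be illustrated with a small C program (a sketch: on a 64-bit glibc system off_t is 64-bit either way, but the macro is what guarantees it on 32-bit builds):

          ```c
          /* _FILE_OFFSET_BITS must be defined before any system header is
             included, as feature_test_macros(7) requires. */
          #define _FILE_OFFSET_BITS 64

          #include <stdio.h>
          #include <sys/types.h>

          int main(void) {
              /* With _FILE_OFFSET_BITS=64 glibc maps off_t to a 64-bit type,
                 so large files are handled even in 32-bit builds. */
              printf("sizeof(off_t) = %zu\n", sizeof(off_t));
              return 0;
          }
          ```
          
          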
          aw Allen Wittenauer added a comment -

          Linking HADOOP-11929, since any target/profile/setup changes will impact that code as well.

          cmccabe Colin P. McCabe added a comment -

          _REENTRANT is, I believe, better replaced by -pthread for gcc, as that ensures that both the appropriate preprocessor and compiler flags are set. However, I wasn't sure that all the native code in Hadoop was threaded, so I felt it was better to explicitly specify this in each individual CMakeLists.txt file

          All the native code is threaded. Please set either -pthread or _REENTRANT (it seems that -pthread simply sets _REENTRANT, on gcc at least).

          -D_LARGEFILE_SOURCE seems to be deprecated according to http://man7.org/linux/man-pages/man7/feature_test_macros.7.html: "New programs should not employ this macro; defining _XOPEN_SOURCE as just described or defining _FILE_OFFSET_BITS with the value 64 is the preferred mechanism to achieve the same result". _FILE_OFFSET_BITS=64 is already in the current CFLAGS but I believe even that can't be made a global option, as noted in the CMakeLists.txt for hadoop-yarn-project: "note: can't enable -D_LARGEFILE: see MAPREDUCE-4258"

          Ha, I filed that jira years ago. It's sad to see it hasn't been fixed yet.

          Anyway, we want large file support for every other program that gets compiled. It needs to be in this common code you are working on. hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt can simply set a variable which the common code will check, disabling large-file support if it is set.
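          The variable-check approach could look something like this (the variable name is illustrative only):

          ```cmake
          # In the nodemanager's CMakeLists.txt (sketch): opt out before
          # including the common module, per MAPREDUCE-4258.
          set(HADOOP_DISABLE_LARGEFILE TRUE)
          include(HadoopCommon)

          # In the common module (sketch): enable large-file support unless
          # the including project has opted out.
          if(NOT HADOOP_DISABLE_LARGEFILE)
              add_definitions(-D_FILE_OFFSET_BITS=64)
          endif()
          ```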

          _GNU_SOURCE isn't appropriate as a compiler command-line flag, as discussed in HADOOP-11997

          Please set _GNU_SOURCE on Linux. On other OSes you can set whatever you need to get all the features of that OS.

          alanburlison Alan Burlison added a comment -
          • What about the test_bulk_crc32 app, for example: is that multithreaded? If it's OK to use -pthread on that then I think -pthread can safely be added to the global flags, as you suggest.
          • I've put in a modification that allows you to preset a variable, GCC_SHARED_FLAGS. If it's not set it currently defaults to "-g -O2 -Wall", if it is OK to do so I'll change that to "-g -O2 -Wall -pthread -D_LARGEFILE_SOURCE", then the hadoop-yarn-project CMakeLists.txt can simply set it appropriately before including HadoopCommon.cmake
          • Sorry to sound like a stuck record but you still haven't given me what I consider to be a satisfactory reason why _GNU_SOURCE should be set as a compiler command-line flag, even on Linux. Just to be completely clear, I'm not intending to just remove it and have stuff blow up on Linux, I'm intending to go through all the source and add the necessary #define/#undef blocks so that all the Linux-specific code continues to work. Stopping this practice will help keep the source portable, as I've explained at length in HADOOP-11997.
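          The preset-variable pattern described in the second bullet might be sketched as follows (the default value shown is the proposed one; the surrounding logic is illustrative):

          ```cmake
          # Sketch: honour a caller-supplied GCC_SHARED_FLAGS, falling back to
          # the proposed default when the including CMakeLists.txt sets nothing.
          if(NOT DEFINED GCC_SHARED_FLAGS)
              set(GCC_SHARED_FLAGS "-g -O2 -Wall -pthread -D_LARGEFILE_SOURCE")
          endif()
          set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${GCC_SHARED_FLAGS}")
          ```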
          cmccabe Colin P. McCabe added a comment -

          What about the test_bulk_crc32 app, for example: is that multithreaded? If it's OK to use -pthread on that then I think -pthread can safely be added to the global flags, as you suggest.

          Some test programs aren't multithreaded, but there is no harm in passing -pthread in all cases. The overhead is extremely minimal, and it brings the test programs closer to what they're supposed to test (which is real Hadoop performance/correctness in a multi-threaded environment).

          The main thing to keep in mind is that we don't want JNI linked into programs unless they actually use JNI. But that shouldn't be an issue based on my reading of the patch.

          I've put in a modification that allows you to preset a variable, GCC_SHARED_FLAGS. If it's not set it currently defaults to "-g -O2 -Wall", if it is OK to do so I'll change that to "-g -O2 -Wall -pthread -D_LARGEFILE_SOURCE", then the hadoop-yarn-project CMakeLists.txt can simply set it appropriately before including HadoopCommon.cmake

          Doesn't that mean that if you change -O2 to -O3 in the common code, yarn will be unaffected? I would prefer to override only the thing that needs to be different, aka _LARGEFILE_SOURCE.

          Sorry to sound like a stuck record but you still haven't given me what I consider to be a satisfactory reason why _GNU_SOURCE should be set as a compiler command-line flag, even on Linux. Just to be completely clear, I'm not intending to just remove it and have stuff blow up on Linux, I'm intending to go through all the source and add the necessary #define/#undef blocks so that all the Linux-specific code continues to work. Stopping this practice will help keep the source portable, as I've explained at length in HADOOP-11997.

          Everyone agrees that not setting _GNU_SOURCE (or setting it in a more limited way) won't enforce portability. For example, you still get the non-POSIX definition of strerror_r. You still have all the actually non-portable things in the places they are needed (i.e. cgroups, etc.).

          On the other hand, everyone agrees that having a nightly build on Solaris / BSD / whatever will enforce portability.

          From experience, trying to conditionally set _GNU_SOURCE is a huge pain in the rear for everyone, leads to mysterious compile errors, and makes life difficult for new contributors. There's just no point.

          If you put the energy and effort that is going into worrying about _GNU_SOURCE into creating a Solaris or BSD nightly build, you would have a guarantee that portability would not regress. It is a waste of time to worry about _GNU_SOURCE, although if you want to avoid setting it on Solaris (just to keep things "tidy") that is fine.

          aw Allen Wittenauer added a comment -

          On the other hand, everyone agrees that having a nightly build on Solaris / BSD / whatever will enforce portability.

          I think it's funny that you argued against depending upon the nightly for Linux builds just a few hours ago in another JIRA.

          cmccabe Colin P. McCabe added a comment -

          Allen, we both know very well that having a nightly build for Solaris / BSD / whatever will be orders of magnitude better than our current situation, where someone just builds the stuff on a non-Linux platform every few months and notices that it's broken. If interest in a specific non-Linux platform grows to the point where it makes sense, we can add it to the precommit build. I think we're nowhere near that point for any of the non-Linux platforms.

          alanburlison Alan Burlison added a comment -

          JNI stuff will only be pulled in if you explicitly include HadoopJNI.cmake.

          If you want to globally change -02 to -O3 then YARN would have to be modified explicitly. Without knowing exactly what flags you are going to want to override in advance I can't see how else you can do it, unless you have a better idea.

          I agree that not setting _GNU_SOURCE won't enforce portability, but simply doing Solaris test builds won't enforce portability either. For example, the code covered by HADOOP-7824 is wrong but compiles on Solaris just fine. It's not clear whether nightly Solaris builds would actually catch that issue.

          Your point about strerror_r is an interesting one as the POSIX and Linux ones aren't compatible. There are 475 references to errno in the native code and if each of them required the use of strerror_r then someone would have had to go through all those places and wrap them in a platform-independent version. Fortunately that isn't actually necessary but it does illustrate the potential scale of the problem.

          Also, the problems caused by globally defining _GNU_SOURCE are more insidious than just compile-time errors, because in some cases _GNU_SOURCE actually changes run-time behaviour as well, e.g. in basename(3). In that case you'll potentially get mysterious behaviour at run-time on non-Linux platforms. Strictly speaking, if the source is intended to be portable then _POSIX_C_SOURCE should be set to 200112L, but I'm not suggesting going that far.
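          The basename(3) case is a concrete example of such a run-time change: under _GNU_SOURCE, <string.h> supplies the GNU basename, which never modifies its argument and returns an empty string for a path with a trailing slash, whereas the POSIX version from <libgen.h> strips trailing slashes (and may modify the buffer). A minimal illustration of the POSIX behaviour:

          ```c
          #include <libgen.h>   /* POSIX basename; the GNU one lives in
                                   <string.h> and needs _GNU_SOURCE */
          #include <stdio.h>

          int main(void) {
              /* POSIX basename may modify its argument, so pass a writable
                 copy. It strips trailing slashes: "/usr/lib/" -> "lib". The
                 GNU basename would return "" for the same input. */
              char path[] = "/usr/lib/";
              printf("%s\n", basename(path));
              return 0;
          }
          ```
          
          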

          I've had a look at hadoop-common and as far as I can tell, _GNU_SOURCE is only needed to get the extra bits in dlfcn.h, it isn't actually needed anywhere else. It may well be that your concerns about lots of breakage being caused by not globally defining _GNU_SOURCE are not borne out, but I'd have to do more work to confirm that.

          I'm looking at what's involved in getting the Solaris Jenkins machine(s) working again with a current version of Solaris, but that's going to take a little time to investigate so I don't have anything to report yet. However, be assured it is on my list.

          cmccabe Colin P. McCabe added a comment -

          JNI stuff will only be pulled in if you explicitly include HadoopJNI.cmake.

          OK.

          If you want to globally change -02 to -O3 then YARN would have to be modified explicitly. Without knowing exactly what flags you are going to want to override in advance I can't see how else you can do it, unless you have a better idea.

          Why not set this in your proposed HadoopCommon.cmake file?

          I agree that not setting _GNU_SOURCE won't enforce portability, but simply doing Solaris test builds won't enforce portability either. For example, the code covered by HADOOP-7824 is wrong but compiles on Solaris just fine. It's not clear whether nightly Solaris builds would actually catch that issue.

          There is always more we can do. Compiling things is not enough to find all errors. Testing things is not enough to find all errors. Even running things in production is not enough to find all errors. My point was simply that the issues you find with a nightly Solaris build will be a strict superset of what you would find with any manipulation of _GNU_SOURCE, making worrying about _GNU_SOURCE quite a waste of time. Failing to set it can only hurt by creating confusion about which function is being called. It never helps anything.

          Your point about strerror_r is an interesting one as the POSIX and Linux ones aren't compatible. There are 475 references to errno in the native code and if each of them required the use of strerror_r then someone would have had to go through all those places and wrap them in a platform-independent version. Fortunately that isn't actually necessary but it does illustrate the potential scale of the problem.

          We did wrap all those places that use strerror_r in a platform-independent version. The platform-independent version is called terror. I wrote it because of the frustrations of strerror_r on Linux. Also, the strerror_r interface is cumbersome even when the correct version is available, since it requires a buffer to be passed in.
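          A minimal sketch of such a wrapper (not the actual terror implementation, whose name and signature may differ) that always writes into a caller-supplied buffer:

          ```c
          #include <errno.h>
          #include <stdio.h>
          #include <string.h>

          /* Hypothetical thread-safe error-string helper in the spirit of
             Hadoop's terror(); the real function may differ. Without
             _GNU_SOURCE, glibc exposes the XSI strerror_r, which returns 0 on
             success and fills the supplied buffer. */
          static const char *errno_to_string(int errnum, char *buf, size_t buflen) {
              if (strerror_r(errnum, buf, buflen) != 0)
                  snprintf(buf, buflen, "Unknown error %d", errnum);
              return buf;
          }

          int main(void) {
              char buf[128];
              printf("%s\n", errno_to_string(ENOENT, buf, sizeof(buf)));
              return 0;
          }
          ```
          
          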

          cmccabe Colin P. McCabe added a comment -

          Hi Alan,

          Is there anything I can do to help here?

          If you are busy with other things, I can post a patch consolidating the cmake extensions for you to review. That way we could make incremental progress here.

          alanburlison Alan Burlison added a comment -

          Hi Colin, thanks for the kind offer.

          I've finished the CMake stuff with the exception of dropping the support for 32-bit Solaris builds and enforcing 64-bit only. I've also gone through all the other CMake scripts in the source tree and moved them over to the new CMake infrastructure. I'm currently running test builds on Linux, Solaris x86 & SPARC to confirm that everything still builds OK - the new CMake stuff seems fine but I have a little more cleanup to do to make everything -Wall clean. Once that's done I'll start submitting the patches for review. There's one for each affected Hadoop sub-project and they'll need to be applied in a specific order, although not necessarily all at once.

          I'm travelling most of this week so it's probably going to be next week before I submit the patches.

          cmccabe Colin P. McCabe added a comment -

          Thanks, Alan Burlison. It sounds like it might be a big change. We might need to think about how we'll test that ARM support hasn't regressed (VM? ask the ARM guys for help?)

          alanburlison Alan Burlison added a comment -

          Whilst there's quite a lot of change, much of it is cosmetic: factoring out common code and so on. The ARM support is self-contained and I've simply moved it into a new shared location rather than modifying it. As I said above, I've done the changes as one patch per project/subtree. The common changes are in hadoop-common-project so that patch needs applying first, but the other CMake infrastructure changes can be applied incrementally. I've left the old hadoop-common JNIFlags.cmake in place as it is used by other sub-projects. Once they have all been updated, the old JNIFlags.cmake can then be removed.

          There are two other categories of patch that aren't CMake related, those that make the existing code -Wall-clean and those that add Solaris support. Again, they are done on a per-subproject basis.

          cmccabe Colin P. McCabe added a comment - edited

          edit: nevermind, I see that you haven't posted a new patch

          cmccabe Colin P. McCabe added a comment -

          I think splitting out the patches to be -Wall clean would be a nice idea. We really should be -Wall clean now since we're passing the flag, but I think sometimes people don't look at the build output carefully enough. Also, there are additional warnings we could potentially enable.

          alanburlison Alan Burlison added a comment -

          Yes, I'm running some tests before I submit. I'm getting failures on Linux but I can't see yet if/how they are related to my changes.

          alanburlison Alan Burlison added a comment -

          Refactored CMake infrastructure for review. This patch introduces two new shared CMake files, HadoopCommon.cmake and HadoopJNI.cmake to hadoop-common-project which contain the refactored code, and it modifies hadoop-common-project's CMakeLists.txt to use the new infrastructure. The old JNIFlags.cmake file is left in place for the moment as other Hadoop components still require it until they are updated to use the new infrastructure (patches to follow). Once all the CMake infrastructure patches have been integrated, it can be removed.

          alanburlison Alan Burlison added a comment -

          I need to fine-tune the patch to cope with misfeatures in the standard CMake infrastructure. In the case of bzip2 detection, the compilation uses the -m64 flag but the bzip2 library being probed is passed to the compiler as a path rather than as -lbzip2, and the path being passed is the 32-bit library path rather than the 64-bit one. The test compile then fails because CMake thinks the library isn't present when it actually is.

          Also, even if -DREQUIRE_BZIP2 is passed to CMake on the command line and there's logic in the CMakeLists.txt which is supposed to fail the build when bzip2 isn't found, the build continues.

          I'll update the patches when I've figured out how best to work around the CMake deficiencies.

          aw Allen Wittenauer added a comment -

          Ha.

          This was one of the things I was going to test with the new cmake files. I've found lots of problems with the current bzip2 bits (regardless of platform) so I was hoping the new ones fixed them. I always worked around it by disabling the shared library check, since that seemed to be the cause of the issues.

          Thanks for working on this!

          cmccabe Colin P. McCabe added a comment -

          Thanks, Alan Burlison. Looks good.

          #
          # Solaris-specific JNI configuration.                                                                                       
          #   
          elseif(CMAKE_SYSTEM_NAME STREQUAL "SunOS")
          ...
              if(CMAKE_SYSTEM_PROCESSOR STREQUAL "i386")                                                                              
                  set(CMAKE_SYSTEM_PROCESSOR "amd64")                                                                                 
          ...
          

          Given that we are disallowing 32-bit builds on Solaris, is this necessary?

          # Set the shared compiler flags if using GCC.
          if(CMAKE_COMPILER_IS_GNUCC AND CMAKE_COMPILER_IS_GNUCXX)
              hadoop_add_compiler_flags("${GCC_SHARED_FLAGS}")
          endif()
          

          clang is a valid choice to build Hadoop. It supports all the same flags as gcc (for all practical purposes). I'm pretty sure Intel's compiler supports most of the gcc flags as well.

          In general, we should have logic like this:

          if (some weird compiler that doesn't support gcc flags) {
            // do something special
          } else {
            // set gcc flags
          }
          

          In other words, we should special-case things that are not gcc-compatible rather than special-casing gcc. Otherwise we are going to get incorrect builds on clang, icc, etc. etc.
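One way to realize this "special-case the non-gcc-compatible compilers" logic, sketched here in CMake against the hadoop_add_compiler_flags helper introduced by the patch (the SunPro branch is illustrative, not part of the patch):

```cmake
# Special-case only compilers that are NOT gcc-flag-compatible.
# gcc, clang and icc all accept the common gcc-style flags, so they
# share the default branch; "SunPro" is CMake's ID for Oracle Studio.
if(CMAKE_C_COMPILER_ID STREQUAL "SunPro")
    # A non-gcc-compatible compiler would get its own flag set here.
else()
    hadoop_add_compiler_flags("${GCC_SHARED_FLAGS}")
endif()
```

With this shape, a new compiler that happens to accept gcc flags works out of the box instead of silently missing the shared flags.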

          I need to fine-tune the patch to cope with misfeatures in the standard CMake infrastructure - in the case of bzip detection the compilation is using the -m64 flag but the bzip library being probed is being passed to the compiler as a path rather than as -lbzip2, and the path being passed is the 32-bit library path rather than the 64-bit one. The test compile then fails because CMake thinks the library isn't present when it actually is. Also, even if -DREQUIRE_BZIP2 is passed to CMake on the command line and there's logic in the CMakeLists.txt which is supposed to fail the build when bzip2 isn't found, the build continues.

          Ah, that's interesting. I guess we didn't notice it earlier since most developers don't have 32-bit binaries installed any more. We should probably just write our own bzip2 detection code, similar to how we've done with some other libraries. It would be nice to make this a separate patch. It seems like the kind of thing people would want to backport to earlier releases. It would also be easier to review that way (there is no logical dependency on the other stuff).

          thanks again, good to see progress on this.

          alanburlison Alan Burlison added a comment -
          • The first item is needed for 64-bit builds on Solaris. That's because CMAKE_SYSTEM_PROCESSOR is the same as the output of 'uname -p', which is 'i386' on Solaris x86 for example, because we support both 32 and 64 bit executables simultaneously. We have to fake it up this way to get CMake to build 64 bit. That's what the comment above the block in question is saying. Horrid, yes.
          • The second item makes good sense, I'll swap the logic around to assume gcc flags by default.
          • I've dug down a little further into the standard CMake library detection code, and to be honest it's surprising it works at all on Solaris for LP64 libraries. It doesn't use the right LP64 subdirectories when searching for libraries and I suspect it only works at all because we currently ship both 32 and 64 bit versions of most everything. When we eventually drop 32-bit versions I think the current implementation will just stop working on Solaris. Also, from a cursory glance the custom code in the Hadoop CMake files for libraries like snappy only partly improves things in this area as it still hard-codes the Linux scheme for 64-bit library directories. I think I might be able to persuade the standard CMake find_library to do the right thing, I'll keep digging.
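Concretely, the CMAKE_SYSTEM_PROCESSOR rewrite described in the first bullet amounts to something like this sketch, extending the snippet Colin quoted (the sparc branch is an assumption based on the stated sparcv9 library layout):

```cmake
# On Solaris, 'uname -p' reports the 32-bit ISA name ("i386" or "sparc")
# even on 64-bit systems, so rewrite CMAKE_SYSTEM_PROCESSOR to make CMake
# pick the 64-bit JNI and library locations.
if(CMAKE_SYSTEM_NAME STREQUAL "SunOS")
    if(CMAKE_SYSTEM_PROCESSOR STREQUAL "i386")
        set(CMAKE_SYSTEM_PROCESSOR "amd64")
    elseif(CMAKE_SYSTEM_PROCESSOR STREQUAL "sparc")
        set(CMAKE_SYSTEM_PROCESSOR "sparcv9")
    endif()
endif()
```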
          alanburlison Alan Burlison added a comment -

          The CMake find_package(BZip2 QUIET) module looks like it's just a thin wrapper around the standard CMake find_library() function and I think that's where the problem is - the CMake find_library() implementation just doesn't work reliably.

          aw Allen Wittenauer added a comment -

          Hadoop's bzip2 detection doesn't even work reliably on Linux, so it would make sense that it's a general CMake problem rather than something wacky we're doing (although we're doing a lot of wacky things).

          alanburlison Alan Burlison added a comment -

          The workaround is to set CMAKE_LIBRARY_ARCHITECTURE to the correct value for the LP64 library location on the platform: "amd64" or "sparcv9".

          Before:

          Found ZLIB: /usr/lib/libz.so.1 (found version "1.2.8")
          Looking for BZ2_bzCompressInit in /usr/lib/libbz2.so.1
          Looking for BZ2_bzCompressInit in /usr/lib/libbz2.so.1 - not found

          Which is wrong, as it's finding the 32-bit and not the 64-bit library, then feeding it into a 64-bit test compile, which unsurprisingly blows up.

          After:

          Found ZLIB: /usr/lib/amd64/libz.so.1 (found version "1.2.8")
          Looking for BZ2_bzCompressInit in /usr/lib/amd64/libbz2.so.1
          Looking for BZ2_bzCompressInit in /usr/lib/amd64/libbz2.so.1 - found

          which is correct.

          This and the other architecture-related stuff needs moving from HadoopJNI.cmake into HadoopCommon.cmake as all compilations need the tweaks, not just JNI code. I believe the same move is also needed for the Linux equivalents.

          The other issue of warnings such as "Manually-specified variables were not used by the project ... REQUIRE_BZIP2" is because when the library is found the REQUIRE_BZIP2 variable isn't tested and CMake isn't smart enough to see it's used down a different code path. The easiest way of stopping the warnings is to put a dummy assignment to REQUIRE_BZIP2 in the "library found" code path.
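Taken together, the two fixes described above might look like the following sketch (the structure is an assumption; the patch's actual code may differ, and FindBZip2's BZIP2_FOUND result variable is standard CMake):

```cmake
# 1. Point find_library() at the LP64 subdirectories on Solaris, so that
#    64-bit test compiles are fed 64-bit libraries.
if(CMAKE_SYSTEM_NAME STREQUAL "SunOS")
    if(CMAKE_SYSTEM_PROCESSOR MATCHES "i386|amd64")
        set(CMAKE_LIBRARY_ARCHITECTURE "amd64")
    else()
        set(CMAKE_LIBRARY_ARCHITECTURE "sparcv9")
    endif()
endif()

# 2. Dummy use of REQUIRE_BZIP2 on the "found" path so CMake does not warn
#    that the manually-specified variable was never used.
find_package(BZip2 QUIET)
if(BZIP2_FOUND)
    set(REQUIRE_BZIP2 ${REQUIRE_BZIP2})
elseif(REQUIRE_BZIP2)
    message(FATAL_ERROR "Required bzip2 library could not be found.")
endif()
```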

          alanburlison Alan Burlison added a comment -

          Updated patch, as per discussion

          cmccabe Colin P. McCabe added a comment -

          The first item is needed for 64-bit builds on Solaris. That's because CMAKE_SYSTEM_PROCESSOR is the same as the output of 'uname -p', which is 'i386' on Solaris x86 for example, because we support both 32 and 64 bit executables simultaneously. We have to fake it up this way to get CMake to build 64 bit. That's what the comment above the block in question is saying. Horrid, yes.

          Ah, ok.

          The workaround [for bzip2 library finding issues] is to set CMAKE_LIBRARY_ARCHITECTURE to the correct value for the LP64 library location on the platform "amd64" or "sparcv9".

          Good idea. I see that the patch sets this for Solaris. Should we do this for Linux as well? Or should we do that in a follow-on, in case it's disruptive?

          130 # Add in support other compilers here, if necessary.
          131 if(NOT (CMAKE_COMPILER_IS_GNUCC AND CMAKE_COMPILER_IS_GNUCXX))
          132
          133 # Assume GCC and set the shared compiler flags.
          134 else()
          135     hadoop_add_compiler_flags("${GCC_SHARED_FLAGS}")
          136 endif()
          

          Hmm, if I'm reading this right, I still don't think this will work for clang or icc. Clang won't set CMAKE_COMPILER_IS_GNUCC or CMAKE_COMPILER_IS_GNUCXX, so it won't get the correct CFLAGS. Maybe let's just get rid of this if statement and add a comment saying that compilers which don't use the gcc flags should be special-cased here. (We don't support any such compilers yet, but we'd like to in the future.)

          thanks

          alanburlison Alan Burlison added a comment -

          I've been as conservative as I can with the existing Linux code as I'm wary of breaking distros other than the one I'm using (CentOS), so I think doing it as a follow-up might be prudent.

          And yes, much better as you say to add a "XXX Add new compiler support here" comment, I'll make that change.

          alanburlison Alan Burlison added a comment -

          Assume GCC by default

          cmccabe Colin P. McCabe added a comment -

          I've been as conservative as I can with the existing Linux code as I'm wary of breaking distros other than the one I'm using (CentOS), so I think doing it as a follow-up might be prudent.

          Sure.

          And yes, much better as you say to add a "XXX Add new compiler support here" comment, I'll make that change.

          So, patch 003 has this:

          130	# Add in support other compilers here, if necessary.
          131	if(NOT (CMAKE_COMPILER_IS_GNUCC AND CMAKE_COMPILER_IS_GNUCXX))
          132	
          133	# Assume GCC and set the shared compiler flags.
          134	else()
          135	    hadoop_add_compiler_flags("${GCC_SHARED_FLAGS}")
          136	endif()
          

          So it still won't work for clang. Since clang works right now, this would cause a regression.

          Can we just get rid of the if statement completely and add a TODO for other compilers? +1 pending that.

          alanburlison Alan Burlison added a comment -

          Doh! Sorry Colin, I fat-fingered the JIRA update and attached the wrong version of the patch. Updated as 004.

          cmccabe Colin P. McCabe added a comment -

          +1. Thanks, Alan Burlison.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #8075 (See https://builds.apache.org/job/Hadoop-trunk-Commit/8075/)
          HADOOP-12036. Consolidate all of the cmake extensions in one directory (alanburlison via cmccabe) (cmccabe: rev aa07dea3577158b92a17651d10da20df73f54561)

          • hadoop-common-project/hadoop-common/src/CMakeLists.txt
          • hadoop-common-project/hadoop-common/HadoopJNI.cmake
          • hadoop-common-project/hadoop-common/HadoopCommon.cmake
          • hadoop-common-project/hadoop-common/CHANGES.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #241 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/241/)
          HADOOP-12036. Consolidate all of the cmake extensions in one directory (alanburlison via cmccabe) (cmccabe: rev aa07dea3577158b92a17651d10da20df73f54561)

          • hadoop-common-project/hadoop-common/src/CMakeLists.txt
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-common-project/hadoop-common/HadoopJNI.cmake
          • hadoop-common-project/hadoop-common/HadoopCommon.cmake
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #971 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/971/)
          HADOOP-12036. Consolidate all of the cmake extensions in one directory (alanburlison via cmccabe) (cmccabe: rev aa07dea3577158b92a17651d10da20df73f54561)

          • hadoop-common-project/hadoop-common/HadoopJNI.cmake
          • hadoop-common-project/hadoop-common/HadoopCommon.cmake
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-common-project/hadoop-common/src/CMakeLists.txt
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #230 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/230/)
          HADOOP-12036. Consolidate all of the cmake extensions in one directory (alanburlison via cmccabe) (cmccabe: rev aa07dea3577158b92a17651d10da20df73f54561)

          • hadoop-common-project/hadoop-common/src/CMakeLists.txt
          • hadoop-common-project/hadoop-common/HadoopJNI.cmake
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-common-project/hadoop-common/HadoopCommon.cmake
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Hdfs-trunk #2169 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2169/)
          HADOOP-12036. Consolidate all of the cmake extensions in one directory (alanburlison via cmccabe) (cmccabe: rev aa07dea3577158b92a17651d10da20df73f54561)

          • hadoop-common-project/hadoop-common/HadoopJNI.cmake
          • hadoop-common-project/hadoop-common/src/CMakeLists.txt
          • hadoop-common-project/hadoop-common/HadoopCommon.cmake
          • hadoop-common-project/hadoop-common/CHANGES.txt
          Show
          hudson Hudson added a comment - SUCCESS: Integrated in Hadoop-Hdfs-trunk #2169 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2169/ ) HADOOP-12036 . Consolidate all of the cmake extensions in one directory (alanburlison via cmccabe) (cmccabe: rev aa07dea3577158b92a17651d10da20df73f54561) hadoop-common-project/hadoop-common/HadoopJNI.cmake hadoop-common-project/hadoop-common/src/CMakeLists.txt hadoop-common-project/hadoop-common/HadoopCommon.cmake hadoop-common-project/hadoop-common/CHANGES.txt
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #239 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/239/)
          HADOOP-12036. Consolidate all of the cmake extensions in one directory (alanburlison via cmccabe) (cmccabe: rev aa07dea3577158b92a17651d10da20df73f54561)

          • hadoop-common-project/hadoop-common/src/CMakeLists.txt
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-common-project/hadoop-common/HadoopJNI.cmake
          • hadoop-common-project/hadoop-common/HadoopCommon.cmake
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2187 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2187/)
          HADOOP-12036. Consolidate all of the cmake extensions in one directory (alanburlison via cmccabe) (cmccabe: rev aa07dea3577158b92a17651d10da20df73f54561)

          • hadoop-common-project/hadoop-common/HadoopCommon.cmake
          • hadoop-common-project/hadoop-common/HadoopJNI.cmake
          • hadoop-common-project/hadoop-common/src/CMakeLists.txt
          • hadoop-common-project/hadoop-common/CHANGES.txt

            People

            • Assignee:
              alanburlison Alan Burlison
            • Reporter:
              aw Allen Wittenauer
            • Votes:
              0
            • Watchers:
              7
