Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.22.0
    • Component/s: libhdfs
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      Remove AC_TYPE* from the libhdfs build since we get these via stdint.

      Currently configure.ac uses AC_TYPE_INT16_T, AC_TYPE_INT32_T, AC_TYPE_INT64_T and AC_TYPE_UINT16_T and thus requires autoconf 2.61 or higher.
      This prevents building libhdfs on platforms such as CentOS/RHEL 5.4 and 5.5. Given that those are pretty popular, and that it is
      really difficult to find a platform these days that doesn't natively define the intXX_t types, I'm curious whether we can simply
      remove those macros, or perhaps fail ONLY if we happen to be on such a platform.

      Here's a link to GNU autoconf docs for your reference:
      http://www.gnu.org/software/hello/manual/autoconf/Particular-Types.html
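
      For reference, the macros in question would sit in configure.ac roughly as follows. This is an illustrative sketch, not the
      exact file contents; the AC_PREREQ and AC_INIT lines are placeholders, and only the AC_TYPE_* checks are taken from this issue:

          # Sketch of the relevant part of the libhdfs configure.ac (illustrative only).
          AC_PREREQ(2.61)                # pinned this high by the particular-type checks below
          AC_INIT([libhdfs], [0.1.0])    # placeholder project name/version
          AC_PROG_CC

          # Particular-type checks; redundant on any system that ships <stdint.h>.
          AC_TYPE_INT16_T
          AC_TYPE_INT32_T
          AC_TYPE_INT64_T
          AC_TYPE_UINT16_T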

      Attachments

      1. HDFS-1619.patch.txt
        0.5 kB
        Roman Shaposhnik
      2. hdfs-1619-2.patch
        0.6 kB
        Eli Collins
      3. HDFS-1619-C99.patch.txt
        0.9 kB
        Roman Shaposhnik

          Activity

          Allen Wittenauer added a comment -

          Given that CentOS/RHEL 5.5 doesn't ship with a working Java, I don't see the issue with requiring a newer autoconf toolset.

          Brian Bockelman added a comment -

          Hi Allen,

          I'm sorry, but I come to the opposite conclusion. RHEL5.5 doesn't have a preferred version of Java, so it's reasonable to ask for the community-standard one from Sun.

          OTOH, RHEL5.5 has a specific version of autotools, and by requiring a more recent version, you force the sysadmin to go against the vendor.

          It seems like something relatively trivial to change in order to help RHEL5.5 compatibility.

          Is there really a platform where we expect int16_t not to exist but that has a recent version of automake (note: I'm not familiar with Solaris, so this is not a rhetorical question)?

          Brian

          PS - note to reporter: you aren't going to get HDFS to build cleanly against RHEL5 even after fixing this; ant is the next problem.

          Roman Shaposhnik added a comment -

          Brian, to answer your immediate question – I believe Solaris 8 would fail (not sure about Solaris 9). As for your note – I'm aware of it. One needs a manually assembled Java toolchain on CentOS/RHEL which is, in my opinion, tolerable compared to a manually assembled native build tool chain (e.g. providing upstream autoconf & automake). But this is, of course, subject to YMMV.

          Allen Wittenauer added a comment -

          The issue we face is that it is nearly impossible to cater to every flavor's customized autoconf build system (OS X, I'm looking at you). So we either need to make the autoconf bits extremely lightweight or pretty much dictate what version gets used. However we solve this particular problem, it needs to be done in light of the fact that this is a cross-platform project.

          Roman Shaposhnik added a comment -

          This trivial patch bites the bullet and removes the dependency on newer autoconf and on AC_TYPE_INTXX macros.

          The logic here is this: on all platforms with conformant C99 compilers these macros are useless anyway, and on platforms where a C99 compiler is NOT available libhdfs is unlikely to compile even when these macros are present.
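
          (Purely to illustrate the shape of the change; the attached HDFS-1619.patch.txt is the authoritative version. The effect on configure.ac is along these lines:

              # Illustrative sketch only, not the attached patch itself: the four
              # particular-type checks are simply dropped.
              -AC_TYPE_INT16_T
              -AC_TYPE_INT32_T
              -AC_TYPE_INT64_T
              -AC_TYPE_UINT16_T
          )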

          Allen Wittenauer added a comment -

          Shouldn't we test for c99 then? In the case of Sun's compilers, it definitely requires a flag.

          Roman Shaposhnik added a comment -

          That's a pretty good idea. We can add AC_PROG_CC_C99 to configure.ac. Will this work as far as accepting the patch is concerned?
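
          (For concreteness, such a check could look roughly like the sketch below; the hard failure and its error message are
          assumptions for illustration, not part of any attached patch:

              # Ask autoconf to put the compiler into C99 mode where a flag is needed
              # (e.g. -std=gnu99 for gcc, -xc99 variants for Sun Studio); the macro
              # itself is only available in relatively recent autoconf (2.60+).
              AC_PROG_CC_C99
              if test "x$ac_cv_prog_cc_c99" = "xno"; then
                AC_MSG_ERROR([a C99-capable C compiler is required to build libhdfs])
              fi
          )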

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12481427/HDFS-1619.patch.txt
          against trunk revision 1131264.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:
          org.apache.hadoop.hdfs.TestHDFSTrash

          +1 contrib tests. The patch passed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/706//testReport/
          Findbugs warnings: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/706//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/706//console

          This message is automatically generated.

          Eli Collins added a comment -

          We don't need to set AC_PROG_CC_C99 because we don't actually use any c99 features in libhdfs. However, libhdfs already uses (i.e. requires) stdint.h, so we don't need to set AC_TYPE* because we already get these types from our stdint.h dependency. The AC_TYPE* macros were probably just not removed when stdint.h was introduced. Therefore I think the current patch is good to go as is. Make sense?

          Eli Collins added a comment -

          Also note that fuse-dfs uses AC_PREREQ(2.52); there's no reason libhdfs needs to be 2.61.
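
          (In configure.ac terms that would be a one-line relaxation, roughly as sketched below; whether the committed patch actually
          touches this line is not shown in this thread:

              AC_PREREQ(2.52)    # sketch: match fuse-dfs once the 2.61-only type macros are gone
          )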

          Roman Shaposhnik added a comment -
          Also note that fuse-dfs uses AC_PREREQ(2.52); there's no reason libhdfs needs to be 2.61.

          I don't think I follow your logic here. Currently libhdfs HAS to be 2.61 because it uses AC_TYPE_INTxx macros.

          We don't need to set AC_PROG_CC_C99 because we don't actually use any c99 features in libhdfs.

          It is a bit confusing for me to read this statement, since you then confirm that stdint.h is included. stdint.h is a C99 feature. In fact,
          unless I'm missing some kind of macro in libhdfs' configure.ac, we HAVE to introduce AC_PROG_CC_C99 for it to
          be properly compiled. The idea behind AC_PROG_CC_C99 is to ask the C compiler to be C99 compliant, and we definitely
          have to do that, if for nothing else, just to make sure that #include <stdint.h> always works. At that point we can completely
          ditch the AC_TYPE_INTxx macros, just as you mentioned.

          Makes sense?

          Eli Collins added a comment -

          I don't think I follow your logic here. Currently libhdfs HAS to be 2.61 because it uses AC_TYPE_INTxx macros.

          See my earlier comment: libhdfs doesn't need to define AC_TYPE* because it gets these types via stdint.h.

          It is a bit confusing for me to read this statement since you then confirm that stdint.h is included. stdint.h is a C99 feature.

          Not exactly: stdint.h is a header that is provided on systems with c99-compliant compilers. However, you can use stdint (or stdbool, etc.) without enabling c99; in fact that's how libhdfs works today: we don't enable c99 when we compile.

          Eli Collins added a comment -

          The idea behind AC_PROG_CC_C99 is to ask the C compiler to be C99 compliant, and we definitely have to do that, if for nothing else, just to make sure that #include <stdint.h> always works

          AC_PROG_CC_C99 tries to enable c99 mode on the compiler (via CC). We don't need to do that, since we don't use c99 mode; we just require that some c99 headers be present. I'm not sure how to tell autoconf you require these headers without telling it to try to enable c99 mode by default. Using AC_PROG_CC_C99 is fine (we could legitimately start using c99 features in libhdfs and fuse-dfs); I'm just saying we can remove these AC_* type defines without defining AC_PROG_CC_C99.
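
          (One possible way to demand the headers without touching the compiler mode is a plain header check, sketched below; this is
          not something any attached patch does, and the error text is made up:

              # Fail configure if the C99 headers libhdfs relies on are missing.
              AC_CHECK_HEADERS([stdint.h stdbool.h], [],
                               [AC_MSG_ERROR([stdint.h and stdbool.h are required to build libhdfs])])
          )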

          Roman Shaposhnik added a comment -
          Not exactly: stdint.h is a header that is provided on systems with c99-compliant compilers.

          IIRC, C99 compliant Sun Studio compilers on Solaris 8/9 would refuse to compile #include <stdint.h> unless you explicitly ask for c99.

          Allen Wittenauer added a comment -

          IIRC, using // as a comment in C code was only made standard in C99. So yes, we do use C99 features in libhdfs.

          Eli Collins added a comment -

          IIRC, using // as a comment in C code was only made standard in C99. So yes, we do use C99 features in libhdfs.

          gcc's default std (gnu89) permits some extensions like C++ style comments (most compilers introduced this well before 1999). libhdfs compiles fine with -std=gnu89 -pedantic. In any case, using c99 features in libhdfs is totally reasonable so we might as well indicate it's required.

          I think we're all in agreement that Roman's patch plus using AC_PROG_CC_C99 is acceptable. Any objections to hdfs-1619-2.patch?

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12481531/hdfs-1619-2.patch
          against trunk revision 1131331.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/710//testReport/
          Findbugs warnings: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/710//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/710//console

          This message is automatically generated.

          Roman Shaposhnik added a comment -

          Agreed. Minor nitpick: I think we should replace AC_PROG_CC with AC_PROG_CC_C99, not add an extra check.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12481591/HDFS-1619-C99.patch.txt
          against trunk revision 1132698.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/719//console

          This message is automatically generated.

          Roman Shaposhnik added a comment -

          After some additional googling around it seems that AC_PROG_CC_C99 is a rather new invention (circa 2006) and we probably shouldn't replace one set of new macros with another one. At this point, I'd rather go with patch #1.

          Eli Collins added a comment -

          Good point. (and per previous comments we can introduce AC_PROG_CC_C99 when we actually require it).

          +1 to HDFS-1619.patch.txt

          Allen Wittenauer added a comment -

          libhdfs already requires c99 mode for various commercial compilers due to the usage of C++-style comments. If we're going to remove the c99 requirements, then this patch should fix the comments too.

          Eli Collins added a comment -

          This patch does not remove the c99 requirement; we decided against introducing AC_PROG_CC_C99 in this thread. This patch just removes the AC_TYPE defines, which we should do because we already get these types in the current code via stdint (per the docs, "The Gnulib stdint module is an alternate way to define many of these symbols"). However, I agree we should compile with c99 since libhdfs uses some c99 features; I'll file a separate jira for that.

          Eli Collins added a comment -

          I've committed this to branch 22 and trunk. Thanks Roman!

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-22-branch #63 (See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/63/)
          HDFS-1619. svn merge -c 1132881 from trunk

          eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1132883
          Files :

          • /hadoop/hdfs/branches/branch-0.22/src/test/hdfs
          • /hadoop/hdfs/branches/branch-0.22
          • /hadoop/hdfs/branches/branch-0.22/src/webapps/secondary
          • /hadoop/hdfs/branches/branch-0.22/src/java
          • /hadoop/hdfs/branches/branch-0.22/src/webapps/hdfs
          • /hadoop/hdfs/branches/branch-0.22/src/webapps/datanode
          • /hadoop/hdfs/branches/branch-0.22/src/contrib/hdfsproxy
          • /hadoop/hdfs/branches/branch-0.22/CHANGES.txt
          • /hadoop/hdfs/branches/branch-0.22/src/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java
          • /hadoop/hdfs/branches/branch-0.22/src/c++/libhdfs/configure.ac
          • /hadoop/hdfs/branches/branch-0.22/src/c++/libhdfs
          • /hadoop/hdfs/branches/branch-0.22/build.xml
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #746 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/746/)

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #699 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/699/)


            People

             • Assignee: Roman Shaposhnik
             • Reporter: Roman Shaposhnik
             • Votes: 0
             • Watchers: 6
