Hadoop Common
HADOOP-5191

After creation and startup of the hadoop namenode on AIX or Solaris, you will only be allowed to connect to the namenode via hostname but not IP.

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.19.1
    • Fix Version/s: 0.21.0
    • Component/s: None
    • Labels:
      None
    • Environment:

      AIX 6.1 or Solaris

    • Release Note:
      Accessing HDFS with any ip, hostname, or proxy should work as long as it points to the interface NameNode is listening on.

      Description

      After creation and startup of the hadoop namenode on AIX or Solaris, you will only be allowed to connect to the namenode via hostname but not IP.

      fs.default.name=hdfs://p520aix61.mydomain.com:9000
      Hostname for box is p520aix and the IP is 10.120.16.68

      If you use the following URL, "hdfs://10.120.16.68", to connect to the namenode, the exception that appears below occurs. You can only connect successfully if "hdfs://p520aix61.mydomain.com:9000" is used.

      Exception in thread "Thread-0" java.lang.IllegalArgumentException: Wrong FS: hdfs://10.120.16.68:9000/testdata, expected: hdfs://p520aix61.mydomain.com:9000
      at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:320)
      at org.apache.hadoop.dfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:84)
      at org.apache.hadoop.dfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:122)
      at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:390)
      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:667)
      at TestHadoopHDFS.run(TestHadoopHDFS.java:116)
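
      For illustration, a minimal standalone sketch of the failing pattern (a hypothetical class, not one of the attached files; it assumes a NameNode listening on p520aix61.mydomain.com:9000, also reachable as 10.120.16.68):

        // Hypothetical reproduction sketch; not part of the attached patches or tests.
        import java.net.URI;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class WrongFsRepro {
          public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Works: the URI authority matches the hostname the NameNode reports.
            FileSystem byHost = FileSystem.get(new URI("hdfs://p520aix61.mydomain.com:9000/"), conf);
            byHost.exists(new Path("hdfs://p520aix61.mydomain.com:9000/testdata"));

            // Fails before the fix: same interface, addressed by IP instead of hostname.
            FileSystem byIp = FileSystem.get(new URI("hdfs://10.120.16.68:9000/"), conf);
            byIp.exists(new Path("hdfs://10.120.16.68:9000/testdata")); // IllegalArgumentException: Wrong FS
          }
        }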

      1. 5191-1.patch
        5 kB
        Bill Habermaas
      2. hadoop-5191.patch
        2 kB
        Bill Habermaas
      3. HADOOP-5191.patch
        2 kB
        Raghu Angadi
      4. HADOOP-5191.patch
        1 kB
        Raghu Angadi
      5. TestHadoopHDFS.java
        5 kB
        Bill Habermaas


          Activity

          Bill Habermaas added a comment -

          This patch resolves the issue on AIX 6.1.

          Bill Habermaas added a comment -

          Patch and Unit test

          Bill Habermaas added a comment -

          Patch to checkPath method in hadoop/hdfs/DistributedFileSystem.java to allow use of IP address as well as hostname. The unit test included here does a simple test of getting a FileSystem object using hostname and then getting a FileSystem object using IP address (which didn't work before).

          Bo Shi added a comment -

          I'm not sure if this falls within the scope of this JIRA, but it would be nice to be able to contact the host thru aliases;

          E.g. if the namenode is configured as somehost:9000, and I have somehost mapped to myalias in my /etc/hosts file, I won't be allowed to connect thru myalias:9000.

          Raghu Angadi added a comment -

          This does not seem like an AIX or Solaris issue. The fix should work for IPs as well as aliases, if there is one.

          This goes to the basics of what "fs.default.name" means. If a canonical form for comparing makes sense according to its definition, then we should do it properly (e.g. how are multiple IPs handled, or how are aliases handled, as Bo Shi mentioned).

          Do we have a definition or meaning of "fs.default.name"?

          This issue has come up multiple times and deserves either a fix or a clarification.

          Regarding the patch: please avoid referring to JIRAs in the code as much as possible; it is ok to 'waste space' with slightly longer justifications in the code.

          Doug Cutting added a comment -

          This problem is independent of fs.default.name. That only needs to be set in the provided test to start the namenode, and its setting is unrelated to the failure.

          The bug is that DistributedFileSystem calls NameNode#getUri() to create the client FileSystem's uri. The client cannot know all of the addresses and names of the namenode. Accesses to a namenode with different addresses and/or hostnames should result in different DistributedFileSystem instances.

          Raghu Angadi added a comment -

          > Accesses to a namenode with different addresses and/or hostnames should result in different DistributedFileSystem instances.

          Yes. I see two problems :

          • HDFS should not change host name in the URI based on resolution. So the following should result in an error: getFS("hdfs://host/").getFileStatus("hdfs://host.domain/file").
            • But if getFS("hdfs://host/").getFileStatus("hdfs://host/file") currently results in an error, then HDFS should fix it.
          • TestHadoopHDFS.java might essentially be making the same mistake : getFS("hdfs://hostname/").getFileStatus("hdfs://ip/file"); It should rather do getFS("hdfs://ip")...
            • Where is this file located?
          Raghu Angadi added a comment -

          > I'm not sure if this falls within the scope of this JIRA, but it would be nice to be able to contact the host thru aliases;
          > E.g. if the namenode is configured as somehost:9000, and I have somehost mapped to myalias in my /etc/hosts file, I won't be allowed to connect thru myalias:9000.

          You might be affected by the HDFS issue Doug mentioned above. We should fix it.

          Bill Habermaas added a comment -

          In the future I'll remember not to refer to JIRAs in the patch comments. It is a habit to include a backreference to the origin of the change, which has been pointed out as not applicable to this project. As to the patch, I will withdraw it if there is a better way to solve the problem. It fixed the issue for me but I'm always open to a better idea.

          Raghu Angadi added a comment -

          Is the source for TestHadoopHDFS.java available?

          Bill Habermaas added a comment -

          This is a very simple program to write data to HDFS. The source lines below are an excerpt from the source file. If I use a hostname that matches the namenode's machine then the FileSystem.get will work. If I use an IP address instead then it fails. Why is this considered an error??

          static public String filePath = "hdfs://10.120.12.81:9000/test/datafile";

          String file = filePath;
          Configuration conf = new Configuration();
          try {
            fs = FileSystem.get(new URI(file), conf);
          }
          Raghu Angadi added a comment -

          The above should work as you expect. How do I run this test?

          e.g., the following works :
          $ bin/hadoop fs -Dhadoop.default.name="hdfs://hostname:7020/" -ls hdfs://ipaddress:7020/user/rangadi/5Mb-2

          Is this essentially what you are doing?

          earlier I said :

          [...] But currently getFS("hdfs://host/").getFileStatus("hdfs://host/file") might result in an error, then HDFS should fix it. [...]

          I don't think that is the case. This works as expected, i.e. getFS("hdfs://alias1/"), getFS("hdfs://alias2"), and getFS("hdfs://ip") all get different instances of HDFS and work as expected, even if all those point to same physical namenode.

          There is one odd thing inside filesystem initialization where it invokes NetUtils.getStaticResolution() on the hosts, which seems to return null in my tests. But by default, there are no static resolutions set.
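
          A small illustration of the different-instances point above, assuming the default FileSystem cache, which keys instances by URI scheme and authority (the alias names are hypothetical):

            // Distinct authority strings yield distinct cache entries, hence distinct
            // DistributedFileSystem instances, even for the same physical NameNode.
            Configuration conf = new Configuration();
            FileSystem fs1 = FileSystem.get(new URI("hdfs://alias1:9000/"), conf);
            FileSystem fs2 = FileSystem.get(new URI("hdfs://alias2:9000/"), conf);
            System.out.println(fs1 == fs2);   // false: separate instances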

          Raghu Angadi added a comment -

          Oops! I read the code incompletely. I can reproduce the problem. I think the fix should be a simple one.

          Raghu Angadi added a comment -

          Would the attached patch HADOOP-5191.patch fix this issue?

          It simply uses the authority that the user provides while creating the FS. No unnecessary hostname resolution or default port stripping. This should handle host aliases, IP addresses, proxies, etc.

          If you like the fix, please include your unit test (either as it is or modified) with the patch.
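
          To illustrate the approach described above (a sketch only, not the committed patch; the method body and helper name below are hypothetical): compare the scheme and the authority exactly as supplied by the user, with no reverse resolution or port normalization.

            // Hypothetical sketch of authority-based checking; the committed patch may differ.
            protected void checkPath(Path path) {
              URI uri = path.toUri();
              String scheme = uri.getScheme();
              if (scheme == null) {
                return;                          // relative path: nothing to check
              }
              URI fsUri = getUri();              // the URI this FileSystem was created with
              if (scheme.equalsIgnoreCase(fsUri.getScheme())
                  && authoritiesMatch(uri.getAuthority(), fsUri.getAuthority())) {
                return;                          // same scheme, same user-supplied authority
              }
              throw new IllegalArgumentException("Wrong FS: " + path + ", expected: " + fsUri);
            }

            // Hypothetical helper: null-safe, case-insensitive comparison of authorities.
            private static boolean authoritiesMatch(String a, String b) {
              return a == null ? b == null : a.equalsIgnoreCase(b);
            }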

          Raghu Angadi added a comment -

          btw, the reason the '-ls' test above works without this fix is that the implementation uses the URI to create the FS, but uses just the path name while fetching the file info.

          Steve Loughran added a comment -

          Looking at the code referenced in the stack trace, I see a lot of .equalsIgnoreCase() tests in FileSystem.checkPath(). These should be made more robust against locales by using .toLower(Locale.EN_US) and then case-sensitive matching, otherwise the code stops working in some countries. I don't think this is the problem being encountered here, though.
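
          As background on the locale concern only (not part of this issue's fix): String.toLowerCase() without an explicit locale is locale-sensitive, which is why pinning a locale is the usual advice. A tiny demonstration, using a hypothetical authority string:

            import java.util.Locale;

            public class LocaleCaseDemo {
              public static void main(String[] args) {
                String authority = "IP.EXAMPLE.COM";  // hypothetical authority string
                // Under the Turkish locale, 'I' lowercases to dotless i (U+0131), not 'i'.
                System.out.println(authority.toLowerCase(new Locale("tr", "TR")));
                // Pinning an ASCII-safe locale gives the expected result everywhere.
                System.out.println(authority.toLowerCase(Locale.US).equals("ip.example.com")); // true
              }
            }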

          Doug Cutting added a comment -

          > These should be made more robust against locales by using .toLower(Locale.EN_US) ...

          These are only used to compare URI schemes and authorities, not directory or file names. The scheme cannot contain non-ASCII, I think, but the authority (hostname typically) can, although this is rare. To fix this, since non-ASCII characters in URIs must be escaped, we can just use getAuthorityRaw() instead of getAuthority() to compare escaped ASCII.

          Raghu Angadi added a comment -

          Bill, does the attached patch work for you?

          Bill Habermaas added a comment -

          Raghu,
          Yes, this patch works for me - I am using it on AIX, Linux, and Solaris. I have also run all the hadoop junit tests against my patched 0.18.3.

          Raghu Angadi added a comment -

          thanks Bill. Please +1 the patch if you reviewed it.

          since many users are affected by this, this could go into 0.20.x as well, along with the trunk.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12401788/HADOOP-5191.patch
          against trunk revision 757667.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no tests are needed for this patch.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/126/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/126/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/126/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/126/console

          This message is automatically generated.

          Bill Habermaas added a comment -

          Raghu - I haven't tried your patch as I have not had the opportunity, but I assume you rewrote what I did due to the other comments on this JIRA. You might want to include my unit test with your patch so this gets through Hudson.

          Raghu Angadi added a comment -

          I am guessing that is a +1 for the patch.

          Attached patch adds one more test case in TestDistributedFileSystem. Does not increase the test time.
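
          A sketch of the kind of check such a test might make (illustrative only; the committed test may differ, and the MiniDFSCluster-style setup and method name below are assumptions):

            // Illustrative sketch only; not the committed test.
            public void testAccessByIpAuthority() throws Exception {
              Configuration conf = new Configuration();
              MiniDFSCluster cluster = new MiniDFSCluster(conf, 1, true, null);
              try {
                int port = cluster.getFileSystem().getUri().getPort();
                String ipUri = "hdfs://127.0.0.1:" + port;
                // Before the fix this threw "Wrong FS"; afterwards the IP authority is accepted.
                FileSystem fs = FileSystem.get(new URI(ipUri), conf);
                assertFalse(fs.exists(new Path(ipUri + "/no-such-file")));
              } finally {
                cluster.shutdown();
              }
            }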

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12403540/HADOOP-5191.patch
          against trunk revision 757958.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 4 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/133/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/133/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/133/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/133/console

          This message is automatically generated.

          Raghu Angadi added a comment -

          I just committed this.

          Bill Habermaas added a comment -

          This issue probably needs to be reopened. I have discovered that map/reduce also has a dependency on how HDFS is connected (hostname as opposed to IP address). I don't think this should be reported as another JIRA, but what do you think? Guys - there has to be a cleaner way to handle hostname/IP usage that works across the board.

          2009-03-27 04:15:45,045 WARN [Thread-145] org.apache.hadoop.mapred.LocalJobRunner: job_local_0002
          java.io.IOException: Can not get the relative path: base = hdfs://10.120.16.68:9000/mydata/2009/03/27/0bab100a-1bf1-499a-935d-bc4b4e94f44c/_temporary/_attempt_local_0002_r_000000_0 child = hdfs://p520aix61.mydomain.com:9000/mydata/2009/03/27/0bab100a-1bf1-499a-935d-bc4b4e94f44c/_temporary/_attempt_local_0002_r_000000_0/part-00000
          at org.apache.hadoop.mapred.Task.getFinalPath(Task.java:586)
          at org.apache.hadoop.mapred.Task.moveTaskOutputs(Task.java:599)
          at org.apache.hadoop.mapred.Task.moveTaskOutputs(Task.java:617)
          at org.apache.hadoop.mapred.Task.saveTaskOutput(Task.java:561)
          at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:202)

          Bill Habermaas added a comment -

          I guess I'm the only person that has encountered this problem of mixing IP/hostname with map/reduce when connecting via IP address. As there are no comments, I'll assume it is a different problem and open a separate incident.

          Raghu Angadi added a comment -

          A separate issue is better. Most likely it is a similar issue in mapred; better to fix it there. If you have a test case like the one here, please attach it there.

          Hudson added a comment -

          Integrated in Hadoop-trunk #796 (See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/796/ )

            People

            • Assignee: Raghu Angadi
            • Reporter: Bill Habermaas
            • Votes: 0
            • Watchers: 7
