Hadoop Map/Reduce
  MAPREDUCE-711

Move Distributed Cache from Common to Map/Reduce

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.21.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags:
      Incompatible change, Reviewed
    • Release Note:
      - Removed the distributed cache classes and package from the Common project.
      - Added the same to the mapreduce project.
      - This means that users of the Distributed Cache will now need the mapreduce jar in Hadoop 0.21.
      - Modified the package name to o.a.h.mapreduce.filecache from o.a.h.filecache and deprecated the old package name.

      Description

      Distributed Cache logically belongs as part of map/reduce and not Common.

      Attachments

      1. MAPREDUCE-711-20090709-common.txt
        37 kB
        Vinod Kumar Vavilapalli
      2. MAPREDUCE-711-20090709-mapreduce.txt
        47 kB
        Vinod Kumar Vavilapalli
      3. MAPREDUCE-711-20090709-mapreduce.1.txt
        37 kB
        Vinod Kumar Vavilapalli
      4. MAPREDUCE-711-20090710.txt
        46 kB
        Vinod Kumar Vavilapalli
      5. 711.20S.patch
        68 kB
        Ravi Gummadi

          Activity

          Philip Zeyliger added a comment -

          +1!

          Hemanth Yamijala added a comment -

          +1

          Vinod Kumar Vavilapalli added a comment -

          I am taking this up; this is needed for other DistributedCache-related issues - HADOOP-4493 and MAPREDUCE-476.

          Attached are two patches, one for the common project and one for the mapreduce project. The patches were generated by a simple refactoring in Eclipse across projects. I've created the new package org.apache.hadoop.mapred.filecache in both src/java and src/test/mapred.

          I have run the tests that are directly affected by these patches, and they pass. Will run all the common and mapreduce tests in the background.

          A few questions:

          • Do I need a separate jira issue for the common part, given it is just a move of files across projects? I can open a new issue if felt otherwise.
          • What happens to the repository history of these files? Is there any possible way we can move repo history too?
          Tom White added a comment -

          We shouldn't just repackage DistributedCache without deprecating it first, since it is a public interface. For this Jira, it might be better to move it to the MapReduce project while keeping it in the same package (org.apache.hadoop.filecache), since there are other Jiras to evolve its interface (MAPREDUCE-476, MAPREDUCE-303). Moving it to a new package (org.apache.hadoop.mapreduce.distcache?) could happen in one of those.

          Vinod Kumar Vavilapalli added a comment -

          We shouldn't just repackage DistributedCache without deprecating it first, since it is a public interface.

          Makes sense, Tom. Uploading a new patch for mapreduce which keeps the files in the same old package org.apache.hadoop.filecache.

          Owen O'Malley added a comment -

          Actually, I think we should repackage it at the same time. In particular, I'd propose making it org.apache.hadoop.mapreduce.filecache. Naturally, you'll need to extend the repackaged class with one in mapreduce with the old package org.apache.hadoop.filecache.
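
          Owen's scheme, a deprecated class in the old package that merely extends the repackaged one, can be sketched as below. This is a hypothetical, single-file illustration, not the actual Hadoop source: one file cannot declare two packages, so the two DistributedCache homes are simulated with distinct class names.

```java
// Illustrative sketch of the repackaging pattern (hypothetical names).

// Stand-in for the new org.apache.hadoop.mapreduce.filecache.DistributedCache,
// which holds the real implementation:
class RepackagedDistributedCache {
    public String home() {
        return "org.apache.hadoop.mapreduce.filecache";
    }
}

// Stand-in for the old org.apache.hadoop.filecache.DistributedCache,
// kept only as a deprecated shim that inherits everything:
@Deprecated
class LegacyDistributedCache extends RepackagedDistributedCache {
}

public class Main {
    public static void main(String[] args) {
        // Old client code keeps compiling against the legacy name,
        // but all behavior comes from the repackaged superclass.
        RepackagedDistributedCache cache = new LegacyDistributedCache();
        System.out.println(cache.home());
    }
}
```

          An identically named class extending its namesake in another package is also exactly what triggers the NM_SAME_SIMPLE_NAME_AS_SUPERCLASS FindBugs warning reported later in this thread.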

          Philip Zeyliger added a comment -

          +1 to MAPREDUCE-711-20090709-common.txt and MAPREDUCE-711-20090709-mapreduce.1.txt.

          Owen, it would be easier (not massively so, of course) if we checked in the patches here (just the move), then did MAPREDUCE-476, then moved things into a separate package afterwards. If you'd prefer to do the move immediately, no problem: a little sed will fix the patches right up.

          We could name the package org.apache.hadoop.mapreduce.distributedcache, since filecache is a name not used anywhere else. I'd like to (see MAPREDUCE-476) separate out the public interface of the distributed cache (which is merely setting, at job configuration time, and getting, at job runtime) from the internal implementation. I think the way to do this is to have JobContext.getDistributedCacheData(), which encapsulates what got cached, and some setters as well. The getters may be a bit futile since the configuration values used are probably part of the public interface, and people may be querying them directly. But that's a separate task.

          I'm going to go, right now, regenerate the patches for MAPREDUCE-476 assuming these patches go in.

          Vinod Kumar Vavilapalli added a comment -

          Actually, Philip, I am going ahead and doing the package restructuring now. Things like these are better done as early as possible rather than later. In any case, you will just need another `sed` run over your patch.

          One more question:

          • We are knocking off DistributedCache completely from common. This would mean breaking code that explicitly depends only on the pre-split core jar. Two solutions are possible: 1) duplicate the code across projects, or 2) keep the code in mapreduce and put placeholders in common, thus creating a dependency of common on mapreduce. Neither of the two seems feasible to me.
          Vinod Kumar Vavilapalli added a comment -

          Owen, Hudson will not be able to run the mapred patch till the changes to common patch are committed along with the common jar committed to mapreduce. How should I go ahead with this? Separate jira issue for common project first? Thanks.

          Philip Zeyliger added a comment -

          Cool; I'll produce a new patch once you upload a new one here. Do consider changing the package name from filecache to distributedcache, since two names are more confusing than one.

          I think people who depended on the one-jar-to-rule-them-all (the pre-split world) will assume that they must depend on all three split jars if they don't want to worry about what ended up where. So I'm not sure you're breaking code by moving it into another jar any more than the project split already has.

          – Philip

          Owen O'Malley added a comment -

          Just post the result of the test-patch and post on the jira that the regressions still pass.

          Philip Zeyliger added a comment -

          Have you been able to check this in?

          Vinod Kumar Vavilapalli added a comment -

          Have you been able to check this in?

          I am running test-patch and the regression tests now. When they complete, I will report back so that this can be checked in.

          Vinod Kumar Vavilapalli added a comment -

          I have run the mapred tests; all of them passed (surprise, surprise!).

          Ran ant-test also. Results:

          [exec] -1 overall.
          [exec]
          [exec] +1 @author. The patch does not contain any @author tags.
          [exec]
          [exec] +1 tests included. The patch appears to include 8 new or modified tests.
          [exec]
          [exec] +1 javadoc. The javadoc tool did not generate any warning messages.
          [exec]
          [exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings.
          [exec]
          [exec] -1 findbugs. The patch appears to introduce 1 new Findbugs warnings.
          [exec]
          [exec] +1 release audit. The applied patch does not increase the total number of release audit warnings.

          The FindBugs warning is unavoidable:

          Bug type NM_SAME_SIMPLE_NAME_AS_SUPERCLASS
          In class org.apache.hadoop.filecache.DistributedCache
          In class org.apache.hadoop.mapreduce.filecache.DistributedCache
          At DistributedCache.java:[line 29]

          This class has a simple name that is identical to that of its superclass, except
          that its superclass is in a different package (e.g., alpha.Foo extends beta.Foo).
          This can be exceptionally confusing, creates lots of situations in which you have to look at
          import statements to resolve references, and creates many opportunities to accidentally
          define methods that do not override methods in their superclasses.

          Vinod Kumar Vavilapalli added a comment -

          contrib tests also passed. This patch is committable.

          Going by the recent discussions on the mailing lists about working across the projects, I am outlining the steps for committing these patches:

          • Apply the patch MAPREDUCE-711-20090709-common.txt to the common project.
          • "svn remove" the directories src/java/org/apache/hadoop/filecache/ and src/test/core/org/apache/hadoop/filecache/
          • Build common project to generate hadoop-core-0.21.0-dev.jar.
          • Copy the above jar to the mapreduce and hdfs project lib directories and commit this jar (the changes to this jar) to both repositories.
          • svn up and apply the patch MAPREDUCE-711-20090710.txt to the mapred project. Compilation shouldn't be broken.
          • Commit the patch to mapreduce project.
          Vinod Kumar Vavilapalli added a comment -

          Owen, can you commit this? Thanks!

          Philip Zeyliger added a comment -

          Ping?

          Hemanth Yamijala added a comment -

          To summarize:

          • We moved the distributed cache out of common, as no one else is using it.
          • We moved it into mapred, with the old package name and it is deprecated.
          • We repackaged the classes into o.a.h.mapreduce.filecache.DistributedCache that the deprecated classes in mapred extend.

          This is blocking other JIRAs. I think the approach here is approved by Owen and by Philip (from the comments above), with the exception of the package name. I think committing this soon will unblock other work. So I propose going ahead after running the tests once more on the new trunk. We could change the package name to o.a.h.mapreduce.distributedcache in a follow-up JIRA. As long as the change goes into Hadoop 0.21, we can rename the package without worrying about compatibility.

          Hemanth Yamijala added a comment -

          test-patch results for common changes:

               [exec] +1 overall.
               [exec]
               [exec]     +1 @author.  The patch does not contain any @author tags.
               [exec]
               [exec]     +1 tests included.  The patch appears to include 2 new or modified tests.
               [exec]
               [exec]     +1 javadoc.  The javadoc tool did not generate any warning messages.
               [exec]
               [exec]     +1 javac.  The applied patch does not increase the total number of javac compiler warnings.
               [exec]
               [exec]     +1 findbugs.  The patch does not introduce any new Findbugs warnings.
               [exec]
               [exec]     +1 release audit.  The applied patch does not increase the total number of release audit warnings.
               [exec]
          
          Hemanth Yamijala added a comment -

          common tests passed.

          Hemanth Yamijala added a comment -

          Mapred tests, including contrib tests passed except for two timeouts - TestRecoveryManager and TestReduceFetch, but these are failing even without the patch.

          Hemanth Yamijala added a comment -

          HDFS tests passed with the new common and mapreduce jars. I verified that with just the new common jars, the tests fail, because the distributed cache classes are removed from the common jar files, and that they pass with the new mapreduce jars. This means that the new common and mapreduce jars need to be committed to the HDFS subprojects.

          Hemanth Yamijala added a comment -

          So, we are going to follow this process for getting the changes into all projects (Giri is helping me with the build and commit as it spans multiple projects):

          • First, I'm going to commit the changes to common.
          • We'll trigger a hudson build of common and get the common jar files.
          • Then, we'll commit changes of the common jar files and the changes to mapreduce sources to mapreduce.
          • We'll trigger a hudson build of mapreduce and get the mapreduce jar files as well.
          • Then, we'll commit the common jars and mapreduce jars to hdfs project.
          Hudson added a comment -

          Integrated in Hadoop-Common-trunk #58 (See http://hudson.zones.apache.org/hudson/job/Hadoop-Common-trunk/58/)
          Removed Distributed Cache from Common, to move it under Map/Reduce. Contributed by Vinod Kumar Vavilapalli.

          Giridharan Kesavan added a comment -

          http://hudson.zones.apache.org/hudson/view/Common/job/Hadoop-Common-trunk/59/artifact/trunk/hadoop-core-2009-08-17_11-33-14.tar.gz
          We can use this hadoop-core artifact for check-in into the mapreduce/lib folder.
          Hemanth Yamijala added a comment -

          Thanks, Giri. I used the link Hudson posted (http://hudson.zones.apache.org/hudson/job/Hadoop-Common-trunk/58/) to get the jars, and committed the changes to mapreduce trunk.

          Now, I think it would be good to commit the changes to the HDFS projects as well. I've requested Giri to trigger a Map/Reduce build so that we can get the official Map/Reduce jars. I plan to commit to HDFS tomorrow.

          Giridharan Kesavan added a comment -

          Mapreduce build triggered:
          http://hudson.zones.apache.org/hudson/view/Mapreduce/job/Hadoop-Mapreduce-trunk/52/
          This build will include the changes.

          The build is yet to start, as it is waiting for an executor on Hudson.

          Thanks.

          Suresh Srinivas added a comment -

          I spoke to Nicholas about this. Can you please run tests on Hudson (Giridharan could help with it I suppose) and commit the changes to HDFS when the tests pass.

          Hemanth Yamijala added a comment -

          Can you please run tests on Hudson (Giridharan could help with it I suppose) and commit the changes to HDFS when the tests pass.

          I have already run the tests with the updated jars locally. There does not appear to be a way to run these off Hudson. So, we are planning to commit the jars and then trigger a Hudson HDFS build to make sure things work still. If something breaks, we will revert the commit and check again. (But given they pass locally, I am hoping we won't get to it).

          Also, the MapReduce build failure in the tests is being tracked in MAPREDUCE-880 and is unrelated to this commit.

          Giri, can you please commit the common and Map/Reduce jars to HDFS and trigger a build?

          Giridharan Kesavan added a comment -

          Updated hdfs/lib with common and mapreduce jars from rev 804918 & 805081 resp.

          Triggered a hdfs trunk build (build added to build queue, as vesta is still running a patch build).
          http://hudson.zones.apache.org/hudson/view/Hdfs/job/Hadoop-Hdfs-trunk/52/

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #53 (See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/53/)
          Updated common and mapreduce jars from rev 804918 & 805081 resp.

          Hemanth Yamijala added a comment -

          HDFS tests have also passed. Now all the projects are synced up.

          I committed this to trunk. Thanks, Vinod!

          Ravi Gummadi added a comment -

          Patch for Hadoop 0.20 Yahoo! distribution. Not for commit here.


            People

            • Assignee: Vinod Kumar Vavilapalli
            • Reporter: Owen O'Malley
            • Votes: 0
            • Watchers: 4