Hadoop Common / HADOOP-12689

S3 filesystem operations stopped working correctly

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.7.0
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: tools

      Description

HADOOP-10542 was resolved by replacing "return null;" in Jets3tFileSystemStore.get() with throwing an IOException. This causes several S3 filesystem operations to fail (more code may be expecting that null return value; these are just the calls I noticed, with the broken contract sketched after the list):

• S3FileSystem.getFileStatus() (now raises IOException instead of FileNotFoundException)
• FileSystem.exists() (now raises IOException instead of returning false)
• S3FileSystem.create() (now raises IOException instead of succeeding)
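
For context, a minimal sketch of the contract these callers rely on. This is illustrative code in the shape of the 2.7-era org.apache.hadoop.fs.s3 classes, not the actual Hadoop source; newFileStatus() is a hypothetical helper standing in for the real status construction:

    // Simplified sketch: retrieveINode() returning null is how getFileStatus()
    // learns a path is absent, and exists() depends on the resulting
    // FileNotFoundException. A bare IOException from the store breaks both links.
    public FileStatus getFileStatus(Path f) throws IOException {
      INode inode = store.retrieveINode(makeAbsolute(f)); // null == "not there"
      if (inode == null) {
        throw new FileNotFoundException("No such file or directory: " + f);
      }
      return newFileStatus(f, inode); // hypothetical helper
    }

    public boolean exists(Path f) throws IOException {
      try {
        getFileStatus(f);
        return true;
      } catch (FileNotFoundException e) {
        return false; // an IOException from the store propagates instead
      }
    }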

      Run command:

      hadoop distcp hdfs://localhost:9000/test s3://xxx:yyy@com.bar.foo/

      Resulting stack trace:

      2015-12-11 10:04:34,030 FATAL [IPC Server handler 6 on 44861] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1449826461866_0005_m_000006_0 - exited : java.io.IOException: /test doesn't exist
      at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get(Jets3tFileSystemStore.java:170)
      at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveINode(Jets3tFileSystemStore.java:221)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
      at com.sun.proxy.$Proxy17.retrieveINode(Unknown Source)
      at org.apache.hadoop.fs.s3.S3FileSystem.getFileStatus(S3FileSystem.java:340)
      at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:230)
      at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
      at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
      at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
      at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
      at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:415)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
      at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

Changing the "throw new IOException..." back to "return null" fixes all of the code sites above and allows distcp to succeed.
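
To make the one-line change concrete, a sketch of the relevant spot in Jets3tFileSystemStore.get(), reconstructed from the stack trace above rather than copied from the source (the jets3t call shapes are assumptions):

    // Inside Jets3tFileSystemStore.get(String key, ...), sketched:
    try {
      S3Object object = s3Service.getObject(bucket.getName(), key);
      return object.getDataInputStream();
    } catch (S3ServiceException e) {
      if ("NoSuchKey".equals(e.getS3ErrorCode())) {
        // HADOOP-10542 replaced this return with:
        //   throw new IOException(key + " doesn't exist");
        // which is the IOException seen in the stack trace above. Restoring
        // the null return lets callers keep treating "absent" as a normal case.
        return null;
      }
      throw new S3Exception(e);
    }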


          Activity

Matthew Paduano added a comment -

The attached patch reverts the one-line edit from HADOOP-10542 and adds code to address the possible NPE when a block is requested but not found in the S3 store (see the sketch below). This seems the easiest way to resolve the possible NPE from HADOOP-10542 without rewriting the many S3 filesystem operations that expect get() to return null when an object is missing (and/or a FileNotFoundException to be raised, rather than an IOException).
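
The guard might look like the following sketch; the signature follows Jets3tFileSystemStore.retrieveBlock() and the message matches the post-patch log posted further down, but the body is illustrative, not the patch itself:

    // With get() returning null again for absent keys, retrieveBlock() must
    // turn a missing block into a clear IOException instead of an NPE.
    public File retrieveBlock(Block block, long byteRangeStart)
        throws IOException {
      InputStream in = get(blockToKey(block), byteRangeStart);
      if (in == null) {
        throw new IOException("Block missing from S3 store: " + block);
      }
      // ... copy the stream to a local temp file and return it, as before ...
    }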

Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 8m 3s trunk passed
          +1 compile 0m 12s trunk passed with JDK v1.8.0_66
          +1 compile 0m 13s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 10s trunk passed
          +1 mvnsite 0m 19s trunk passed
          +1 mvneclipse 0m 12s trunk passed
          +1 findbugs 0m 31s trunk passed
          +1 javadoc 0m 13s trunk passed with JDK v1.8.0_66
          +1 javadoc 0m 15s trunk passed with JDK v1.7.0_91
          +1 mvninstall 0m 18s the patch passed
          +1 compile 0m 9s the patch passed with JDK v1.8.0_66
          +1 javac 0m 9s the patch passed
          +1 compile 0m 11s the patch passed with JDK v1.7.0_91
          +1 javac 0m 11s the patch passed
          +1 checkstyle 0m 10s the patch passed
          +1 mvnsite 0m 15s the patch passed
          +1 mvneclipse 0m 10s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 0m 36s the patch passed
          +1 javadoc 0m 11s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 13s the patch passed with JDK v1.7.0_91
          +1 unit 0m 10s hadoop-aws in the patch passed with JDK v1.8.0_66.
          +1 unit 0m 11s hadoop-aws in the patch passed with JDK v1.7.0_91.
          +1 asflicense 0m 18s Patch does not generate ASF License warnings.
          25m 6s



          Subsystem Report/Notes
Docker Image: yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12780416/HADOOP-12689.01.patch
          JIRA Issue HADOOP-12689
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux a6525f1c552e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 778146e
          Default Java 1.7.0_91
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
          findbugs v3.0.0
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/8339/testReport/
          modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
          Max memory used 75MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8339/console

          This message was automatically generated.

Ravi Prakash added a comment -

Patch looks good to me. Thanks, Matt. I'll commit by EOD tomorrow if there are no objections.

Steve Loughran added a comment -

If there's a regression that hasn't been caught, we need tests to verify not only that the regression is fixed, but that it stays fixed.

W.r.t. applying/submitting patches against the object stores, it is critical that the tests are manually run with credentials:

          https://wiki.apache.org/hadoop/HowToContribute#Submitting_patches_against_object_stores_such_as_Amazon_S3.2C_OpenStack_Swift_and_Microsoft_Azure

Ravi: I'm afraid that applies to you too. Please don't commit this without the test runs and the results logged here.

Ravi Prakash added a comment -

          Thanks for all your work with the FileSystem contracts, test and documentation Steve! Much appreciated. Thanks for pointing to the documentation. I was not aware of it.

We are running with this patch inside Altiscale. I also tested it on my laptop against a bucket in the "us-west-1" region (with a simple command). Matt also pointed me to http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/testing.html although he couldn't find tests there for S3 (they exist for S3n and S3a).

          As an aside, I don't see the manual test results for HADOOP-10542 in that JIRA either. Were they run?

Ravi Prakash added a comment -

          Here's my log of what I ran

          [raviprak@ravi trunk]$ hdfs dfs -get s3://<SOMEBUCKET>/<SOMEFILE>
          -get: Fatal internal error
          java.lang.NullPointerException
          	at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveBlock(Jets3tFileSystemStore.java:248)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:497)
          	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:255)
          	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
          	at com.sun.proxy.$Proxy4.retrieveBlock(Unknown Source)
          	at org.apache.hadoop.fs.s3.S3InputStream.blockSeekTo(S3InputStream.java:170)
          	at org.apache.hadoop.fs.s3.S3InputStream.read(S3InputStream.java:129)
          	at java.io.DataInputStream.read(DataInputStream.java:100)
          	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:88)
          	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:62)
          	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:122)
          	at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:482)
          	at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:404)
          	at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:339)
          	at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:274)
          	at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:259)
          	at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:321)
          	at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:293)
          	at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:254)
          	at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:275)
          	at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:259)
          	at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:225)
          	at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
          	at org.apache.hadoop.fs.shell.Command.run(Command.java:166)
          	at org.apache.hadoop.fs.FsShell.run(FsShell.java:319)
          	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
          	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
          	at org.apache.hadoop.fs.FsShell.main(FsShell.java:377)
          

          After the patch

          [raviprak@ravi trunk]$ hdfs dfs -get s3://<SOMEBUCKET>/<SOMEFILE>
          get: Block missing from S3 store: block_-3447106738088433847
          

When the file is valid, I was able to get it successfully.

Ravi Prakash added a comment -

In light of the above, I am still on track to commit this patch EOD. Matt, if you'd like, please feel free to create a new JIRA for adding the contract tests for S3 (although I fear it may be going the way of the dinosaur). I just don't want that effort blocking this patch. Steve Loughran, please let me know if you still have objections.

Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #9055 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9055/)
          HADOOP-12689. S3 filesystem operations stopped working correctly (raviprak: rev 2d16f40dab291a29b3fc005221b12fd587615d4e)

          • hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/Jets3tFileSystemStore.java
          • hadoop-common-project/hadoop-common/CHANGES.txt

Ravi Prakash added a comment -

          I've committed this change to trunk and branch-2. Thanks for your contribution Matt and for your review Steve!

Steve Loughran added a comment -

          -1

Ravi, please, no: not without new tests to show this problem is fixed. If the last fix caused a regression, then we need something more to show the regression has gone away. That doesn't have to be a general-purpose contract test; something in the hadoop-tools/hadoop-aws package will do.

Look at the Jenkins report:

          The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          

S3n and its siblings are a burning sore in the Hadoop codebase: undermaintained, undertested, and incredibly brittle to change.

          If HADOOP-10542 did break things —and I trust your claim there— then it slipped through the current s3 test suite. We need another test to make sure this problem never comes back. It doesn't have to be a full contract test, something in hadoop-aws will be enough. But saying "we can add a test later" isn't the right tactic —we both know "later" means "never" in this context. We also need all the existing s3 tests run to make sure this patch doesn't change anything else, either.
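
To make "something in hadoop-aws" concrete, a minimal sketch of such a regression test; the class name and fixture wiring here are hypothetical, not code from any actual patch:

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.fail;

    import java.io.FileNotFoundException;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.junit.Test;

    public class TestS3MissingPathSemantics {
      private FileSystem fs; // assumed bound to an s3:// test bucket in setup

      @Test
      public void testMissingPathIsNotABareIOException() throws Exception {
        Path missing = new Path("/no-such-path-" + System.nanoTime());
        try {
          fs.getFileStatus(missing);
          fail("expected FileNotFoundException for " + missing);
        } catch (FileNotFoundException expected) {
          // correct: absence surfaces as FileNotFoundException
        }
        // exists() must map that FileNotFoundException to false, not rethrow
        assertFalse(fs.exists(missing));
      }
    }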

Can we roll this back and do another iteration of the patch which does include a test? Mandating the "patches include tests" policy is the only way we can keep test coverage up, especially on something this brittle.

          sorry

Ravi Prakash added a comment -

          Steve! Your -1 here is completely unwarranted.
1. I don't remember a resolution overriding the ability of committers to commit patches without tests. If all patches must contain tests, I'm happy to go about -1'ing all the JIRAs which have already been committed without tests.
          2. I shouldn't have to point out the hypocrisy here. YOU yourself committed HADOOP-10542 without tests.

          I'd ask you to reconsider your -1. If you insist, I'll roll the patch back.

          As it were, Matt IS working on adding tests. So in this case "later" did NOT mean "never". Thanks

Steve Loughran added a comment -

Removing the -1, as yes, there are tests now. I just didn't want those tests to be forgotten.

Ravi Prakash added a comment -

          Thanks for your consideration Steve! I appreciate all the great work you did for the contract tests and your efforts to keep the S3 implementations stable. I'm testing and reviewing Matt's patch on HADOOP-12696. Let's continue there.

Steve Loughran added a comment -

W.r.t. HADOOP-10542, I thought there were tests in the system. I shouldn't have committed it then, and we are all suffering now because of that.

Vinod Kumar Vavilapalli added a comment -

          Ravi Prakash, there is a branch-2.8 where you need to land this patch for it to be in 2.8.0.

Ravi Prakash added a comment -

Thanks for the heads up, Vinod! I must have missed the notification of branch-2.8 being cut. I've cherry-picked this patch into branch-2.8 as well.


People

• Assignee: Matthew Paduano
• Reporter: Matthew Paduano
• Votes: 0
• Watchers: 5