Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.8.0
    • Fix Version/s: 2.8.0, 3.0.0-alpha2
    • Component/s: fs/s3
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      Currently S3AUtils.translateException doesn't recognise interruptions; it just sees an AmazonClientException chain, which is then relayed up.

      Proposed: look for an InterruptedException (the AWS SDK's SdkInterruptedException is a subclass) at the base of the exception chain and map the whole chain to a java.io.InterruptedIOException.
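
      A minimal sketch of the proposed translation, assuming the hypothetical helper names containsInterruptedException and translate (the committed patch may be structured differently). In the v1 AWS SDK, SdkInterruptedException extends InterruptedException, so an instanceof check over the cause chain catches the case shown in the stack traces in the comments below.

          import java.io.IOException;
          import java.io.InterruptedIOException;

          import com.amazonaws.AmazonClientException;

          final class InterruptionTranslation {

            private InterruptionTranslation() {
            }

            // Walk the cause chain looking for an interruption. This also catches
            // SdkInterruptedException, which extends InterruptedException.
            static boolean containsInterruptedException(Throwable thrown) {
              for (Throwable t = thrown; t != null; t = t.getCause()) {
                if (t instanceof InterruptedException
                    || t instanceof InterruptedIOException) {
                  return true;
                }
              }
              return false;
            }

            // Map an interrupted SDK call to InterruptedIOException; otherwise
            // wrap the client exception as a plain IOException.
            static IOException translate(String operation, String path,
                AmazonClientException e) {
              String message = operation + " on " + path + ": " + e;
              if (containsInterruptedException(e)) {
                return (InterruptedIOException)
                    new InterruptedIOException(message).initCause(e);
              }
              return new IOException(message, e);
            }
          }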

        Activity

        stevel@apache.org Steve Loughran added a comment -

        Before

        2016-12-02 11:59:22,623 [JobGenerator] WARN  dstream.FileInputDStream (Logging.scala:logWarning(87)) - Error finding new files
        org.apache.hadoop.fs.s3a.AWSClientIOException: getFileStatus on s3a://hwdev-steve-new/spark-cloud/S3AStreamingSuite/streaming/streaming: com.amazonaws.AbortedException: : 
        	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:116)
        	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:93)
        	at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1636)
        	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:1427)
        	at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:1403)
        	at org.apache.hadoop.fs.Globber.listStatus(Globber.java:76)
        	at org.apache.hadoop.fs.Globber.doGlob(Globber.java:234)
        	at org.apache.hadoop.fs.Globber.glob(Globber.java:148)
        	at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1978)
        	at org.apache.hadoop.fs.s3a.S3AFileSystem.globStatus(S3AFileSystem.java:2119)
        	at org.apache.spark.streaming.dstream.FileInputDStream.findNewFiles(FileInputDStream.scala:205)
        	at org.apache.spark.streaming.dstream.FileInputDStream.compute(FileInputDStream.scala:149)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
        	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
        	at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:334)
        	at scala.Option.orElse(Option.scala:289)
        	at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:331)
        	at org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:36)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
        	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
        	at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:334)
        	at scala.Option.orElse(Option.scala:289)
        	at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:331)
        	at org.apache.spark.streaming.dstream.FilteredDStream.compute(FilteredDStream.scala:36)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
        	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
        	at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:334)
        	at scala.Option.orElse(Option.scala:289)
        	at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:331)
        	at org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:36)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
        	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
        	at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:334)
        	at scala.Option.orElse(Option.scala:289)
        	at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:331)
        	at org.apache.spark.streaming.dstream.ForEachDStream.generateJob(ForEachDStream.scala:48)
        	at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:117)
        	at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:116)
        	at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        	at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        	at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
        	at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
        	at org.apache.spark.streaming.DStreamGraph.generateJobs(DStreamGraph.scala:116)
        	at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$3.apply(JobGenerator.scala:249)
        	at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$3.apply(JobGenerator.scala:247)
        	at scala.util.Try$.apply(Try.scala:192)
        	at org.apache.spark.streaming.scheduler.JobGenerator.generateJobs(JobGenerator.scala:247)
        	at org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:183)
        	at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:89)
        	at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:88)
        	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        Caused by: com.amazonaws.AbortedException: 
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleInterruptedException(AmazonHttpClient.java:710)
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:620)
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$300(AmazonHttpClient.java:586)
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:573)
        	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:445)
        	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4041)
        	at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1175)
        	at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1150)
        	at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:940)
        	at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1616)
        	... 70 more
        Caused by: com.amazonaws.http.timers.client.SdkInterruptedException
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.checkInterrupted(AmazonHttpClient.java:754)
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.checkInterrupted(AmazonHttpClient.java:740)
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:919)
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:661)
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:635)
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:618)
        	... 78 more
        
        stevel@apache.org Steve Loughran added a comment -

        After

        2016-12-02 14:30:25,692 [JobGenerator] WARN  dstream.FileInputDStream (Logging.scala:logWarning(87)) - Error finding new files
        java.io.InterruptedIOException: getFileStatus on s3a://hwdev-steve-new/spark-cloud/S3AStreamingSuite/streaming/streaming: com.amazonaws.AbortedException: 
        	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:118)
        	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:94)
        	at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1685)
        	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:1476)
        	at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:1452)
        	at org.apache.hadoop.fs.Globber.listStatus(Globber.java:76)
        	at org.apache.hadoop.fs.Globber.doGlob(Globber.java:234)
        	at org.apache.hadoop.fs.Globber.glob(Globber.java:148)
        	at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1978)
        	at org.apache.hadoop.fs.s3a.S3AFileSystem.globStatus(S3AFileSystem.java:2168)
        	at org.apache.spark.streaming.dstream.FileInputDStream.findNewFiles(FileInputDStream.scala:205)
        	at org.apache.spark.streaming.dstream.FileInputDStream.compute(FileInputDStream.scala:149)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
        	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
        	at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:334)
        	at scala.Option.orElse(Option.scala:289)
        	at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:331)
        	at org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:36)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
        	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
        	at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:334)
        	at scala.Option.orElse(Option.scala:289)
        	at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:331)
        	at org.apache.spark.streaming.dstream.FilteredDStream.compute(FilteredDStream.scala:36)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
        	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
        	at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:334)
        	at scala.Option.orElse(Option.scala:289)
        	at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:331)
        	at org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:36)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
        	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
        	at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336)
        	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:334)
        	at scala.Option.orElse(Option.scala:289)
        	at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:331)
        	at org.apache.spark.streaming.dstream.ForEachDStream.generateJob(ForEachDStream.scala:48)
        	at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:117)
        	at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:116)
        	at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        	at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        	at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
        	at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
        	at org.apache.spark.streaming.DStreamGraph.generateJobs(DStreamGraph.scala:116)
        	at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$3.apply(JobGenerator.scala:249)
        	at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$3.apply(JobGenerator.scala:247)
        	at scala.util.Try$.apply(Try.scala:192)
        	at org.apache.spark.streaming.scheduler.JobGenerator.generateJobs(JobGenerator.scala:247)
        	at org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:183)
        	at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:89)
        	at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:88)
        	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        Caused by: com.amazonaws.AbortedException: 
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleInterruptedException(AmazonHttpClient.java:710)
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:620)
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$300(AmazonHttpClient.java:586)
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:573)
        	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:445)
        	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4041)
        	at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1175)
        	at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1150)
        	at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:989)
        	at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1665)
        	... 70 more
        Caused by: com.amazonaws.http.timers.client.SdkInterruptedException
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.checkInterrupted(AmazonHttpClient.java:754)
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.checkInterrupted(AmazonHttpClient.java:740)
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:919)
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:661)
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:635)
        	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:618)
        	... 78 more
        2016-12-02 14:30:25,695 [JobGenerator] INFO  dstream.FileInputDStream (Logging.scala:logInfo(54)) - New files at time 1480689014000 ms:
        
        2016-12-02 14:30:25,698 [JobGenerator] INFO  scheduler.JobScheduler (Logging.scala:logInfo(54)) - Added jobs for time 1480689014000 ms
        2016-12-02 14:30:25,699 [JobScheduler] INFO  scheduler.JobScheduler (Logging.scala:logInfo(54)) - Starting job streaming job 1480689014000 ms.0 from job set of time 1480689014000 ms
        2016-12-02 14:30:25,699 [ScalaTest-main-running-S3AStreamingSuite] INFO  scheduler.JobGenerator (Logging.scala:logInfo(54)) - Stopped JobGenerator
        
        stevel@apache.org Steve Loughran added a comment -

        One issue: should all AbortedExceptions be uprated to InterruptedIOExceptions? HADOOP-13811 hints at other ways in which interruptions may surface. However, looking for specific error texts is dangerous and brittle.
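
        For contrast, the blunter variant that question implies (hypothetical, not the committed code) would uprate every AbortedException regardless of its cause; broad, but it avoids matching on error-message text:

          import java.io.IOException;
          import java.io.InterruptedIOException;

          import com.amazonaws.AbortedException;

          final class BluntTranslation {

            private BluntTranslation() {
            }

            // Hypothetical alternative: treat any AbortedException as an
            // interruption, without probing the cause chain or error text.
            static IOException translateAborted(String message, AbortedException e) {
              return (InterruptedIOException)
                  new InterruptedIOException(message).initCause(e);
            }
          }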

        stevel@apache.org Steve Loughran added a comment -

        Patch 001: the scan-for-exception-type probe is package scoped for testing; translateException uses the probe to optionally convert the exception while handling an AmazonClientException.
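
        A sketch of how a unit test might drive that package-scoped probe, reusing the hypothetical InterruptionTranslation helper sketched under the description (the real coverage is in TestS3AExceptionTranslation):

          import static org.junit.Assert.assertTrue;

          import java.io.IOException;
          import java.io.InterruptedIOException;

          import com.amazonaws.AmazonClientException;
          import org.junit.Test;

          public class TestInterruptionTranslation {

            @Test
            public void testInterruptedChainMapsToInterruptedIOE() {
              // Synthetic chain mirroring the traces above:
              // AmazonClientException -> InterruptedException at the base.
              AmazonClientException outer = new AmazonClientException(
                  "simulated abort", new InterruptedException());
              IOException translated = InterruptionTranslation.translate(
                  "getFileStatus", "s3a://bucket/path", outer);
              assertTrue("expected InterruptedIOException, got " + translated,
                  translated instanceof InterruptedIOException);
            }
          }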

        stevel@apache.org Steve Loughran added a comment -

        test: s3a ireland

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 15s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 8m 23s trunk passed
        +1 compile 0m 20s trunk passed
        +1 checkstyle 0m 14s trunk passed
        +1 mvnsite 0m 30s trunk passed
        +1 mvneclipse 0m 17s trunk passed
        +1 findbugs 0m 32s trunk passed
        +1 javadoc 0m 15s trunk passed
        +1 mvninstall 0m 20s the patch passed
        +1 compile 0m 19s the patch passed
        +1 javac 0m 19s the patch passed
        +1 checkstyle 0m 11s the patch passed
        +1 mvnsite 0m 23s the patch passed
        +1 mvneclipse 0m 15s the patch passed
        -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
        +1 findbugs 0m 40s the patch passed
        +1 javadoc 0m 14s the patch passed
        +1 unit 0m 23s hadoop-aws in the patch passed.
        +1 asflicense 0m 20s The patch does not generate ASF License warnings.
        15m 9s



        Subsystem Report/Notes
        Docker Image: yetus/hadoop:a9ad5d6
        JIRA Issue HADOOP-13857
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12841529/HADOOP-13857-001.patch
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 7693fb93c4cc 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 0cfd7ad
        Default Java 1.8.0_111
        findbugs v3.0.0
        whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/11184/artifact/patchprocess/whitespace-eol.txt
        Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11184/testReport/
        modules C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws
        Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11184/console
        Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

        This message was automatically generated.

        liuml07 Mingliang Liu added a comment -

        +1

        Committed to trunk through branch-2.8 branches. Thanks for your contribution, Steve Loughran.

        hudson Hudson added a comment -

        SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10931 (See https://builds.apache.org/job/Hadoop-trunk-Commit/10931/)
        HADOOP-13857. S3AUtils.translateException to map (wrapped) (liuml07: rev 2ff84a00405e977b1fd791cfb974244580dd5ae8)

        • (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
        • (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AExceptionTranslation.java

          People

          • Assignee: stevel@apache.org Steve Loughran
          • Reporter: stevel@apache.org Steve Loughran
          • Votes: 0
          • Watchers: 4
