Hadoop Common / HADOOP-12756

Incorporate Aliyun OSS file system implementation

    Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: HADOOP-12756
    • Fix Version/s: HADOOP-12756, 3.0.0-alpha2
    • Component/s: fs
    • Labels:
      None
    • Hadoop Flags:
      Reviewed
    • Release Note:
      Aliyun OSS is widely used among China’s cloud users. This work implements a new Hadoop-compatible filesystem, AliyunOSSFileSystem, with the oss scheme, similar to the existing s3a and azure support.

      Description

      Aliyun OSS is widely used among China’s cloud users, but currently it is not easy to access data stored on OSS from a user’s Hadoop/Spark application, because Hadoop has no native support for OSS.

      This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, Spark/Hadoop applications can read/write data from OSS without any code change, narrowing the gap between user applications and data storage, as has been done for S3 in Hadoop.
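      As a rough illustration of the kind of configuration this envisages (the property names below are illustrative placeholders modeled on how s3a is configured; the actual keys are defined by the patch itself):

```xml
<!-- core-site.xml: illustrative settings for an oss:// filesystem.
     Property names are assumptions by analogy with fs.s3a.*; consult the
     patch/module documentation for the real keys. -->
<configuration>
  <property>
    <name>fs.oss.endpoint</name>
    <value>oss-cn-hangzhou.aliyuncs.com</value>
  </property>
  <property>
    <name>fs.oss.accessKeyId</name>
    <value>YOUR_ACCESS_KEY_ID</value>
  </property>
  <property>
    <name>fs.oss.accessKeySecret</name>
    <value>YOUR_ACCESS_KEY_SECRET</value>
  </property>
</configuration>
```

      With settings like these in place, an existing job could read and write paths such as oss://bucket/path with no code changes, which is the point of the proposal.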

      1. Aliyun-OSS-integration.pdf
        144 kB
        shimingfei
      2. Aliyun-OSS-integration-v2.pdf
        79 kB
        Genmao Yu
      3. HADOOP-12756.003.patch
        95 kB
        Kai Zheng
      4. HADOOP-12756.004.patch
        95 kB
        Kai Zheng
      5. HADOOP-12756.005.patch
        96 kB
        shimingfei
      6. HADOOP-12756.006.patch
        102 kB
        shimingfei
      7. HADOOP-12756.007.patch
        103 kB
        shimingfei
      8. HADOOP-12756.008.patch
        103 kB
        Kai Zheng
      9. HADOOP-12756.009.patch
        103 kB
        Kai Zheng
      10. HADOOP-12756.010.patch
        147 kB
        shimingfei
      11. HADOOP-12756-v02.patch
        92 kB
        shimingfei
      12. HCFS User manual.md
        10 kB
        Ling Zhou
      13. OSS integration.pdf
        225 kB
        Ling Zhou

        Issue Links

          Activity

          shimingfei shimingfei added a comment -

          abstract for the work

          chenghao Cheng Hao added a comment -

          +1 This is critical for AliYun users when integrated with MapReduce/Spark etc.

          stevel@apache.org Steve Loughran added a comment -
          1. It would need its own module under hadoop-tools.
          2. Testability is always a problem with object stores: can those of us in the EU/US test against it?
          3. Have a look at the filesystem specification docs to see what to do, especially on things object stores don't do well (rename, delete).
          4. You'll need to implement all the FS contract tests.
          5. And help verify everything still works before any release.

          Object stores are the under-supported bit of the Hadoop codebase. Be advised that there's generally little enthusiasm for adding another one; we don't spend enough time looking after the s3 one.

          Note also that since Spark 2.0 is going to load lib/*, which is needed to pick up the hadoop-aws and amazon-aws-s3 JARs, you don't need to get your code into Hadoop 2.9+ for it to be supported by Spark. Write something that builds against Hadoop 2.6+ (or earlier), get it into that dir, and it'll automatically be picked up by Hadoop and Spark. This is your fastest way to get it into people's hands.

          Finally, don't be afraid to subscribe to Hadoop's common-dev list and talk about this.

          cnauroth Chris Nauroth added a comment -

          Thank you for the proposal! +1 to Steve's comments and a few more of my own:

          1. Regarding OSSFileSystem, please keep in mind that there are 2 parallel APIs for file system access now. One is FileSystem, which you've already mentioned. The other is FileContext (the user-facing API) which bridges to an implementation of AbstractFileSystem (the internal service provider's API). For full integration, you'll want to provide an implementation for both of these APIs. Most of the time, it's easy to provide an implementation of AbstractFileSystem by subclassing DelegateToFileSystem so that it does a passthrough to the FileSystem implementation you already wrote.
          2. Regarding this statement:

            Application access OSS through network, an additional proxy can be configured, all information can be set and passed to OSSFileSystem through Hadoop configuration.

            I don't think I understand this part. Is the idea that the Hadoop client need not be configured with authentication credentials, and then the proxy would inject credentials before forwarding to Aliyun OSS? If so, then is this proxy something that is planned as part of the code donation to Hadoop, or is the proxy an external component?

          3. Please make sure that credentials are integrated with our Credential Provider API. There are more details on this in other JIRAs, and documentation is under way in HADOOP-11031. The short story is that you just need to make sure sensitive credentials are read by using the Configuration#getPassword method.
          4. Since Aliyun OSS is an object store, I assume there must be some strategy for mapping the concept of hierarchical directories and files onto a flat key-value namespace. It would help future maintainers if you could add details on the mapping strategy in the design document. For an example, take a look at the PDF design document for the Azure file system attached to HADOOP-9629.
          5. Please plan on contributing end user documentation that is at least as detailed as the documentation for the existing object store integrations. For examples, see S3, Azure and Swift. It would be great to discuss what portions of the API are implemented and what is not implemented. For example, many object store file systems choose not to implement append. Discussion of semantics is important too. For example, most of object store file systems differ from HDFS in that rename is not atomic.
          6. Regarding testability, Azure has support for running tests against a local emulator of the remote service. (See the Azure doc page linked above for more details.) This goes beyond mock-based testing so that it's an integration test. It's not as realistic as connecting to the real service, but it can be a useful option for people who want to test without paying for an account or suffering long trans-continental latency. Is there a similar emulation capability for Aliyun OSS?
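          Chris's first point (a FileContext/AbstractFileSystem binding via DelegateToFileSystem) typically amounts to only a few lines, following the pattern of the s3a module's org.apache.hadoop.fs.s3a.S3A class. The sketch below is hypothetical: it assumes the FileSystem implementation is named AliyunOSSFileSystem and requires hadoop-common on the classpath.

```java
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.DelegateToFileSystem;

/**
 * Hypothetical AbstractFileSystem binding for the "oss" scheme: every
 * FileContext call is delegated to the already-written FileSystem
 * implementation (assumed here to be AliyunOSSFileSystem).
 */
public class OSS extends DelegateToFileSystem {
  public OSS(URI theUri, Configuration conf)
      throws IOException, URISyntaxException {
    // "oss" is the URI scheme; the final 'false' means an authority
    // (bucket) is not strictly required in the URI.
    super(theUri, new AliyunOSSFileSystem(), conf, "oss", false);
  }
}
```

          Such a class would be registered via a fs.AbstractFileSystem.oss.impl configuration entry. For Chris's third point, the key detail is that sensitive values should be read with Configuration#getPassword (e.g. conf.getPassword("fs.oss.accessKeySecret"), a hypothetical key name) rather than Configuration#get, so that they can be supplied by a credential provider instead of plain-text XML.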
          chenghao Cheng Hao added a comment -

          Thank you so much Steve Loughran, Chris Nauroth for the comments and suggestions, that's really helpful.

          AliYun OSS has a large number of users in China, and we expect fast growth in the future. We realize that being part of the Hadoop file system family will make life easier for OSS users, particularly application developers from different ecosystems (Pig, Tez, Impala, Spark, Flink, HBase, Tachyon, etc.); Hadoop is probably the only common area all of them are familiar with, and that is our motivation for working on this. As part of the collaboration with AliYun, Intel (and AliYun) is strongly willing to maintain the code and keep it at high quality.

          shimingfei shimingfei added a comment -

          Thanks for your detailed comments, Steve.
          OSS is much like S3, so the testing will be similar. We already have an implementation, and it works fine with our use cases and micro-benchmarks (sort and terasort) on both Hadoop and Spark.

          You are right that the work can be packaged as an independent jar that users' applications load as an external library. But we think it is better to integrate it into Hadoop, as a module under hadoop-tools, for maintenance and ease-of-use purposes.

          shimingfei shimingfei added a comment -

          Thanks Chris, that's very helpful.

          1. The intention of this work was to make Spark/Hadoop applications able to read/write data from OSS, not to run Hadoop/Spark completely over it, because of some limitations of OSS (and object stores in general). The FileSystem API is offered, just like S3.
          2. Clients should hold credentials; the proxy is just used to access the OSS service, as part of the client configuration.
          3. Thanks for your suggestions; we will follow that specification.
          4. Yes, OSS supports the mapping; we will add more description of this.
          5. Sure, we will offer more docs for end users. Currently the approach to renaming in OSS is copy-and-delete, like S3.
          6. Currently our implementation doesn't have emulation capability; we will look into it.

          cnauroth Chris Nauroth added a comment -

          Is there an English language version available for the Aliyun OSS manual? I tried these URLs, but I couldn't find an English version.

          https://www.aliyun.com/product/oss/?spm=5176.383663.3.8.e3Rlwi&lang=en

          http://imgs-storage.cdn.aliyuncs.com/help/oss/oss_api_20140709.pdf?spm=5176.383663.9.1.28BjPF&file=oss_api_20140709.pdf

          2. Clients should hold credentials, proxy is just used to access the OSS service as an configuration of client.

          Is this just referring to a traditional HTTP proxy, and you can configure the client to route through the proxy instead of directly contacting Aliyun OSS?

          shimingfei shimingfei added a comment -

          I only have the Chinese version currently; I don't know whether there is an English version, but I will confirm with the Aliyun folks about that.
          As for the proxy: yes, it is a common proxy used to access Aliyun services.

          eddyxu Lei (Eddy) Xu added a comment -

          Hi Chris Nauroth, I found the English version of the documents here: http://intl.aliyun.com/docs?spm=a3c0i.6010000.66003.24.o7PtZ9#/pub/oss_en_us
          raynow Ling Zhou added a comment -

          Code for OSS filesystem

          guoxu1231 Shawn Guo added a comment -

          Hi Guys

          I just searched the Hadoop JIRA list and found this JIRA task about Aliyun OSS & Hadoop integration.
          I have a similar subproject in the incubation stage, with similar functionality to this patch.
          @shimingfei, I'm wondering if we could collaborate on this? Do you have an email? We could discuss the details.

          Thanks

          hitliuyi Yi Liu added a comment -

          Thanks to Mingfei and Lei for the work.

          Hi Steve Loughran and Chris Nauroth, regarding testability: they have talked with me offline. Aliyun created an account for testing and retained it for Hadoop; they want to pass the username/password through "-D" on the mvn command line, so the basic functionality can be verified by unit tests. Does this make sense to you?

          Mingfei and Lei:
          About the ali-oss client: does it rely on a different version of httpclient? Could we use the version Hadoop is using?

          I will post my detailed comments later.

          hitliuyi Yi Liu added a comment -

          Also, the name "oss" is an abbreviation of Object Store Service, which is too generic. I think we need to change the name to ali-oss or some other name so that people can understand what it is at first glance.

          raynow Ling Zhou added a comment -

          Hi Yi Liu,
          The ali-oss client does rely on a higher version of httpclient; it is required by the Aliyun OSS SDK. Currently it uses version 4.4, while Hadoop uses version 4.2.5, which does not work for the OSS SDK. As for the name "oss", we will change it according to your suggestion.

          Hi Steve Loughran,
          We have read the filesystem specification docs; this implementation is similar to S3A, so operations like rename and delete are still not atomic. Aliyun OSS is much like a general object storage system, except that it is strongly consistent.

          stevel@apache.org Steve Loughran added a comment -
          1. I agree: name change, especially as OSS is also the acronym for "Open Source Software". Make it hadoop-aliyun (or some other obvious name). Keeping the scope bigger than just OSS allows more features for the platform to go in later.
          2. There's an open JIRA on upgrading httpcomponents, HADOOP-12767; I'm expecting this to go into Hadoop 2.9, which is what this patch can target (hence: work directly against hadoop trunk for your dev & patches, not branch-2.8).
          3. All version dependencies must be declared in hadoop-project/pom.xml; it's how we make sure versions are consistent.
          4. Regarding passing down usernames, this must be done via the test/resources/auth-keys.xml file. Look at the aws or openstack modules to see how the tests are automatically skipped if it's undefined. See also how to keep your credentials private. Using the Hadoop XML files also lets you test credential provider integration, which we'll expect as well.
          5. Have a look at the s3a work, especially the items in phase I, stabilisation (HADOOP-11571); make sure the patch avoids those same problems (e.g. how to close vs. abort streams, swallowing FileNotFoundExceptions during the final delete phase). S3a phase II (HADOOP-11694) contains some other bugs, but is otherwise performance work. It's probably best to wait one iteration before doing the performance version; get things stable first.
          6. Nice to see all the tests!
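          The auth-keys.xml pattern Steve describes in point 4 looks roughly like the sketch below. The file is deliberately absent from source control, and its absence is what makes the contract tests skip themselves; the property names here are illustrative, adapted from the s3a tests, not the actual keys this patch defines.

```xml
<!-- src/test/resources/auth-keys.xml: never checked in; each developer
     supplies their own copy with real credentials. -->
<configuration>
  <property>
    <name>test.fs.oss.name</name>
    <value>oss://your-test-bucket/</value>
  </property>
  <property>
    <name>fs.oss.accessKeyId</name>
    <value>YOUR_ACCESS_KEY_ID</value>
  </property>
  <property>
    <name>fs.oss.accessKeySecret</name>
    <value>YOUR_ACCESS_KEY_SECRET</value>
  </property>
</configuration>
```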
          raynow Ling Zhou added a comment -

          Thank you for your comments Steve, they are very helpful.
          1. The name OSS does have many meanings, so we will use hadoop-aliyun or some other name to replace hadoop-oss.
          2. We will work with the latest hadoop trunk and look for approaches to solve the http-client dependency conflict.
          3. We will make sure all dependencies are declared in hadoop-project/pom.xml.
          4. This implementation differs a little from the aws module; we will talk to Yi offline about the details.
          5. Yes, stability comes first, and performance work can be done in the next phase. For now we will focus on stability and learn more from the s3a work.

          stevel@apache.org Steve Loughran added a comment -

          Thanks, this is a good roadmap. Don't expect everything to be perfect before it goes in ... that first release should stay "stabilising".

          One thing that would be good in that first release is proxy support ... now that I no longer work behind a proxy I forget its problems, but it's something that anyone stuck behind an enterprise firewall probably needs.

          drankye Kai Zheng added a comment -

          When providing the new revision, please also stick to the coding style, use the standard patch-name pattern, and submit it (to trigger the Jenkins build). Thanks.

          lingzhou Ling Zhou added a comment -

          Hi everyone. Our patch is updated; the code is based on the latest hadoop trunk.
          1. OSS keys are passed via the test/resources/auth-keys.xml file, the same way as for S3A.
          2. The module is renamed to hadoop-aliyun.
          3. Since httpclient in hadoop trunk has been updated to 4.5.2, the dependency conflict is solved.

          We have prepared an individual Aliyun OSS account to test this patch. But how should the account info and other test properties be configured so that the tests can be properly triggered?

          Thank you for your response.

          drankye Kai Zheng added a comment -

          But how will the account info and other test properties be configured so that the tests can be properly triggered?

          Does this info change? Does the account info need to be protected, or is it suitable to be exposed to the open-source world?

          If the Aliyun keys can be configured in the test resource file, why isn't this info suitable to be handled the same way?

          shimingfei shimingfei added a comment -

          Hi Kai,
          The credential info doesn't change; it was created for the Hadoop community to do functional testing, and we can't expose it to all Hadoop users.

          drankye Kai Zheng added a comment -

          Thanks Mingfei.

          The problem is it's hard to distinguish Hadoop users from Hadoop community developers.

          The bundle of code should be automatically tested by Yetus to ensure it's well maintained from release to release. The required account credentials must live somewhere in configuration or scripts, and there is no mechanism to protect them from being seen. IMO, Aliyun should be aware of this and OK with the info being used this way in the open-source project when providing it. Otherwise, I'm not aware of any way it could proceed.

          cnauroth Chris Nauroth added a comment -

          The situation is similar to the testing challenges we face for other similar modules: hadoop-aws, hadoop-azure and hadoop-openstack. Right now, these modules don't really get tested during pre-commit, except for a few true unit tests that don't require integration with an external service. Instead, it's up to reviewers to get credentials to the external service, either using a personal account or an employer's account, and configure the tests to run from their development environments.

          There is enough activity happening in these modules that it would be in our best interest to get the tests running from pre-commit. I haven't put much thought into how this would work. Kai is right that the credentials need to go somewhere accessible by each Jenkins host that runs a Hadoop pre-commit build. However, we wouldn't want those credentials accessible by the whole Internet or really even the whole Apache contributor community. I expect it will take coordination with the Apache infra team to get this done correctly.

          lingzhou Ling Zhou added a comment -

          Thanks Kai.

          There are two configuration files: test/resources/contract-test-options.xml and test/resources/auth-keys.xml.
          It seems these two files should not be included in the source code, as .gitignore already excludes them. Maybe we can provide these two files separately?

          drankye Kai Zheng added a comment -

          Thanks Chris Nauroth for documenting the current situations for existing similar modules. It's very helpful!
          I thought the new module could also follow the existing pattern for the short term.

          I expect it will take coordination with the Apache infra team to get this done correctly.

          A simple way to achieve this would be to have a dedicated host (or VM) equipped with all these credentials that runs all the tests daily. If this sounds good, I can file an INFRA JIRA asking for the support.

          drankye Kai Zheng added a comment -

          Submitted the patch to trigger the building.

          cnauroth Chris Nauroth added a comment -

          I thought the new module could also follow the existing pattern for the short term.

          Yes, I agree. I don't think a larger infra solution needs to be tied directly to this patch.

          A simple way to achieve this would be to have a dedicated host (or VM) equipped with all these credentials that runs all the tests daily.

          This would be nice, but I think pre-commit would be the big win for the community. That would save a lot of time for those of us currently doing long test runs on our dev machines verifying patches on those modules. I'd recommend a wider conversation on the dev mailing lists before filing any specific requests to infra.

          hitliuyi Yi Liu added a comment -

          Agree with Chris Nauroth. The credentials need to go somewhere accessible by each Jenkins host that runs a Hadoop pre-commit build.

          have a dedicated host (or VM) equipped with all these credentials that runs all the tests daily
          

          Kai, I think it's not about finding a dedicated host; instead, we need to make the auth-keys.xml available on all the Jenkins hosts that run the Hadoop pre-commit build. I'm not sure whether it's easy for INFRA to support this.

          It seems these two files should not be included in the source code, as .gitignore already excludes them. Maybe we can provide these two files separately?
          

          Ling Zhou, please don't include the credentials in the patch. That's not expected.

          hitliuyi Yi Liu added a comment -

          I'd recommend a wider conversation on the dev mailing lists before filing any specific requests to infra.

          +1 for this.

          Another thing about "auth-keys.xml": currently we use a credential file instead of normal Hadoop configuration properties. I think the reason is that it's more secure, since the user can control the Linux file permissions of "auth-keys.xml". Could we also allow normal Hadoop configuration properties for the credentials? Then we could specify the credentials through the mvn build command line, which could be more easily supported by INFRA, while users can still use "auth-keys.xml" in practice.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 17s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 14 new or modified test files.
          0 mvndep 0m 40s Maven dependency ordering for branch
          +1 mvninstall 7m 45s trunk passed
          +1 compile 7m 22s trunk passed
          +1 checkstyle 1m 22s trunk passed
          +1 mvnsite 8m 38s trunk passed
          +1 mvneclipse 0m 48s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-tools .
          +1 findbugs 0m 0s trunk passed
          -1 javadoc 4m 8s root in trunk failed.
          0 mvndep 0m 16s Maven dependency ordering for patch
          -1 mvninstall 0m 11s hadoop-aliyun in the patch failed.
          -1 mvninstall 1m 2s hadoop-tools in the patch failed.
          -1 mvninstall 6m 21s root in the patch failed.
          +1 compile 6m 26s the patch passed
          -1 javac 6m 26s root generated 1 new + 697 unchanged - 0 fixed = 698 total (was 697)
          -1 checkstyle 1m 22s root: The patch generated 115 new + 0 unchanged - 0 fixed = 115 total (was 0)
          +1 mvnsite 8m 21s the patch passed
          +1 mvneclipse 0m 39s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 7s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-tools .
          +1 findbugs 0m 29s the patch passed
          -1 javadoc 4m 0s root in the patch failed.
          -1 unit 156m 21s root in the patch failed.
          +1 asflicense 0m 26s The patch does not generate ASF License warnings.
          217m 51s



          Reason Tests
          Failed junit tests hadoop.mapreduce.tools.TestCLI
            hadoop.yarn.server.resourcemanager.TestRMRestart
            hadoop.yarn.server.resourcemanager.security.TestClientToAMTokens
            hadoop.yarn.server.resourcemanager.TestClientRMTokens
            hadoop.yarn.server.resourcemanager.TestAMAuthorization



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:2c91fd8
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12806548/HADOOP-12756-v02.patch
          JIRA Issue HADOOP-12756
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle
          uname Linux 5c22c983fea8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / cfb860d
          Default Java 1.8.0_91
          javadoc https://builds.apache.org/job/PreCommit-HADOOP-Build/9601/artifact/patchprocess/branch-javadoc-root.txt
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/9601/artifact/patchprocess/patch-mvninstall-hadoop-tools_hadoop-aliyun.txt
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/9601/artifact/patchprocess/patch-mvninstall-hadoop-tools.txt
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/9601/artifact/patchprocess/patch-mvninstall-root.txt
          javac https://builds.apache.org/job/PreCommit-HADOOP-Build/9601/artifact/patchprocess/diff-compile-javac-root.txt
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/9601/artifact/patchprocess/diff-checkstyle-root.txt
          javadoc https://builds.apache.org/job/PreCommit-HADOOP-Build/9601/artifact/patchprocess/patch-javadoc-root.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/9601/artifact/patchprocess/patch-unit-root.txt
          unit test logs https://builds.apache.org/job/PreCommit-HADOOP-Build/9601/artifact/patchprocess/patch-unit-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9601/testReport/
          modules C: hadoop-project hadoop-tools/hadoop-aliyun hadoop-tools . U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9601/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          aw Allen Wittenauer added a comment -

          The jenkins servers should not be considered secure servers.

          There are several hundred people that have direct access to them via the jenkins ui and several thousand people via precommit. Keep in mind that the whole point behind precommit is that it runs arbitrary code; a patch may contain any sort of change that may get installed and executed via maven. All precommit jobs across all projects run as the same user so UNIX file permissions aren't going to help you here either.

          In other words, if precommit has access to it, so does everyone else on the Internet.

          stevel@apache.org Steve Loughran added a comment -

          I concur with Allen Wittenauer: if a jenkins server is running submitted code, then it is precisely one patch submission away from having credentials leaked.

          There's a set of problems that need to be addressed when working with object stores

          1. Development: does your own code work?
          2. Patch review: does a newly submitted patch work?
          3. Regression testing: does the branch/trunk work?

          *Development* Development in a module for a specific infra (aws, openstack, azure) obviously requires the credentials to test there. More subtly, changes to the filesystem APIs and tests need testing too. In HADOOP-13207, for example, I have to test all implementations of an abstract contract test: local, rawlocal, HDFS, s3a, azure.

          *Patch review* This is what makes reviewing object store patches hard. The reviewer needs to have the credentials, and must first pre-scan the patch to make sure it doesn't leak information (that covers both malicious attacks and simply over-zealous logging). Then they need to do a test run, which takes about 30-60 minutes, which is why it's pretty frustrating if the patch fails. Hence the policy: nobody will look at your patch until you declare which infra your tests successfully completed against. It forces the developers to apply due diligence.

          Maybe, just maybe, this could be partially automated. As an example, in Spark PRs a set of committers can add a comment, "Jenkins, test this", and the UCB Jenkins engine will run a test. If we could do something like that, with a patch test only kicking off after human intervention, we could improve patch review.

          *Regression testing*

          This is an area where a private Jenkins instance with the credentials can contribute: nightly test runs of the object store module(s), and a process for reacting to failures. We do this internally a lot, where the escalation process is: someone gets to fix the failure. It's that escalation process which needs to be set up; it's not enough for a private Jenkins machine/VM to send emails saying a test run failed, it needs people on the developer lists who care and can react. That means you get to stay on the dev lists. Welcome!

          Note that in SPARK-7481 I'm adding end-to-end testing through Spark; you can see it at work by comparing an s3a test run with the Hadoop 2.6 profile vs. the hadoop-2.7 one. The 2.6 run is clearly broken; if we'd had those tests up earlier, that would have been clear at the time. I'm designing that module to be extensible; once it's in, adding dependencies and tests for a new FS should be straightforward.

          drankye Kai Zheng added a comment -

          Uploading the updated patch on behalf of Mingfei and Ling.

          lingzhou Ling Zhou added a comment -

          Thanks Kai,
          Patch is updated.
          1. Resolve commons-beanutils dependency conflict.
          2. Update pom in hadoop-tools-dist.
          3. Fix coding style issues to pass checkstyle checks.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          -1 docker 0m 5s Docker failed to build yetus/hadoop:2c91fd8.



          Subsystem Report/Notes
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12808594/HADOOP-12756.003.patch
          JIRA Issue HADOOP-12756
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9676/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          drankye Kai Zheng added a comment -

          Uploaded the updated patch for Ling.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 35s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 14 new or modified test files.
          0 mvndep 0m 16s Maven dependency ordering for branch
          +1 mvninstall 8m 8s trunk passed
          +1 compile 8m 35s trunk passed
          +1 checkstyle 1m 31s trunk passed
          +1 mvnsite 10m 33s trunk passed
          +1 mvneclipse 0m 43s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools .
          +1 findbugs 0m 0s trunk passed
          +1 javadoc 6m 10s trunk passed
          0 mvndep 0m 17s Maven dependency ordering for patch
          +1 mvninstall 10m 7s the patch passed
          +1 compile 7m 6s the patch passed
          +1 javac 7m 6s the patch passed
          +1 checkstyle 1m 26s the patch passed
          +1 mvnsite 9m 21s the patch passed
          +1 mvneclipse 0m 40s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 8s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project . hadoop-tools hadoop-tools/hadoop-tools-dist
          +1 findbugs 0m 29s the patch passed
          +1 javadoc 5m 13s the patch passed
          -1 unit 11m 26s root in the patch failed.
          +1 asflicense 0m 19s The patch does not generate ASF License warnings.
          83m 48s



          Reason Tests
          Failed junit tests hadoop.metrics2.impl.TestGangliaMetrics



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:2c91fd8
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12808654/HADOOP-12756.004.patch
          JIRA Issue HADOOP-12756
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle
          uname Linux 144154f2d3a6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / e620530
          Default Java 1.8.0_91
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/9678/artifact/patchprocess/patch-unit-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9678/testReport/
          modules C: hadoop-project hadoop-tools/hadoop-aliyun . hadoop-tools hadoop-tools/hadoop-tools-dist U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9678/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          drankye Kai Zheng added a comment -

          Have looked at the patch, some comments about the codes:

          1. There are lots of configuration items introduced in Constants. It would be great if we could follow the pattern used in CommonConfigurationKeys to name the property keys. For example, in

           // Time until we give up on a connection to oss
           public static final String SOCKET_TIMEOUT = "fs.oss.connection.timeout";
           public static final int DEFAULT_SOCKET_TIMEOUT = 200000;
          

          SOCKET_TIMEOUT -> SOCKET_TIMEOUT_KEY
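For illustration, the renamed constants might look like the following sketch (the class name OssConfigKeys and the *_DEFAULT naming are hypothetical, not the patch's actual code):

```java
// Hypothetical sketch of the renaming suggested above, following the
// CommonConfigurationKeys pattern: property-name constants carry a _KEY
// suffix, and each is paired with a matching default constant.
final class OssConfigKeys {
  // Time until we give up on a connection to OSS
  static final String SOCKET_TIMEOUT_KEY = "fs.oss.connection.timeout";
  static final int SOCKET_TIMEOUT_DEFAULT = 200000;

  private OssConfigKeys() {
    // constants holder, not instantiable
  }
}
```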

          2. As discussed above, better to: org.apache.hadoop.fs.oss -> org.apache.hadoop.fs.aliyun.oss, OSSFileSystem -> AliyunOssFileSystem.

          3. In the delete operation:

                if (!recursive) {
                  FileStatus[] statuses = listStatus(status.getPath());
                  if (statuses.length > 0) {
                    throw new IOException("Cannot remove " + path + ": Is a directory!");
                  } else {
                    // Delete empty directory without '-r'
                    ossClient.deleteObject(bucketName, key);
                    statistics.incrementWriteOps(1);
                  }
                } else {
                   // A large block here 
                }
          

The exception message could be more specific, like: "Cannot remove the directory, it's not empty".
Also, the comment "Delete empty directory without '-r'" should be moved a little higher.
The large block of code that deletes the directory with '-r' would be better moved to a separate function.
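A rough sketch of that restructuring, using stand-in types (OssClientLike, a childCount parameter) rather than the patch's real OSS client and FileStatus plumbing:

```java
import java.io.IOException;

// Illustrative sketch only: the non-recursive case stays inline, while
// the large recursive-delete block moves into its own helper method.
class DeleteSketch {
  interface OssClientLike {
    void deleteObject(String bucket, String key);
    String[] listKeysUnder(String key);
  }

  private final OssClientLike ossClient;
  private final String bucketName;

  DeleteSketch(OssClientLike client, String bucket) {
    this.ossClient = client;
    this.bucketName = bucket;
  }

  boolean deleteDirectory(String key, boolean recursive, int childCount)
      throws IOException {
    if (!recursive) {
      if (childCount > 0) {
        // More specific message than "Is a directory!"
        throw new IOException("Cannot remove directory " + key
            + ": it is not empty");
      }
      // Delete empty directory without '-r'
      ossClient.deleteObject(bucketName, key);
      return true;
    }
    return deleteDirectoryRecursively(key);
  }

  // The "large block" extracted into a separate function.
  private boolean deleteDirectoryRecursively(String key) {
    for (String child : ossClient.listKeysUnder(key)) {
      ossClient.deleteObject(bucketName, child);
    }
    ossClient.deleteObject(bucketName, key);
    return true;
  }
}
```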

              //TODO: optimize logic here
              try {
                Path pPath = status.getPath().getParent();
                FileStatus pStatus = getFileStatus(pPath);
                if (pStatus.isDirectory()) {
                  return true;
                } else {
                  throw new IOException("Path " + pPath +
                      " is assumed to be a directory!");
                }
              } catch (FileNotFoundException fnfe) {
                return mkdir(bucketName, pathToKey(status.getPath().getParent()));
              }
          

Could you explain this post-processing logic that runs after the file/directory object has been deleted successfully? Why could the parent directory not be found, and why do we need to do the mkdir here?

          4. Do you want to rename validatePath to validatePathForMkdir, as indicated by the thrown exception:

                    throw new FileAlreadyExistsException(String.format(
                        "Can't make directory for path '%s' since it is a file.", fPart));
          

5. Please also check the code in other operations, like rename, for similar issues.

6. Could we move the very AliyunOSS-specific logic into a utility class, to simplify the code in the file system implementation and the input/output streams?

          The tests look complete and great. Thanks!

          shimingfei shimingfei added a comment - - edited

          Kai Zheng Thanks for your suggestions! I have modified the code according to your comments:

          1. Done
          2. Done
3. The check is used to make sure that when a file is deleted, its parent directory still exists. For example, when a file named "/temp/tests/test0" is deleted and the parent directory "/temp/tests/" does not exist, it will be created as the parent directory.
4. We keep the method name unchanged; it is used to check whether the path is valid. If "/temp/tests" exists and its size is not zero, then "/temp/tests/test0" is not a valid file name.
6. We don't have strong intentions to do that currently, because most of the logic cannot be reused, and the utility functions would need many parameters.

          stevel@apache.org Steve Loughran added a comment -

          I'm really pleased with the progress here —and have to apologise for not putting in the time it deserves reviewing it: Kai Zheng has been handling that well.

How should we progress here? I think we should consider some intermediate milestone for it to be ready for a feature branch, then iterate on that for broader testing and review. It's never going to be "perfect" when it goes in, as there's always scope for improvement. I think we should initially be happy that it all builds and runs, that the contract tests pass, and that initial downstream runs don't find problems, and then do what we've done with the other object stores: ship it and see what surfaces in the field. At a guess, it'll be authentication.

          codewise,

          • do make sure that FileSystem.close() always works, even before initialize() is invoked. I'm not sure that holds right now.
• I think the password pickup from the config should support CredentialProviders, the way that S3A does. That provides a significantly more secure option, and having it there from the outset lets the docs recommend it from the outset.
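To illustrate the second point: Hadoop's Configuration.getPassword() consults any configured credential providers before falling back to the plain config value. A toy stand-in of that lookup order (maps instead of the real Configuration API; all names here are illustrative):

```java
import java.util.Map;

// Stand-in demonstrating why CredentialProvider-backed lookup is safer:
// the secret is resolved from a credential store first, and the plain
// (cleartext) config entry is only a fallback.
final class CredentialLookupSketch {
  static String lookupSecret(Map<String, String> credentialStore,
                             Map<String, String> plainConf, String key) {
    // Prefer the credential provider; fall back to the raw config value.
    String fromStore = credentialStore.get(key);
    return fromStore != null ? fromStore : plainConf.get(key);
  }

  private CredentialLookupSketch() {
  }
}
```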
          shimingfei shimingfei added a comment -

          Steve Loughran Thanks for your useful suggestions. I have updated the code.

          Two main changes:
1. Make sure ossClient is not null before calling close() on it, and make sure super.close() is always called.
2. Add CredentialProvider support, just like S3A.
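A minimal sketch of change 1, with hypothetical stand-in types rather than the real AliyunOSSFileSystem and OSS client:

```java
import java.io.IOException;

// Illustrative only: close() must be safe to call even before
// initialize() has run (ossClient still null), and the superclass
// close must happen in all cases.
class CloseSketch {
  interface ClientLike {
    void shutdown();
  }

  static class SafeFs {
    ClientLike ossClient; // stays null if initialize() was never called
    boolean superCloseCalled;

    public void close() throws IOException {
      try {
        // Guard against a null client.
        if (ossClient != null) {
          ossClient.shutdown();
        }
      } finally {
        // Stand-in for super.close(); must run even if shutdown throws.
        superCloseCalled = true;
      }
    }
  }
}
```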

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 29s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 14 new or modified test files.
          0 mvndep 0m 12s Maven dependency ordering for branch
          +1 mvninstall 7m 27s trunk passed
          +1 compile 7m 32s trunk passed
          +1 checkstyle 1m 21s trunk passed
          +1 mvnsite 9m 46s trunk passed
          +1 mvneclipse 0m 49s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools .
          +1 findbugs 0m 0s trunk passed
          +1 javadoc 6m 2s trunk passed
          0 mvndep 0m 23s Maven dependency ordering for patch
          +1 mvninstall 11m 0s the patch passed
          +1 compile 7m 33s the patch passed
          +1 javac 7m 33s the patch passed
          -0 checkstyle 1m 30s root: The patch generated 7 new + 0 unchanged - 0 fixed = 7 total (was 0)
          +1 mvnsite 9m 6s the patch passed
          +1 mvneclipse 0m 49s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 10s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools .
          +1 findbugs 0m 36s the patch passed
          +1 javadoc 4m 51s the patch passed
          -1 unit 33m 32s root in the patch failed.
          +1 asflicense 0m 27s The patch does not generate ASF License warnings.
          124m 30s



          Reason Tests
          Failed junit tests hadoop.crypto.key.kms.server.TestKMS



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:85209cc
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12813023/HADOOP-12756.006.patch
          JIRA Issue HADOOP-12756
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle
          uname Linux 88120e562b6a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 6314843
          Default Java 1.8.0_91
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/9871/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/9871/artifact/patchprocess/patch-unit-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9871/testReport/
          modules C: hadoop-project hadoop-tools/hadoop-aliyun hadoop-tools/hadoop-tools-dist hadoop-tools . U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9871/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 31s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 14 new or modified test files.
          0 mvndep 0m 11s Maven dependency ordering for branch
          +1 mvninstall 6m 25s trunk passed
          +1 compile 6m 33s trunk passed
          +1 checkstyle 1m 21s trunk passed
          +1 mvnsite 8m 27s trunk passed
          +1 mvneclipse 1m 7s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools .
          +1 findbugs 0m 0s trunk passed
          +1 javadoc 4m 34s trunk passed
          0 mvndep 0m 18s Maven dependency ordering for patch
          +1 mvninstall 8m 9s the patch passed
          +1 compile 6m 36s the patch passed
          +1 javac 6m 36s the patch passed
          -0 checkstyle 1m 23s root: The patch generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0)
          +1 mvnsite 8m 22s the patch passed
          +1 mvneclipse 0m 52s the patch passed
          -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix.
          +1 xml 0m 8s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project . hadoop-tools hadoop-tools/hadoop-tools-dist
          +1 findbugs 0m 35s the patch passed
          -1 javadoc 4m 39s root generated 1 new + 11566 unchanged - 0 fixed = 11567 total (was 11566)
          -1 unit 32m 41s root in the patch failed.
          +1 asflicense 0m 27s The patch does not generate ASF License warnings.
          114m 46s



          Reason Tests
          Failed junit tests hadoop.crypto.key.kms.server.TestKMS



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:85209cc
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12813029/HADOOP-12756.006.patch
          JIRA Issue HADOOP-12756
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle
          uname Linux beca28ccabf6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 6314843
          Default Java 1.8.0_91
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/9872/artifact/patchprocess/diff-checkstyle-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/9872/artifact/patchprocess/whitespace-eol.txt
          javadoc https://builds.apache.org/job/PreCommit-HADOOP-Build/9872/artifact/patchprocess/diff-javadoc-javadoc-root.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/9872/artifact/patchprocess/patch-unit-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9872/testReport/
          modules C: hadoop-project hadoop-tools/hadoop-aliyun . hadoop-tools hadoop-tools/hadoop-tools-dist U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9872/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          cnauroth Chris Nauroth added a comment -

          +1 for the idea of a feature branch. That will give you an easier time iterating on the code and improving it. I'd be happy to help set that up. Let me know if you'd like to proceed.

          Just reinforcing some of my earlier comments, we consider it very important to have documentation in place and a way for committers to run the contract tests integrated with the live service. Without those in place, long-term maintenance of this codebase is unlikely to succeed. I would ask for those items to be completed before voting +1 on a branch merge to trunk.

          drankye Kai Zheng added a comment -

Thanks Steve Loughran and Chris Nauroth for sharing your thoughts! You both asked for a feature branch for this, and according to the latest discussion in the community, I think we should have one. I'm not sure the developers here are familiar with the approach, so let me explain it to them offline first. I personally feel that, with a feature branch, it could be even faster to get everything ready and delivered.

          A question could you help clarify, Chris:

          a way for committers to run the contract tests integrated with the live service.

According to our previous discussion, a committer preparing to commit the code (now, to merge the branch) could run the tests the way we do for the existing cloud modules. For the long term, we need to think about a complete solution for such live-service integrations; it looks like Steve Loughran already has some ideas and even some work on this.

          cnauroth Chris Nauroth added a comment -

According to our previous discussion, a committer preparing to commit the code (now, to merge the branch) could run the tests the way we do for the existing cloud modules. For the long term, we need to think about a complete solution for such live-service integrations; it looks like Steve Loughran already has some ideas and even some work on this.

          Kai Zheng, to clarify, I am asking that there be a way for committers who have Aliyun cloud credentials to run the contract tests live against the real Aliyun Object Storage Service. I am not demanding that we go beyond that (like a true pre-commit solution) within the scope of the Aliyun integration effort.

          Another way to think of this is that I am asking for the Aliyun integration effort to provide equivalent testing support as our other supported alternative file systems, like WASB and S3A. Currently, that means use of the standard contract tests with a capability to configure credentials and run them from a developer environment integrated with the corresponding back-end service.

          Thank you!

          drankye Kai Zheng added a comment -

Thanks Chris for the clarification and further thoughts. It sounds good to me.

          shimingfei shimingfei added a comment -

Kai Zheng Chris Nauroth Steve Loughran Thanks for your suggestions. It's great that we're creating a branch for the OSS integration, and we can do follow-up optimizations on that branch. Kai, we can talk about the next steps offline.

          drankye Kai Zheng added a comment -

          Hi Chris Nauroth,

I have had some offline discussions with shimingfei, and it looks very good to have a branch for this feature, as you and Steve suggested. Would you proceed to create it? Kindly let me know when we have it. Thanks for taking this on!

          cnauroth Chris Nauroth added a comment -

          Glad to help! I have created a new feature branch in git and a new fix version in JIRA, both named "HADOOP-12756".

          drankye Kai Zheng added a comment -

          Thanks Chris!

          uncleGen Genmao Yu added a comment -

Glad to see the progress. I have also done some work on supporting OSS in Hadoop. There are some stability problems we should pay attention to, including but not limited to:
1. OSS closes long-lived connections (> 3 hours) and idle connections (> 1 minute), and both are pretty common.
2. The 'copy' operation is time-consuming, so we could reuse the existing Job/Task execution logic, i.e. copy the temporary result from the temp directory to the final directory.

I will open another JIRA to fix the above-mentioned issues based on your work. What are your suggestions? shimingfei, Kai Zheng

          uncleGen Genmao Yu added a comment -

          One more small usability suggestion: we can define an OSS URI like oss://accessKeyId:accessKeySecret@bucket.endpoint/path/to/file. It may be more convenient than setting the credentials in the Hadoop configuration. As it is really a small improvement, we can do it in this JIRA.

          shimingfei shimingfei added a comment -

          Do you suggest getting credential information from the URI? I think the current code has handled that case: in the getCredentialProvider() method of AliyunOSSFileSystem, we get the user and authority information from the URI.

          uncleGen Genmao Yu added a comment -

          Oops, I found it.

          stevel@apache.org Steve Loughran added a comment -

          -1 to grabbing user and password from the URI for this FS. Your secrets end up being logged everywhere, as lots of code (error messages, logs, etc.) assumes that URIs are safe to print.

          We tried in HADOOP-3733 to strip out user:pass logging, but it doesn't work. Instead it does best effort, tells the user off, and warns that it'll be removed in future versions.

          If the current version of this patch does grab user:password, that'll need to be cut out. We don't want to repeat the same security risk. Sorry.

          What would be good is to use Configuration.getPassword() to hook into the credential management in Hadoop; the secrets will then be encrypted outside the file.

          It could also be good to support the ability to specify passwords in the config as an optional s3a.login.${endpoint}.${bucket}.user= and the same for password= values. That way you can have the config files set up with different logins for different accounts, and so do cross-account distcp work.

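          Steve's suggestion can be sketched as a Hadoop configuration fragment. This is illustrative only: the fs.oss.* property names here are assumptions, not taken from the patch, while hadoop.security.credential.provider.path is the standard hook that Configuration.getPassword() consults. The secret keys themselves would live in the encrypted jceks store (created with the hadoop credential create CLI), not in the XML file.

          ```xml
          <configuration>
            <!-- Point Configuration.getPassword() at an encrypted credential store
                 instead of keeping secrets in plain text or in the URI. -->
            <property>
              <name>hadoop.security.credential.provider.path</name>
              <value>jceks://file/etc/hadoop/oss.jceks</value>
            </property>
            <!-- Non-secret settings can stay in the plain config; the access key id
                 and secret are fetched from the jceks store at runtime, e.g. via
                 conf.getPassword("fs.oss.accessKeyId") (property name hypothetical). -->
            <property>
              <name>fs.oss.endpoint</name>
              <value>oss-cn-hangzhou.aliyuncs.com</value>
            </property>
          </configuration>
          ```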
          shimingfei shimingfei added a comment -

          Thanks Steve Loughran.
          Yes, it is a potential problem. We will remove the logic that gets credentials from the URL, and will also support config files that set up logins for different accounts.

          uncleGen Genmao Yu added a comment - - edited

          Yep, we met the same problem and attempted to strip the accessKeyId/accessKeySecret information, but indeed it did not work properly. For better security, we need to cut the accessKeyId/accessKeySecret out of the OSS URI.

          uncleGen Genmao Yu added a comment - - edited

          shimingfei IMHO, when doing the 'multipartUploadObject' operation in the 'AliyunOSSOutputStream' class, the part number must be less than or equal to 10,000, so the part size needs to be determined by both 'fs.oss.multipart.upload.size' and the part-number upper limit (currently 10,000). See the doc here.

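          The constraint Genmao describes can be sketched as follows. This is a minimal illustration, not code from the patch: the class and method names are hypothetical, and the only assumption taken from the comment is the 10,000-part cap on OSS multipart uploads.

          ```java
          // Sketch: grow the configured multipart size when a file is so large
          // that it would otherwise exceed OSS's 10,000-part limit.
          public class PartSize {
              static final long MAX_PARTS = 10000L;

              // Returns a part size no smaller than the configured one that keeps
              // the part count within MAX_PARTS for the given content length.
              static long effectivePartSize(long contentLength, long configuredPartSize) {
                  long minPartSize = (contentLength + MAX_PARTS - 1) / MAX_PARTS; // ceiling division
                  return Math.max(configuredPartSize, minPartSize);
              }

              public static void main(String[] args) {
                  // A 1 TB file with a 10 MB configured part size would need far more
                  // than 10,000 parts, so the effective part size must grow.
                  long oneTb = 1L << 40;
                  long tenMb = 10L * 1024 * 1024;
                  System.out.println(effectivePartSize(oneTb, tenMb));
              }
          }
          ```

          With this adjustment, the configured 'fs.oss.multipart.upload.size' acts as a lower bound rather than a fixed value, so very large uploads still stay within the service limit.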
          shimingfei shimingfei added a comment -

          Good catch! It is a potential problem.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 14s Docker mode activated.
          +1 @author 0m 1s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 14 new or modified test files.
          0 mvndep 0m 13s Maven dependency ordering for branch
          +1 mvninstall 6m 35s trunk passed
          +1 compile 7m 13s trunk passed
          +1 checkstyle 1m 30s trunk passed
          +1 mvnsite 11m 14s trunk passed
          +1 mvneclipse 2m 9s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools .
          +1 findbugs 0m 0s trunk passed
          +1 javadoc 5m 25s trunk passed
          0 mvndep 0m 8s Maven dependency ordering for patch
          -1 mvninstall 0m 8s root in the patch failed.
          -1 mvninstall 0m 8s hadoop-tools in the patch failed.
          -1 mvninstall 0m 6s hadoop-aliyun in the patch failed.
          -1 mvninstall 0m 11s hadoop-tools-dist in the patch failed.
          -1 compile 0m 8s root in the patch failed.
          -1 javac 0m 8s root in the patch failed.
          +1 checkstyle 0m 10s the patch passed
          -1 mvnsite 0m 10s root in the patch failed.
          -1 mvneclipse 0m 9s root in the patch failed.
          -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 xml 0m 8s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: . hadoop-project hadoop-tools hadoop-tools/hadoop-tools-dist
          -1 findbugs 0m 10s hadoop-aliyun in the patch failed.
          -1 javadoc 0m 12s root in the patch failed.
          -1 unit 0m 12s root in the patch failed.
          0 asflicense 0m 13s ASF License check generated no output?
          58m 21s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12821189/HADOOP-12756.007.patch
          JIRA Issue HADOOP-12756
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle
          uname Linux 4847a06e169d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 3d191cc
          Default Java 1.8.0_101
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/10130/artifact/patchprocess/patch-mvninstall-root.txt
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/10130/artifact/patchprocess/patch-mvninstall-hadoop-tools.txt
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/10130/artifact/patchprocess/patch-mvninstall-hadoop-tools_hadoop-aliyun.txt
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/10130/artifact/patchprocess/patch-mvninstall-hadoop-tools_hadoop-tools-dist.txt
          compile https://builds.apache.org/job/PreCommit-HADOOP-Build/10130/artifact/patchprocess/patch-compile-root.txt
          javac https://builds.apache.org/job/PreCommit-HADOOP-Build/10130/artifact/patchprocess/patch-compile-root.txt
          mvnsite https://builds.apache.org/job/PreCommit-HADOOP-Build/10130/artifact/patchprocess/patch-mvnsite-root.txt
          mvneclipse https://builds.apache.org/job/PreCommit-HADOOP-Build/10130/artifact/patchprocess/patch-mvneclipse-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/10130/artifact/patchprocess/whitespace-eol.txt
          findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/10130/artifact/patchprocess/patch-findbugs-hadoop-tools_hadoop-aliyun.txt
          javadoc https://builds.apache.org/job/PreCommit-HADOOP-Build/10130/artifact/patchprocess/patch-javadoc-root.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/10130/artifact/patchprocess/patch-unit-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/10130/testReport/
          modules C: . hadoop-project hadoop-tools hadoop-tools/hadoop-aliyun hadoop-tools/hadoop-tools-dist U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10130/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 14s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 14 new or modified test files.
          0 mvndep 1m 46s Maven dependency ordering for branch
          +1 mvninstall 7m 47s trunk passed
          +1 compile 8m 3s trunk passed
          +1 checkstyle 1m 22s trunk passed
          +1 mvnsite 10m 2s trunk passed
          +1 mvneclipse 1m 0s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools .
          +1 findbugs 0m 0s trunk passed
          +1 javadoc 4m 44s trunk passed
          0 mvndep 0m 8s Maven dependency ordering for patch
          -1 mvninstall 0m 9s root in the patch failed.
          -1 mvninstall 0m 8s hadoop-tools in the patch failed.
          -1 mvninstall 0m 7s hadoop-aliyun in the patch failed.
          -1 mvninstall 0m 11s hadoop-tools-dist in the patch failed.
          -1 compile 0m 8s root in the patch failed.
          -1 javac 0m 8s root in the patch failed.
          +1 checkstyle 0m 9s the patch passed
          -1 mvnsite 0m 9s root in the patch failed.
          -1 mvneclipse 0m 9s root in the patch failed.
          -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 xml 0m 8s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: . hadoop-project hadoop-tools hadoop-tools/hadoop-tools-dist
          -1 findbugs 0m 9s hadoop-aliyun in the patch failed.
          -1 javadoc 0m 11s root in the patch failed.
          -1 unit 0m 12s root in the patch failed.
          0 asflicense 0m 12s ASF License check generated no output?
          58m 57s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12821191/HADOOP-12756.007.patch
          JIRA Issue HADOOP-12756
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle
          uname Linux 0ca519987e92 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 3d191cc
          Default Java 1.8.0_101
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/10131/artifact/patchprocess/patch-mvninstall-root.txt
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/10131/artifact/patchprocess/patch-mvninstall-hadoop-tools.txt
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/10131/artifact/patchprocess/patch-mvninstall-hadoop-tools_hadoop-aliyun.txt
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/10131/artifact/patchprocess/patch-mvninstall-hadoop-tools_hadoop-tools-dist.txt
          compile https://builds.apache.org/job/PreCommit-HADOOP-Build/10131/artifact/patchprocess/patch-compile-root.txt
          javac https://builds.apache.org/job/PreCommit-HADOOP-Build/10131/artifact/patchprocess/patch-compile-root.txt
          mvnsite https://builds.apache.org/job/PreCommit-HADOOP-Build/10131/artifact/patchprocess/patch-mvnsite-root.txt
          mvneclipse https://builds.apache.org/job/PreCommit-HADOOP-Build/10131/artifact/patchprocess/patch-mvneclipse-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HADOOP-Build/10131/artifact/patchprocess/whitespace-eol.txt
          findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/10131/artifact/patchprocess/patch-findbugs-hadoop-tools_hadoop-aliyun.txt
          javadoc https://builds.apache.org/job/PreCommit-HADOOP-Build/10131/artifact/patchprocess/patch-javadoc-root.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/10131/artifact/patchprocess/patch-unit-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/10131/testReport/
          modules C: . hadoop-project hadoop-tools hadoop-tools/hadoop-aliyun hadoop-tools/hadoop-tools-dist U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10131/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 14 new or modified test files.
          0 mvndep 3m 12s Maven dependency ordering for branch
          +1 mvninstall 7m 57s trunk passed
          +1 compile 7m 38s trunk passed
          +1 checkstyle 1m 31s trunk passed
          +1 mvnsite 10m 13s trunk passed
          +1 mvneclipse 0m 58s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools .
          +1 findbugs 0m 0s trunk passed
          +1 javadoc 4m 39s trunk passed
          0 mvndep 0m 8s Maven dependency ordering for patch
          -1 mvninstall 0m 8s root in the patch failed.
          -1 mvninstall 0m 7s hadoop-tools in the patch failed.
          -1 mvninstall 0m 7s hadoop-aliyun in the patch failed.
          -1 mvninstall 0m 10s hadoop-tools-dist in the patch failed.
          -1 compile 0m 9s root in the patch failed.
          -1 javac 0m 9s root in the patch failed.
          +1 checkstyle 0m 9s the patch passed
          -1 mvnsite 0m 9s root in the patch failed.
          -1 mvneclipse 0m 10s root in the patch failed.
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 8s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: . hadoop-project hadoop-tools hadoop-tools/hadoop-tools-dist
          -1 findbugs 0m 10s hadoop-aliyun in the patch failed.
          -1 javadoc 0m 11s root in the patch failed.
          -1 unit 0m 11s root in the patch failed.
          0 asflicense 0m 11s ASF License check generated no output?
          60m 20s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12821262/HADOOP-12756.007.patch
          JIRA Issue HADOOP-12756
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle
          uname Linux 66b572db7010 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 34ccaa8
          Default Java 1.8.0_101
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/10136/artifact/patchprocess/patch-mvninstall-root.txt
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/10136/artifact/patchprocess/patch-mvninstall-hadoop-tools.txt
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/10136/artifact/patchprocess/patch-mvninstall-hadoop-tools_hadoop-aliyun.txt
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/10136/artifact/patchprocess/patch-mvninstall-hadoop-tools_hadoop-tools-dist.txt
          compile https://builds.apache.org/job/PreCommit-HADOOP-Build/10136/artifact/patchprocess/patch-compile-root.txt
          javac https://builds.apache.org/job/PreCommit-HADOOP-Build/10136/artifact/patchprocess/patch-compile-root.txt
          mvnsite https://builds.apache.org/job/PreCommit-HADOOP-Build/10136/artifact/patchprocess/patch-mvnsite-root.txt
          mvneclipse https://builds.apache.org/job/PreCommit-HADOOP-Build/10136/artifact/patchprocess/patch-mvneclipse-root.txt
          findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/10136/artifact/patchprocess/patch-findbugs-hadoop-tools_hadoop-aliyun.txt
          javadoc https://builds.apache.org/job/PreCommit-HADOOP-Build/10136/artifact/patchprocess/patch-javadoc-root.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/10136/artifact/patchprocess/patch-unit-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/10136/testReport/
          modules C: . hadoop-project hadoop-tools hadoop-tools/hadoop-aliyun hadoop-tools/hadoop-tools-dist U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10136/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          uncleGen Genmao Yu added a comment -

          mingfei.shi
          mvn -Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-trunk-patch-1 -Ptest-patch -Pparallel-tests -P!shelltest -Pnative -Drequire.libwebhdfs -Drequire.snappy -Drequire.openssl -Drequire.fuse -Drequire.test.libhadoop clean test -fae
          [ERROR] The project org.apache.hadoop:hadoop-aliyun:3.0.0-alpha1-SNAPSHOT (/testptch/hadoop/hadoop-tools/hadoop-aliyun/pom.xml) has 1 error
          [ERROR] Non-resolvable parent POM: Could not find artifact org.apache.hadoop:hadoop-project:pom:3.0.0-alpha1-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 18, column 11 -> [Help 2]

          Kai Zheng added a comment -

          I have updated the HADOOP-12756 branch to sync with the latest trunk. Let's try the patch once more, reloading the same patch with new version.

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 16s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 14 new or modified test files.
          0 mvndep 0m 14s Maven dependency ordering for branch
          +1 mvninstall 7m 13s trunk passed
          +1 compile 6m 48s trunk passed
          +1 checkstyle 1m 22s trunk passed
          +1 mvnsite 9m 8s trunk passed
          +1 mvneclipse 0m 59s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools .
          +1 findbugs 0m 0s trunk passed
          +1 javadoc 4m 39s trunk passed
          0 mvndep 0m 8s Maven dependency ordering for patch
          -1 mvninstall 0m 8s root in the patch failed.
          -1 mvninstall 0m 7s hadoop-tools in the patch failed.
          -1 mvninstall 0m 7s hadoop-aliyun in the patch failed.
          -1 mvninstall 0m 10s hadoop-tools-dist in the patch failed.
          -1 compile 0m 9s root in the patch failed.
          -1 javac 0m 9s root in the patch failed.
          +1 checkstyle 0m 9s the patch passed
          -1 mvnsite 0m 9s root in the patch failed.
          -1 mvneclipse 0m 9s root in the patch failed.
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 8s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: . hadoop-project hadoop-tools hadoop-tools/hadoop-tools-dist
          -1 findbugs 0m 9s hadoop-aliyun in the patch failed.
          -1 javadoc 0m 12s root in the patch failed.
          -1 unit 0m 11s root in the patch failed.
          0 asflicense 0m 12s ASF License check generated no output?
          54m 39s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12821347/HADOOP-12756.008.patch
          JIRA Issue HADOOP-12756
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle
          uname Linux c557ac6a89df 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 9f473cf
          Default Java 1.8.0_101
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/10142/artifact/patchprocess/patch-mvninstall-root.txt
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/10142/artifact/patchprocess/patch-mvninstall-hadoop-tools.txt
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/10142/artifact/patchprocess/patch-mvninstall-hadoop-tools_hadoop-aliyun.txt
          mvninstall https://builds.apache.org/job/PreCommit-HADOOP-Build/10142/artifact/patchprocess/patch-mvninstall-hadoop-tools_hadoop-tools-dist.txt
          compile https://builds.apache.org/job/PreCommit-HADOOP-Build/10142/artifact/patchprocess/patch-compile-root.txt
          javac https://builds.apache.org/job/PreCommit-HADOOP-Build/10142/artifact/patchprocess/patch-compile-root.txt
          mvnsite https://builds.apache.org/job/PreCommit-HADOOP-Build/10142/artifact/patchprocess/patch-mvnsite-root.txt
          mvneclipse https://builds.apache.org/job/PreCommit-HADOOP-Build/10142/artifact/patchprocess/patch-mvneclipse-root.txt
          findbugs https://builds.apache.org/job/PreCommit-HADOOP-Build/10142/artifact/patchprocess/patch-findbugs-hadoop-tools_hadoop-aliyun.txt
          javadoc https://builds.apache.org/job/PreCommit-HADOOP-Build/10142/artifact/patchprocess/patch-javadoc-root.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/10142/artifact/patchprocess/patch-unit-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/10142/testReport/
          modules C: . hadoop-project hadoop-tools hadoop-tools/hadoop-aliyun hadoop-tools/hadoop-tools-dist U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10142/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Kai Zheng added a comment -

          Mingfei,

          The patch needs to be updated against HADOOP-12756 branch. The major relevant change is 3.0.0-alpha1-SNAPSHOT -> 3.0.0-alpha2-SNAPSHOT. Please check and make sure you're working/patching based on the same branch. Thanks.
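          The version mismatch shows up directly in the module POM: on the HADOOP-12756 branch the parent reference must carry the bumped version. A minimal sketch of the expected parent section (the exact coordinates are assumptions for illustration, not copied from the patch):

```xml
<!-- hadoop-tools/hadoop-aliyun/pom.xml: hypothetical excerpt -->
<parent>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-project</artifactId>
  <!-- must match the branch: 3.0.0-alpha2-SNAPSHOT, not 3.0.0-alpha1-SNAPSHOT -->
  <version>3.0.0-alpha2-SNAPSHOT</version>
</parent>
```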

          shimingfei added a comment -

          Thanks Kai,

          I have updated the patch: HADOOP-12756.007.patch.

          Thanks,
          Mingfei

          shimingfei added a comment -

          Kai Zheng
          I have rebased the patch against the latest HADOOP-12756 branch.

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 9m 4s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 14 new or modified test files.
          0 mvndep 5m 15s Maven dependency ordering for branch
          +1 mvninstall 8m 42s trunk passed
          +1 compile 8m 33s trunk passed
          +1 checkstyle 1m 40s trunk passed
          +1 mvnsite 11m 40s trunk passed
          +1 mvneclipse 2m 57s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools .
          +1 findbugs 0m 0s trunk passed
          +1 javadoc 5m 36s trunk passed
          0 mvndep 0m 24s Maven dependency ordering for patch
          +1 mvninstall 10m 26s the patch passed
          +1 compile 8m 18s the patch passed
          +1 javac 8m 18s the patch passed
          -0 checkstyle 1m 42s root: The patch generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0)
          +1 mvnsite 9m 23s the patch passed
          +1 mvneclipse 1m 6s the patch passed
          +1 whitespace 0m 1s The patch has no whitespace issues.
          +1 xml 0m 9s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project . hadoop-tools hadoop-tools/hadoop-tools-dist
          +1 findbugs 0m 38s the patch passed
          -1 javadoc 4m 57s root generated 1 new + 11509 unchanged - 0 fixed = 11510 total (was 11509)
          -1 unit 82m 34s root in the patch failed.
          -1 asflicense 0m 28s The patch generated 2 ASF License warnings.
          195m 39s



          Reason Tests
          Failed junit tests hadoop.yarn.logaggregation.TestAggregatedLogFormat



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12821501/HADOOP-12756.007.patch
          JIRA Issue HADOOP-12756
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle
          uname Linux de0785aae1a8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 9f473cf
          Default Java 1.8.0_101
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/10148/artifact/patchprocess/diff-checkstyle-root.txt
          javadoc https://builds.apache.org/job/PreCommit-HADOOP-Build/10148/artifact/patchprocess/diff-javadoc-javadoc-root.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/10148/artifact/patchprocess/patch-unit-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/10148/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HADOOP-Build/10148/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-project hadoop-tools/hadoop-aliyun . hadoop-tools hadoop-tools/hadoop-tools-dist U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10148/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 16s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 14 new or modified test files.
          0 mvndep 0m 14s Maven dependency ordering for branch
          +1 mvninstall 6m 34s trunk passed
          +1 compile 6m 46s trunk passed
          +1 checkstyle 1m 23s trunk passed
          +1 mvnsite 9m 13s trunk passed
          +1 mvneclipse 1m 0s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools .
          +1 findbugs 0m 0s trunk passed
          +1 javadoc 4m 41s trunk passed
          0 mvndep 1m 2s Maven dependency ordering for patch
          +1 mvninstall 8m 33s the patch passed
          +1 compile 6m 47s the patch passed
          +1 javac 6m 47s the patch passed
          -0 checkstyle 1m 27s root: The patch generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0)
          +1 mvnsite 9m 16s the patch passed
          +1 mvneclipse 1m 26s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 9s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project . hadoop-tools hadoop-tools/hadoop-tools-dist
          +1 findbugs 0m 35s the patch passed
          +1 javadoc 4m 45s the patch passed
          -1 unit 20m 25s root in the patch failed.
          -1 asflicense 0m 25s The patch generated 2 ASF License warnings.
          107m 1s



          Reason Tests
          Timed out junit tests org.apache.hadoop.http.TestHttpServerLifecycle



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12821554/HADOOP-12756.007.patch
          JIRA Issue HADOOP-12756
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle
          uname Linux 6d6fe78a0013 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / a5fb298
          Default Java 1.8.0_101
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/10151/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/10151/artifact/patchprocess/patch-unit-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/10151/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HADOOP-Build/10151/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-project hadoop-tools/hadoop-aliyun . hadoop-tools hadoop-tools/hadoop-tools-dist U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10151/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          shimingfei added a comment -

          Kai Zheng
          It seems that the last two failures are not caused by this patch.

          Kai Zheng added a comment -

          Hi Mingfei,

          Thanks for the update. The latest patch looks good to me. Could you run the added tests and post the results here? Thanks.

          +1 pending on the test results.
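          For context, the OSS contract tests run against a live bucket, so credentials have to be supplied before `mvn test` will exercise them. A minimal sketch of that setup, following the s3a convention; the file path and property names (`test.fs.oss.name`, `fs.oss.accessKeyId`, `fs.oss.accessKeySecret`) are assumptions for illustration, not taken from the patch:

```shell
# Hypothetical setup for running the hadoop-aliyun tests against a live bucket.
# Property names follow the s3a convention and may differ in the actual patch.
mkdir -p hadoop-tools/hadoop-aliyun/src/test/resources
cat > hadoop-tools/hadoop-aliyun/src/test/resources/auth-keys.xml <<'EOF'
<configuration>
  <property>
    <name>test.fs.oss.name</name>
    <value>oss://your-test-bucket/</value>
  </property>
  <property>
    <name>fs.oss.accessKeyId</name>
    <value>YOUR_ACCESS_KEY_ID</value>
  </property>
  <property>
    <name>fs.oss.accessKeySecret</name>
    <value>YOUR_ACCESS_KEY_SECRET</value>
  </property>
</configuration>
EOF
# Then run only this module's tests:
# mvn clean test -pl hadoop-tools/hadoop-aliyun
```

          Keeping auth-keys.xml out of version control ensures real credentials never land in the repository.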

          shimingfei added a comment -

          Kai Zheng Thanks!
          the tests pass on my machine:
          [INFO] Scanning for projects...
          [INFO]
          [INFO] ------------------------------------------------------------------------
          [INFO] Building Apache Hadoop Aliyun OSS support 3.0.0-alpha2-SNAPSHOT
          [INFO] ------------------------------------------------------------------------
          [INFO]
          [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-aliyun ---
          [INFO] Deleting /home/shimingfei/Pocs/CustomerCases/ali_oss/hadoop-tools/hadoop-aliyun/target
          [INFO]
          [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-aliyun ---
          [INFO] Executing tasks

          main:
          [mkdir] Created dir: /home/shimingfei/Pocs/CustomerCases/ali_oss/hadoop-tools/hadoop-aliyun/target/test-dir
          [INFO] Executed tasks
          [INFO]
          [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hadoop-aliyun ---
          [INFO]
          [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hadoop-aliyun ---
          [INFO] Using 'UTF-8' encoding to copy filtered resources.
          [INFO] skip non existing resourceDirectory /home/shimingfei/Pocs/CustomerCases/ali_oss/hadoop-tools/hadoop-aliyun/src/main/resources
          [INFO] Copying 2 resources
          [INFO]
          [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-aliyun ---
          [INFO] Compiling 6 source files to /home/shimingfei/Pocs/CustomerCases/ali_oss/hadoop-tools/hadoop-aliyun/target/classes
          [INFO]
          [INFO] --- maven-dependency-plugin:2.2:list (deplist) @ hadoop-aliyun ---
          [INFO]
          [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ hadoop-aliyun ---
          [INFO] Using 'UTF-8' encoding to copy filtered resources.
          [INFO] Copying 5 resources
          [INFO] Copying 2 resources
          [INFO]
          [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ hadoop-aliyun ---
          [INFO] Compiling 11 source files to /home/shimingfei/Pocs/CustomerCases/ali_oss/hadoop-tools/hadoop-aliyun/target/test-classes
          [INFO]
          [INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-aliyun ---
          [INFO] Surefire report directory: /home/shimingfei/Pocs/CustomerCases/ali_oss/hadoop-tools/hadoop-aliyun/target/surefire-reports

          -------------------------------------------------------
          T E S T S
          -------------------------------------------------------

          Running org.apache.hadoop.fs.aliyun.oss.TestOSSOutputStream
          Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.906 sec - in org.apache.hadoop.fs.aliyun.oss.TestOSSOutputStream
          Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractMkdir
          Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.393 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractMkdir
          Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractRename
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.947 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractRename
          Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractSeek
          Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.084 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractSeek
          Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractOpen
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.622 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractOpen
          Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractCreate
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 2.776 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractCreate
          Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractDelete
          Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.193 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractDelete
          Running org.apache.hadoop.fs.aliyun.oss.TestOSSFileSystemContract
          Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.588 sec - in org.apache.hadoop.fs.aliyun.oss.TestOSSFileSystemContract
          Running org.apache.hadoop.fs.aliyun.oss.TestOSSInputStream
          Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.163 sec - in org.apache.hadoop.fs.aliyun.oss.TestOSSInputStream

          Results :

          Tests run: 99, Failures: 0, Errors: 0, Skipped: 3

          [INFO] ------------------------------------------------------------------------
          [INFO] BUILD SUCCESS
          [INFO] ------------------------------------------------------------------------
          [INFO] Total time: 59.688 s
          [INFO] Finished at: 2016-08-04T17:59:13+08:00
          [INFO] Final Memory: 34M/380M
          [INFO] ------------------------------------------------------------------------

          uncleGen Genmao Yu added a comment -

          great work!

          drankye Kai Zheng added a comment -

          The results look great! Will commit it shortly to the branch, with the minor check styles fixed.

          drankye Kai Zheng added a comment -

          Uploaded the committed 009 version with the minor check styles fixed.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 22s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 14 new or modified test files.
          0 mvndep 1m 53s Maven dependency ordering for branch
          +1 mvninstall 7m 34s trunk passed
          +1 compile 8m 33s trunk passed
          +1 checkstyle 1m 28s trunk passed
          +1 mvnsite 11m 2s trunk passed
          +1 mvneclipse 2m 43s trunk passed
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools .
          +1 findbugs 0m 0s trunk passed
          +1 javadoc 5m 11s trunk passed
          0 mvndep 0m 21s Maven dependency ordering for patch
          +1 mvninstall 8m 53s the patch passed
          +1 compile 7m 1s the patch passed
          +1 javac 7m 1s the patch passed
          -0 checkstyle 1m 27s root: The patch generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0)
          +1 mvnsite 9m 28s the patch passed
          +1 mvneclipse 1m 7s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 8s The patch has no ill-formed XML file.
          0 findbugs 0m 0s Skipped patched modules with no Java source: hadoop-project . hadoop-tools hadoop-tools/hadoop-tools-dist
          +1 findbugs 0m 35s the patch passed
          +1 javadoc 5m 8s the patch passed
          -1 unit 91m 7s root in the patch failed.
          -1 asflicense 0m 34s The patch generated 2 ASF License warnings.
          186m 51s



          Reason Tests
          Failed junit tests hadoop.tracing.TestTracing
            hadoop.hdfs.TestCrcCorruption
            hadoop.yarn.logaggregation.TestAggregatedLogFormat



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12822032/HADOOP-12756.007.patch
          JIRA Issue HADOOP-12756
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle
          uname Linux 7a53da705206 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 08e3338
          Default Java 1.8.0_101
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/10175/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/10175/artifact/patchprocess/patch-unit-root.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/10175/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HADOOP-Build/10175/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-project hadoop-tools/hadoop-aliyun . hadoop-tools hadoop-tools/hadoop-tools-dist U: .
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10175/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          drankye Kai Zheng added a comment -

          Committed to the HADOOP-12756 branch. Thanks shimingfei and Lin for this large contribution! Thanks to Steve Loughran, Chris Nauroth, Yi Liu, and others for the reviews and guidance.

          shimingfei shimingfei added a comment -

          Kai Zheng
          Thanks for your reviewing and guidance.

          stevel@apache.org Steve Loughran added a comment -

          OK, so we have this in a feature branch, and are now doing the iterations on that until we are happy with everything?

          Now it's in (and when I come back from vacation) I need to sit down and do a review of this, including test runs. I'll need some help getting set up for testing here; I do need this, as when I make changes to the FS contract tests I need to test all the object stores.

          drankye Kai Zheng added a comment -

          Thanks Steve for the thoughts and for continuously taking care of this!

          are now doing the iterations on that until we are happy with everything?

          It looks that way, as I did see some JIRAs and activity going on in the branch.

          I'll need some help getting set up for testing here — I do need this as when I make changes to the FS contract tests I need to test all the object stores.

          This sounds great, and somebody surely should and will help.

          shimingfei, would you update the overall status here and address Steve's questions? Thanks!

          shimingfei shimingfei added a comment - - edited

          Kai Zheng Steve Loughran Thanks for your reviews and comments.
          We are iterating on the current code base, and will put forward a proposal to merge to trunk.
          The current code uses the FS contract tests, and the tests can be enabled/disabled by configuration.
          We are glad to help set up the tests for Aliyun OSS when the FS contract tests are changed.
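
          Since the contract tests are gated on configuration, enabling them presumably follows the same pattern as s3a: the run is skipped unless a test bucket is configured in a local, uncommitted test resource. A rough sketch of such a configuration (the property names here are illustrative assumptions modeled on the s3a convention, not confirmed names from the patch):

```xml
<!-- Illustrative auth-keys.xml-style test configuration. Property names
     follow the s3a convention and are assumptions, not confirmed names. -->
<configuration>
  <property>
    <name>test.fs.oss.name</name>
    <value>oss://your-test-bucket/</value>
  </property>
  <property>
    <name>fs.oss.accessKeyId</name>
    <value>YOUR_ACCESS_KEY_ID</value>
  </property>
  <property>
    <name>fs.oss.accessKeySecret</name>
    <value>YOUR_ACCESS_KEY_SECRET</value>
  </property>
</configuration>
```

          When the test bucket property is absent, the contract tests would be skipped rather than failed, so a plain `mvn test` still passes for contributors without OSS credentials.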

          drankye Kai Zheng added a comment -

          and will propose a proposal to merge to trunk.

          I'd suggest you hold off on this for a bit, as Steve Loughran said he planned to do some review of the work. Steve is a busy gentleman and the review may take some time to happen. He will likely have some good findings to process, and the merge proposal can go out once he is comfortable with it as well. Thanks.

          drankye Kai Zheng added a comment -

          Hi shimingfei, Genmao Yu,

          The scheme for this new file system is still oss://, but as the earlier discussion mentioned, oss is too generic and can mean open source software, object store service, etc., as Steve Loughran and Yi Liu pointed out. Could we change it to a more specific one? Maybe alioss? Hope it isn't too late. Thanks.

          Also note the doc needs to be updated.

          uncleGen Genmao Yu added a comment - - edited

          Kai Zheng +1 to your suggestion, but the truth is many developers are familiar with 'oss://' from Aliyun E-MapReduce, and Aliyun OSS itself uses 'oss://' in many places, like https://help.aliyun.com/document_detail/32185.html. So I think it is better to continue to use 'oss://'.
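
          Since the existing oss:// scheme is being kept, binding it in a deployment would look roughly like the core-site.xml fragment below. The implementation class name matches the AliyunOSSFileSystem class added by the committed patch; the endpoint property name is an illustrative assumption:

```xml
<!-- Illustrative core-site.xml fragment. The impl class is from the
     committed patch; fs.oss.endpoint is an assumed property name. -->
<configuration>
  <property>
    <name>fs.oss.impl</name>
    <value>org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem</value>
  </property>
  <property>
    <name>fs.oss.endpoint</name>
    <value>oss-cn-hangzhou.aliyuncs.com</value>
  </property>
</configuration>
```

          With something like this in place, paths such as oss://bucket/path resolve through the new file system, so `hadoop fs -ls oss://bucket/` and Spark/Hadoop jobs work without code changes.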

          stevel@apache.org Steve Loughran added a comment -

          if it's in use elsewhere, we may as well stick with it.

          drankye Kai Zheng added a comment -

          Yeah, I quite agree. Thanks Genmao Yu for the clarification and Steve Loughran for the confirmation.

          uncleGen Genmao Yu added a comment -

          Great, let us continue in HADOOP-13584.

          uncleGen Genmao Yu added a comment -

          Kai Zheng I have updated the design document; please have a look. Thanks.

          drankye Kai Zheng added a comment -

          Thanks Genmao for updating the doc. It looks pretty nice now.

          drankye Kai Zheng added a comment -

          Hi Steve Loughran, Chris Nauroth, Yi Liu, Allen Wittenauer, Uma Maheswara Rao G, Lei (Eddy) Xu and all,

          Thank you for your suggestions, guidance, and driving! Since the branch was created, things have run pretty well. The code base has been improved and cleaned, more tests have been added, the user documentation is provided, and the design doc is updated, aligning with the conventions for aws/s3a, azure, etc. Importantly, the major functionality has already been running in production environments for quite some time, and many users are expecting to see this as part of a formal Hadoop offering. Recently I looked around this reviewing the work and now feel it's time to merge it into trunk. The whole work generates a patch in HADOOP-13584 of 138 kB, which isn't very large. It adds a new standalone module (hadoop-aliyun) in hadoop-tools and doesn't affect existing functionality. There shouldn't be conflicts merging into trunk, and the risk is low. Note that after this is in, some optimization work will be easier to do together with on-going efforts, like HADOOP-13345, making Hadoop more and more friendly towards cloud platforms. I'm happy to contribute in this direction as well with my colleagues.

          Hope this sounds good to everybody; further suggestions are very welcome. If there is no further input in the following days, I'd like to go ahead and help do the merge. Thank you.
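
          Since hadoop-aliyun is a standalone module under hadoop-tools, downstream users building against Hadoop could pull it in with a dependency along these lines (the coordinates follow the usual org.apache.hadoop convention; the version shown is illustrative, taken from the target release line, not a pinned artifact):

```xml
<!-- Illustrative Maven dependency; the version is an example. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-aliyun</artifactId>
  <version>3.0.0-alpha2</version>
</dependency>
```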

          drankye Kai Zheng added a comment -

          As the branch is relatively small, and in order to keep a clean commit record, I merged the branch by committing it as a single patch with the contributors clearly stated, as follows. Currently it's only merged to the trunk branch. We may also consider other branches later.

          commit 5707f88d8550346f167e45c2f8c4161eb3957e3a
          Author: Kai Zheng <kai.zheng@intel.com>
          Date:   Mon Sep 26 20:42:22 2016 +0800
          
              HADOOP-13584. hdoop-aliyun: merge HADOOP-12756 branch back.
          
              HADOOP-12756 branch: Incorporate Aliyun OSS file system implementation. Contributors:
              Mingfei Shi (mingfei.shi@intel.com)
              Genmao Yu (genmao.ygm@alibaba-inc.com)
          
          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10485 (See https://builds.apache.org/job/Hadoop-trunk-Commit/10485/)
          HADOOP-13584. hdoop-aliyun: merge HADOOP-12756 branch back. (kai.zheng: rev 5707f88d8550346f167e45c2f8c4161eb3957e3a)

          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSTestUtils.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractDelete.java
          • (add) hadoop-tools/hadoop-aliyun/pom.xml
          • (add) hadoop-tools/hadoop-aliyun/src/test/resources/core-site.xml
          • (edit) hadoop-tools/pom.xml
          • (add) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractCreate.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSInputStream.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractMkdir.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractRootDir.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/resources/contract/aliyun-oss.xml
          • (add) hadoop-tools/hadoop-aliyun/src/test/resources/log4j.properties
          • (add) hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
          • (add) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemStore.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractRename.java
          • (add) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSInputStream.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSOutputStream.java
          • (add) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/package-info.java
          • (add) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/AliyunOSSContract.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractDistCp.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractOpen.java
          • (edit) hadoop-tools/hadoop-tools-dist/pom.xml
          • (add) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractSeek.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java
          • (add) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
          • (edit) hadoop-project/pom.xml
          • (edit) .gitignore
          • (add) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSOutputStream.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunCredentials.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractGetFileStatus.java
          • (add) hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
          stevel@apache.org Steve Loughran added a comment -

          great to see this work in.

          Based on the experiences with the other object stores, it invariably takes time for problems to surface; as well as code review, people need to play with it to see what breaks.

          In particular: the hadoop fs experience. The S3a phase II JIRA has shown up some issues there (home dir, trash, ...) which may now need to be left unfixed for compatibility. I'd recommend someone spend time working with the FS and doing some scale tests, including big distcp operations. It's that leap from local integration tests to large runs which shows up the surprises.

          stevel@apache.org Steve Loughran added a comment -

          see also HADOOP-13655

          uncleGen Genmao Yu added a comment -

          I will continue to participate in this direction and do further improvement and optimization.

          drankye Kai Zheng added a comment -

          Thanks Steve Loughran for the nice feedback and concrete suggestions. Yes, we should do that, and having the work in is a good beginning.

          arpitagarwal Arpit Agarwal added a comment -

          Hi Kai Zheng, was this branch merged without a vote thread?

          drankye Kai Zheng added a comment - - edited

          Hi Arpit Agarwal,

          There wasn't an explicit vote thread called for this on the mailing list. I tracked the important discussions in this master issue, summarized the branch work in the comment above, and called for the merge. I hoped it could serve the same purpose and would work for you as well. The merge was recorded here; how would you like to proceed?

          Thank you for the discussion.

          arpitagarwal Arpit Agarwal added a comment -

          I wish it could serve the same purpose and would work for you as well.

          Kai Zheng, no, it does not serve the same purpose, because it is not as visible to the community. Even discounting the lack of an email thread, I don't see the requisite 3 binding +1s between your two comments. This sets a bad precedent.

          arpitagarwal Arpit Agarwal added a comment -

          To clarify, I don't object to the content of the change (I haven't looked into it). This change is likely safe because it doesn't affect existing code. But letting committers override the branch merge procedure selectively is opening a can of worms.

          drankye Kai Zheng added a comment -

          it does not serve the same purpose because it is not as visible to the community.

          Yeah, I agree; a separate discussion thread for extra attention would have been better, even though this effort was watched by many people.

          I don't see the requisite 3 binding +1s between your two comments.

          Got it. We need explicit votes, not just the absence of objections, before taking action.

          This sets a bad precedent.

          That was never my intent. Could we recover from this? Do you think my starting a discussion thread would help? If it turns out something must be fixed before the work goes in, I can revert this and then do the fix.

          andrew.wang Andrew Wang added a comment -

          Agree with Arpit that this shouldn't have been merged without a merge vote.

          Could we treat this as a learning experience? Looking at the JIRA, at least two committers (Kai and Steve) did look at it, and what happened seems like an honest mistake not to be repeated.

          Sending a ping to common-dev would be good as a heads-up, but I'm hoping we can retroactively +1 to avoid the git gymnastics to revert and recommit the code.

          andrew.wang Andrew Wang added a comment -

          Also somewhat unrelated, could one of the contributors update the fix version of this umbrella JIRA to reflect the merge, and also add some release notes? Thanks.

          anu Anu Engineer added a comment - - edited

          Kai Zheng I think what Arpit is saying is that he does not have an issue with the code. The proper process to bring this code in would have been to call for a vote. Again, this has nothing to do with the Aliyun code or technical issues. A vote gives the community a chance to review, understand, and comment on the code base before it is committed. That, I think, is the best way to build a community of contributors around this feature.

          If you agree that we should follow the right process, I think we should revert this change and call for a merge vote and merge based on the results of such a voting thread.

          The danger of the precedent we are setting with this branch is that someone else might decide to bring in another feature via this loophole, saying that this was done for the Aliyun code merge. That is what I think we want to avoid; in many senses, a rule remains a rule only if it is followed consistently.

          I am really sympathetic to what was done, and I appreciate the enthusiasm and the let's-get-it-done spirit, but I think this list of changes is large enough for us to follow the right process. As far as I can see, a few days spent on voting will only strengthen the sense of community around this code base.

          Andrew Wang Since this is a single commit, reverting and re-merging will actually be a better experience, because it will let you follow the policy you suggested:

          "'git merge --no-ff' is also the preferred way of integrating a feature branch to other branches, e.g. branch-2."
          From https://lists.apache.org/thread.html/43cd65c6b6c3c0e8ac2b3c76afd9eff1f78b177fabe9c4a96d9b3d0b@1440189889@%3Ccommon-dev.hadoop.apache.org%3E
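
          The effect of the quoted policy can be illustrated with a small throwaway repository (the repository, committer identity, and branch names below are made up for the demonstration):

          ```shell
          # Sketch: show how `git merge --no-ff` records a feature-branch
          # integration as an explicit merge commit instead of a fast-forward.
          set -e
          repo=$(mktemp -d)
          cd "$repo"
          git init -q
          git -c user.name=demo -c user.email=demo@example.com \
              commit -q --allow-empty -m "trunk: base commit"
          git checkout -q -b HADOOP-12756
          git -c user.name=demo -c user.email=demo@example.com \
              commit -q --allow-empty -m "feature branch work"
          git checkout -q -   # back to the original (trunk) branch
          # --no-ff forces a merge commit even though a fast-forward is
          # possible here, so the branch's own commits stay visible in history.
          git -c user.name=demo -c user.email=demo@example.com \
              merge --no-ff -m "Merge HADOOP-12756 branch" HADOOP-12756
          git log --oneline --graph
          ```

          The final log shows a two-parent merge commit at the tip, which is what preserves the branch history that a squash-and-commit loses.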

          andrew.wang Andrew Wang added a comment -

          Generally speaking, I'd prefer we default to assuming that our fellow committers are good actors and behaving with the best interests of the project in mind. Rules aren't meant to be blindly enforced, and given our positive working relationships with committers like Kai and Steve, we're allowed to let these little mistakes slide. Ultimately, we need to trust the other people we work with in the community since no one can personally review every change that goes in.

          Anu, thanks for bringing up the single commit though. It looks like the branch was squashed and committed as a single commit, so we lost all the history. Fixing this seems worthwhile, in which case we might as well go through the merge vote for completeness.

          Kai Zheng do you agree? In which case, do you mind reverting and firing up a [VOTE] thread on common-dev@? Thanks.

          anu Anu Engineer added a comment -

          our fellow committers are good actors and behaving with the best interests of the project in mind

          Absolutely; that is why we try to have each other's backs, just like you would comment on an error in a code review. That is helping each other: a mistake pointed out by a fellow community member is indeed an appreciation of what you have contributed.

          I think Arpit's original comment was pointing out a mistake, and I think we all owe him a bit of gratitude.

          Ultimately, we need to trust the other people we work with in the community since no one can personally review every change that goes in.

          Again, I completely agree; that is why we have a community, and hopefully someone else is there to catch the ball when you miss. In this particular case, it is an expression of trust that someone suggested an error might have occurred instead of issuing a -1 veto. The very fact that this is being discussed in the corresponding JIRA without a -1 is an expression of respect and trust. I would say these threads have been really appreciative of the work, and a very gentle reminder of why we do things the way we do.

          drankye Kai Zheng added a comment -

          Thank you Arpit Agarwal, Andrew Wang and Anu Engineer for the feedback, thoughts and suggestions. It sounds like a great community and I love it.

          In which case, do you mind reverting and firing up a VOTE thread on common-dev@?

          Sure let me follow this, reverting the commit and calling for the merge vote.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10502 (See https://builds.apache.org/job/Hadoop-trunk-Commit/10502/)
          Revert "HADOOP-13584. hdoop-aliyun: merge HADOOP-12756 branch back" This (kai.zheng: rev d1443988f809fe6656f60dfed4ee4e0f4844ee5c)

          • (delete) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunCredentials.java
          • (delete) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractDistCp.java
          • (delete) hadoop-tools/hadoop-aliyun/src/test/resources/log4j.properties
          • (delete) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractCreate.java
          • (delete) hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
          • (delete) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
          • (delete) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSTestUtils.java
          • (delete) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
          • (edit) hadoop-tools/pom.xml
          • (delete) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSOutputStream.java
          • (delete) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSInputStream.java
          • (delete) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractSeek.java
          • (delete) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
          • (edit) hadoop-project/pom.xml
          • (delete) hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
          • (delete) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
          • (delete) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractDelete.java
          • (delete) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/AliyunOSSContract.java
          • (delete) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractGetFileStatus.java
          • (delete) hadoop-tools/hadoop-aliyun/src/test/resources/contract/aliyun-oss.xml
          • (delete) hadoop-tools/hadoop-aliyun/pom.xml
          • (delete) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractOpen.java
          • (delete) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java
          • (delete) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractRootDir.java
          • (edit) .gitignore
          • (delete) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractMkdir.java
          • (delete) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/package-info.java
          • (delete) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemStore.java
          • (delete) hadoop-tools/hadoop-aliyun/src/test/resources/core-site.xml
          • (delete) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSOutputStream.java
          • (edit) hadoop-tools/hadoop-tools-dist/pom.xml
          • (delete) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractRename.java
          • (delete) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
          • (delete) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSInputStream.java
          drankye Kai Zheng added a comment -

          I have reverted the commit and posted a VOTE thread in the common dev mailing list. Kindly review this work and give your vote there, thanks!

          eddyxu Lei (Eddy) Xu added a comment -

          +1 to merge to trunk. The latest patch LGTM.

          Thanks for the efforts, shimingfei, Kai Zheng, Genmao Yu.

          uncleGen Genmao Yu added a comment - - edited

          FS Shell test results (ops with an empty comment worked with no caveats):

          op            | comment
          --------------+------------------------------------------------------------------
          cat           |
          chgrp         | meaningless and not needed
          chmod         | meaningless and not needed
          chown         | meaningless and not needed
          copyFromLocal |
          copyToLocal   |
          cp            | oss -> oss, oss -> local, oss -> hdfs, local -> oss, hdfs -> oss
          du            |
          dus           |
          expunge       | only supports hdfs
          get           |
          getmerge      |
          ls            |
          lsr           |
          mkdir         |
          moveFromLocal |
          mv            | only supports oss -> oss
          put           |
          rm            |
          rmr           |
          setrep        | locked
          stat          |
          tail          |
          test          |
          text          |
          touchz        |
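
          For context, running FS Shell commands against OSS requires the filesystem to be configured first. A minimal core-site.xml sketch is below; the fs.oss.* key names follow the hadoop-aliyun module, but the endpoint and credential values are placeholders, so verify both against the module's own documentation (hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md) before use:

          ```xml
          <configuration>
            <!-- Placeholder region endpoint; use your bucket's actual endpoint. -->
            <property>
              <name>fs.oss.endpoint</name>
              <value>oss-cn-hangzhou.aliyuncs.com</value>
            </property>
            <!-- Placeholder credentials; never commit real keys. -->
            <property>
              <name>fs.oss.accessKeyId</name>
              <value>YOUR_ACCESS_KEY_ID</value>
            </property>
            <property>
              <name>fs.oss.accessKeySecret</name>
              <value>YOUR_ACCESS_KEY_SECRET</value>
            </property>
          </configuration>
          ```

          With configuration in place, the ops in the table run against oss:// URIs, e.g. `hadoop fs -ls oss://mybucket/path` (bucket name hypothetical).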
          drankye Kai Zheng added a comment -

          Thanks Genmao Yu for doing the complete hadoop fs tests per Steve Loughran's comments and providing the results!

          drankye Kai Zheng added a comment -

          I have merged the branch to trunk via git merge --no-ff according to the vote result and above discussions. Thanks all for your nice support!

          drankye Kai Zheng added a comment -

          Added a release note.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10583 (See https://builds.apache.org/job/Hadoop-trunk-Commit/10583/)
          HADOOP-12756. Incorporate Aliyun OSS file system implementation. (mingfei.shi: rev a5d5342228050a778b20e95adf7885bdba39985d)

          • (add) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractDelete.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractRename.java
          • (add) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSOutputStream.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/OSSContract.java
          • (edit) hadoop-tools/pom.xml
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestOSSInputStream.java
          • (add) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
          • (add) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/package-info.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/resources/contract/oss.xml
          • (add) hadoop-tools/hadoop-aliyun/pom.xml
          • (edit) hadoop-project/pom.xml
          • (add) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractMkdir.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestOSSOutputStream.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestOSSFileSystemContract.java
          • (edit) hadoop-tools/hadoop-tools-dist/pom.xml
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractSeek.java
          • (add) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSInputStream.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/resources/core-site.xml
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractOpen.java
          • (edit) .gitignore
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractCreate.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/OSSTestUtils.java
          • (add) hadoop-tools/hadoop-aliyun/src/test/resources/log4j.properties
          • (add) hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
          liuml07 Mingliang Liu added a comment -

          Ray Chiang, I found the original patches were maintained by shimingfei. Changing the "Assignee" field of this JIRA indicates the ownership of the contribution. Please make sure this is what you'd like to do. Ping Kai Zheng.

          rchiang Ray Chiang added a comment -

          Thanks Mingliang Liu. I'd certainly like to at least get an idea of the feasibility of updating the aliyun-sdk-oss library to 2.8.0 before we have the final release of Hadoop 3.

          I did file HADOOP-14649 and tagged Kai Zheng there as well. Kai, if you can let me know the effort involved, I'd appreciate it.


            People

            • Assignee: kakagou mingfei.shi
            • Reporter: shimingfei
            • Votes: 2
            • Watchers: 54