HBase
  HBASE-9393

HBase does not close a dead socket, resulting in many CLOSE_WAIT connections

    Details

    • Type: Bug
    • Status: Patch Available
    • Priority: Critical
    • Resolution: Unresolved
    • Affects Version/s: 0.94.2, 0.98.0
    • Fix Version/s: 2.0.0
    • Component/s: None
    • Labels:
      None
    • Environment:

      Centos 6.4 - 7 regionservers/datanodes, 8 TB per node, 7279 regions

      Description

      HBase does not close dead connections to the datanode.
      This results in over 60K sockets in CLOSE_WAIT, and at some point HBase cannot connect to the datanode because there are too many mapped sockets from one host to another on the same port.

      The example below shows a low CLOSE_WAIT count because we had to restart HBase to solve the problem; later it will increase to 60-100K sockets in CLOSE_WAIT.

      [root@hd2-region3 ~]# netstat -nap |grep CLOSE_WAIT |grep 21592 |wc -l
      13156
      [root@hd2-region3 ~]# ps -ef |grep 21592
      root 17255 17219 0 12:26 pts/0 00:00:00 grep 21592
      hbase 21592 1 17 Aug29 ? 03:29:06 /usr/java/jdk1.6.0_26/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx8000m -ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -Dhbase.log.dir=/var/log/hbase -Dhbase.log.file=hbase-hbase-regionserver-hd2-region3.swnet.corp.log ...

      1. HBASE-9393.patch
        8 kB
        Ashish Singhi
      2. HBASE-9393.v1.patch
        4 kB
        Ashish Singhi
      3. HBASE-9393.v2.patch
        5 kB
        Ashish Singhi
      4. HBASE-9393.v3.patch
        6 kB
        Ashish Singhi
      5. HBASE-9393.v4.patch
        6 kB
        Ashish Singhi
      6. HBASE-9393.v5.patch
        6 kB
        stack
      7. HBASE-9393.v5.patch
        6 kB
        stack
      8. HBASE-9393.v5.patch
        6 kB
        Ashish Singhi
      9. HBASE-9393.v6.patch
        7 kB
        Ashish Singhi
      10. HBASE-9393.v6.patch
        7 kB
        Ashish Singhi
      11. HBASE-9393.v6.patch
        7 kB
        Ashish Singhi

        Issue Links

          Activity

          Enis Soztutar added a comment -

          We have also observed a similar situation which rendered some of the regionservers unusable because master was not able to open more sockets to the regionserver.

          $ for i in `cat allnodes`; do echo $i; ssh $i "netstat -to | grep CLOSE_WAIT" ; done  
          horn04
          tcp       22      0 horn04.gq1.ygridcore:49853 horn04.gq1.ygridcore:50010 CLOSE_WAIT  off (0.00/0/0)
          tcp        1      0 horn04.gq1.ygridcore:49812 horn04.gq1.ygridcore:50010 CLOSE_WAIT  off (0.00/0/0)
          horn05
          tcp       76      0 horn05.gq1.ygridcore:40253 horn05.gq1.ygridcore:50010 CLOSE_WAIT  off (0.00/0/0)
          tcp        1      0 horn05.gq1.ygridcore:39667 horn05.gq1.ygridcore:50010 CLOSE_WAIT  off (0.00/0/0)
          tcp      166      0 horn05.gq1.ygridcore:39919 horn05.gq1.ygridcore:50010 CLOSE_WAIT  off (0.00/0/0)
          tcp       97      0 horn05.gq1.ygridcore:40631 horn05.gq1.ygridcore:50010 CLOSE_WAIT  off (0.00/0/0)
          tcp        5      0 horn05.gq1.ygridcore:40227 horn05.gq1.ygridcore:50010 CLOSE_WAIT  off (0.00/0/0)
          tcp       32      0 horn05.gq1.ygridcore:39707 horn05.gq1.ygridcore:50010 CLOSE_WAIT  off (0.00/0/0)
          

          I was not able to nail down the root cause at that time though.

          stack added a comment -

          Lots of random reads?

          Avi Zrachya added a comment -

          In our case it's definitely not because of many random reads.

          On a cluster that has no traffic:
          1. Restart the regionserver on one node; the CLOSE_WAIT sockets go away.
          2. Wait a few minutes and see that there are still 0 CLOSE_WAIT sockets.
          3. Re-balance the cluster so the new regionserver gets some regions.
          4. As soon as the regionserver gets some regions, many CLOSE_WAIT sockets start to show up.

          My assumption is that it happens when the regionserver reads/initializes the assigned regions.

          Reading data from the cluster does not seem to affect the CLOSE_WAIT sockets, but when regions in transition kick in for whatever reason, the CLOSE_WAIT count starts to climb.

          stack added a comment -

          You can attribute the CLOSE_WAIT to the regionserver process (it is not the datanode process? If datanode process, what version of hadoop).

          Can someone else try and repro what Avi Zrachya is reporting above? Enis Soztutar What if you do the sequence Avi Zrachya outlines? Thanks.

          Avi Zrachya added a comment -

          Yes, it is most definitely the regionserver, as you can see below.
          pid 21592 is the pid holding the CLOSE_WAIT sockets, and as you can see, 21592 is the regionserver.

          [root@hd2-region3 ~]# netstat -nap |grep CLOSE_WAIT |grep 21592 |wc -l
          13156
          [root@hd2-region3 ~]# ps -ef |grep 21592
          root 17255 17219 0 12:26 pts/0 00:00:00 grep 21592
          hbase 21592 1 17 Aug29 ? 03:29:06 /usr/java/jdk1.6.0_26/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx8000m -ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -Dhbase.log.dir=/var /log/hbase -Dhbase.log.file=hbase-hbase-regionserver-hd2-region3.swnet.corp.log ..
          
          Enis Soztutar added a comment -

          I was not able to repro this same problem consistently when I had initially bumped into it. The number of hanging tcp connections varied across nodes. Let me try Avi's approach once more.

          Avi Zrachya added a comment -

          I have something that might shed light on this issue.
          It happens with Cloudera's CDH 4.2.1 and CDH 4.4.0.
          It does not happen with Hortonworks.
          So I assume it is connected to the fact that CDH 4 is based on HDFS 2.

          Please check this direction; this problem is consistent on CDH 4.

          Colin Patrick McCabe added a comment -

          I looked into this issue. I found a few things:

          The HDFS socket cache is too small by default and times out too quickly. Its default size is 16, but HBase seems to be opening many more connections to the DN than that. In this situation, sockets must inevitably be opened and then discarded, leading to sockets in CLOSE_WAIT.

          When you use positional read (aka pread), we grab a socket from the cache, read from it, and then immediately put it back. When you seek and then call read, we don't put the socket back at the end. The assumption behind the normal read method is that you are probably going to call read again, so it holds on to the socket until something else comes up (such as closing the stream). In many scenarios, this can lead to seek+read generating more sockets in CLOSE_WAIT than pread.

          I don't think we want to alter this HDFS behavior, since it's helpful in the case that you're reading through the entire file from start to finish-- which many HDFS clients do. It allows us to make certain optimizations such as reading a few kilobytes at a time, even if the user only asks for a few bytes at a time. These optimizations are unavailable with pread because it creates a new BlockReader each time.

          So as far as recommendations for HBase go:

          • use short-circuit reads whenever possible, since in many cases you can avoid needing a socket at all and just reuse the same file descriptor
          • set the socket cache to a bigger size and adjust the timeouts to be longer (I may explore changing the defaults in HDFS...)
          • if you are going to keep files open for a while and random read, use pread, never seek+read.
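          As a sketch only, those recommendations could be expressed as an hdfs-site.xml fragment along the lines below. The first two properties enable short-circuit reads (the socket path is an example location, not a required value), and the cache size/expiry values are illustrative, not tuned recommendations; `dfs.client.socketcache.capacity` and `dfs.client.socketcache.expiryMsec` are the Hadoop 2.x property names.

          ```xml
          <!-- Illustrative values only; tune for your cluster. -->
          <property>
            <name>dfs.client.read.shortcircuit</name>
            <value>true</value>
          </property>
          <property>
            <name>dfs.domain.socket.path</name>
            <value>/var/lib/hadoop-hdfs/dn_socket</value>
          </property>
          <property>
            <name>dfs.client.socketcache.capacity</name>
            <value>256</value>
          </property>
          <property>
            <name>dfs.client.socketcache.expiryMsec</name>
            <value>60000</value>
          </property>
          ```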
          Colin Patrick McCabe added a comment -

          I guess I should also explain why this doesn't happen in branch-1 of Hadoop. The reason is that Hadoop 1 had no socket cache and no grace period before sockets were closed. The client simply opened a new socket each time, performed the op, and then closed it. This would result in (basically) no sockets in CLOSE_WAIT. Remember, CLOSE_WAIT happens on the side that has received the peer's close but has not yet executed close itself.
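          As a toy, self-contained illustration of that last point (plain java.net sockets, no HBase/HDFS code involved; the class name is invented): when the "server" side closes first, the client's read returns -1, and the client socket then sits in CLOSE_WAIT until the client itself calls close().

          ```java
          import java.io.IOException;
          import java.net.ServerSocket;
          import java.net.Socket;

          public class CloseWaitDemo {
              // Returns what the client observes after the peer closes first.
              static String demo() throws IOException {
                  ServerSocket server = new ServerSocket(0);                 // ephemeral port
                  Socket client = new Socket("127.0.0.1", server.getLocalPort());
                  Socket accepted = server.accept();

                  accepted.close();                                          // peer closes first -> sends FIN
                  server.close();

                  int b = client.getInputStream().read();                    // -1: peer sent FIN
                  // Right here the kernel holds the client socket in CLOSE_WAIT until the
                  // application calls close() -- the step a leaking client never takes.
                  String observed = "read=" + b + " locallyClosed=" + client.isClosed();
                  client.close();                                            // releases the CLOSE_WAIT socket
                  return observed;
              }

              public static void main(String[] args) throws IOException {
                  System.out.println(demo());
              }
          }
          ```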

          Keeping sockets open is an optimization, but one that may require you to raise your maximum number of file descriptors. If you are not happy with this tradeoff, you can set dfs.client.socketcache.capacity to 0 and dfs.datanode.socket.reuse.keepalive to 0 to get the old branch-1 behavior. It will be slower, though.
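          Expressed as configuration, restoring the branch-1 behavior with the two properties named above would be a fragment like this (a sketch; values straight from the comment):

          ```xml
          <!-- Disables the HDFS client socket cache and the datanode keepalive
               grace period, restoring branch-1-style open/op/close behavior
               (slower, but leaves essentially no sockets in CLOSE_WAIT). -->
          <property>
            <name>dfs.client.socketcache.capacity</name>
            <value>0</value>
          </property>
          <property>
            <name>dfs.datanode.socket.reuse.keepalive</name>
            <value>0</value>
          </property>
          ```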

          Liang Xie added a comment -

          Avi Zrachya, do you observe a similar stack trace like HDFS-5671 in your scenario?

          steven xu added a comment -

          Hi, Avi Zrachya, I also have the same issue (HBASE-11833) in HDP 2.1/Hadoop 2.4/HBase 0.98.0. Can you share more info about this issue? Thanks.

          steven xu added a comment -

          I considered [cmccabe]'s suggestions and tested the following case:

          • use short-circuit reads;
          • set the socket cache to a bigger size (dfs.client.socketcache.capacity = 1024) and adjust the timeouts to be longer.
            But it did not work; the number of CLOSE_WAIT sockets still grew quickly.
          steven xu added a comment -

          I also changed the conf and tested:

          • dfs.client.socketcache.capacity = 0
          • dfs.datanode.socket.reuse.keepalive = 0

            And the number of CLOSE_WAIT sockets still grew quickly.

          steven xu added a comment -

          [Colin Patrick McCabe], please review my comments. Could you give more suggestions? Thanks a lot.

          Colin Patrick McCabe added a comment -

          Best guess is that you didn't apply your configuration to HBase, which is the DFSClient in this scenario. Suggest posting to hdfs-user@apache.org

          juntaoduan added a comment -

          Is there any progress on this issue? We also have the same problem.
          hadoop version: Hadoop 2.0.0-cdh4.4.0
          hbase version: HBase 0.94.6-cdh4.4.0

          We have 120 nodes, the total data scale is 70 TB, and we sharded them over 4096 regions (the region number is fixed).
          When we export data out using MapReduce, there will be over 4000+ connections in CLOSE_WAIT per node.
          Those connections all belong to the regionserver process, and all are connected to the datanode's ${dfs.datanode.address} port.

          Colin Patrick McCabe added a comment -

          CDH 4.4 had some configuration defaults that weren't the best; they were improved in later versions. It is getting pretty old now, so I would suggest just upgrading. If that's not possible, then you could check out some of the recent HBaseCon talks about tuning HBase and HDFS performance.

          I think this JIRA should be closed since I don't see any bug here. If we get more information about something specific we could improve, we could reopen it.

          Andrew Purtell added a comment -

          Resolving as Not A Problem

          Ashish Singhi added a comment -

          We are able to reproduce this by executing the simple scenario below:
          0. Check the total number of CLOSE_WAIT connections already existing for a RS.
          1. Create a table.
          2. Put a row.
          3. Flush the table (ensure the store file is assigned to the same RS).
          Now we can notice there will be a new CLOSE_WAIT connection.

          From initial analysis we found that for every HFile on a RS we have a CLOSE_WAIT connection. When we disable the table, all its CLOSE_WAIT connections are closed, and on enabling it we see the CLOSE_WAIT connections again.
          The StoreFile$Reader has an FSDataInputStreamWrapper which is open and not closed. During disable of the table we saw that the close method is getting called. I still need to check the reason behind keeping these open.

          We have also checked that an HDFS client opening a connection to a file and not closing it will show up as a new CLOSE_WAIT connection.

          Ashish Singhi added a comment -

          Analysis so far:

          If the socket is idle for a configured amount of time, the datanode will close that socket, but the client still has it open, so it goes into a CLOSE_WAIT state and that socket is of no use now.
          Since the stream to the HFile is still open, when HBase does any operation on that HFile the HDFS client closes that socket and opens a new one.
          So the only advantage HBase gets from keeping this stream open is that it saves a Namenode open call for that HFile, but keeping so many CLOSE_WAIT connections also does not seem good.

          Colin Patrick McCabe added a comment -

          The client should be configured so that it closes sockets a short time after the server does. In other words, its timeout should be slightly longer than the server's. Suggest checking your timeout configuration (this was too long in older versions of Hadoop).

          Ashish Singhi added a comment -

          AFAIK HBase does not have any such timeout now. I have already started the implementation along that path. Will post the patch once I thoroughly test it.
          Thanks for the comment.

          Colin Patrick McCabe added a comment -

          The timeout that I'm talking about is inside DFSClient.java, not inside HBase. HDFS-4911 fixed a problem where the timeout was too long. Can you be a little bit clearer on what you'd like to implement, and what you see as the problem here?

          Ashish Singhi added a comment -

          The timeout that I'm talking about is inside DFSClient.java, not inside HBase. HDFS-4911 fixed a problem where the timeout was too long.

          I have experimented with all those configurations, but the thing to note here is that HBase is not closing the stream, so how will the socket be closed?

          Can you be a little bit clearer on what you'd like to implement, and what you see as the problem here?

          Below is a brief idea of what I would like to implement:
          HBase will have a periodic thread monitoring these streams. When a stream has been idle for more than a configurable time, the configurable limit on the maximum number of streams that can be kept open has been crossed, and the stream has 0 references to it (e.g. when HFile#pickReaderVersion is called I will increment the reference count and decrement it at the end, as after that the stream is no longer used in the same flow), then this thread will close that stream.
          The above implementation will be configurable and disabled by default, as we are expecting some impact on the read flow.
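          The idea described above could be sketched roughly as follows. This is purely a hypothetical illustration, not the attached patch; the class and method names (IdleStreamMonitor, retain/release/sweep) are invented for the sketch.

          ```java
          import java.io.Closeable;
          import java.io.IOException;
          import java.util.Map;
          import java.util.concurrent.ConcurrentHashMap;
          import java.util.concurrent.atomic.AtomicInteger;

          // Hypothetical sketch of the periodic idle-stream monitor described above.
          // Streams are tracked with a reference count and a last-used timestamp;
          // sweep() closes streams that are idle, unreferenced, and over quota.
          class IdleStreamMonitor {
              static final class Tracked {
                  final Closeable stream;
                  final AtomicInteger refs = new AtomicInteger();
                  volatile long lastUsedMs = System.currentTimeMillis();
                  Tracked(Closeable s) { stream = s; }
              }

              private final Map<String, Tracked> streams = new ConcurrentHashMap<>();
              private final long maxIdleMs;
              private final int maxOpen;

              IdleStreamMonitor(long maxIdleMs, int maxOpen) {
                  this.maxIdleMs = maxIdleMs;
                  this.maxOpen = maxOpen;
              }

              void register(String path, Closeable stream) { streams.put(path, new Tracked(stream)); }

              // Callers bracket each use, mirroring the HFile#pickReaderVersion idea.
              void retain(String path)  { Tracked t = streams.get(path); if (t != null) t.refs.incrementAndGet(); }
              void release(String path) {
                  Tracked t = streams.get(path);
                  if (t != null) { t.refs.decrementAndGet(); t.lastUsedMs = System.currentTimeMillis(); }
              }

              // Invoked by a periodic chore thread: close idle, unreferenced streams,
              // but only while we remain over the configured open-stream limit.
              int sweep() {
                  int closed = 0;
                  long now = System.currentTimeMillis();
                  for (Map.Entry<String, Tracked> e : streams.entrySet()) {
                      if (streams.size() <= maxOpen) break;
                      Tracked t = e.getValue();
                      if (t.refs.get() == 0 && now - t.lastUsedMs >= maxIdleMs) {
                          try { t.stream.close(); } catch (IOException ignored) {}
                          streams.remove(e.getKey());
                          closed++;
                      }
                  }
                  return closed;
              }

              int openCount() { return streams.size(); }
          }
          ```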

          Colin Patrick McCabe added a comment -

          Unfortunately, this is kind of a complex topic.

          In HDFS, sockets for input streams are managed by the Peer class. Peers can either be "owned" by DFSInputStream objects, or stored in the PeerCache. The PeerCache already has appropriate timeouts and won't keep open too many sockets. However, there is no limit to how long a DFSInputStream could hold on to a Peer.

          There are a few ways to minimize the number of open peers.
          1. If HBase only ever called positional read (pread), the DFSInputStream object would never own a Peer, so this issue would not arise.
          2. If HBase called DFSInputStream#unbuffer, any open peers would be closed, even though the stream would continue to be open.
          3. If HDFS had a timeout for how long it would hold onto a Peer, that could limit the number of open sockets.

          Configuring HBase to periodically close open streams is not necessary; it's strictly worse than option #2.

          I believe there is an option to do #1 even right now. Can't HBase be configured to use only pread and never read? #2 would require a code change to HBase; #3 would require a code change to HDFS.

          Are you running out of file descriptors? What's the user-visible problem here?

          Anoop Sam John added a comment -

          We open all HFiles in an RS and do a read of the FileInfo. I believe this is not a pread. I believe we can change it to do a pread and try. #2 is also ok after we read the FileInfo.

          The above implementation will be configurable and by default disabled as we are expecting some impact on read flow.

          Closing the HFile's input streams is not a good option because of its impact.

          Ashish Singhi added a comment -

          Thanks Colin Patrick McCabe for the detailed comment. It was helpful.

          I have modified the code according to approach #2, and DFSInputStream#unbuffer is closing the socket. This is what we were initially looking for while deciding our approach to solve this issue. We wanted to have control over the socket and close it rather than closing the complete stream. Due to lack of knowledge about DFSInputStream we missed this API. I am testing this with the PE tool for random reads to see the impact.

          Configuring HBase to periodically close open streams is not necessary; it's strictly worse than option #2.

          Agree; as mentioned above, due to lack of knowledge about DFSInputStream we thought of that approach.

          I believe there is an option do to #1 even right now. Can't HBase be configured just to use pread and never read?

          Looking at the code I find that we are specifically not using pread. There are comments like // Seek + read. Better for scanning. and we are mainly using it for small scans (HBASE-9488). So there may be strong reasons behind not using pread.

          Are you running out of file descriptors?

          Yes.

          What's the user-visible problem here?

          Not able to perform any FS operation.

          Anoop Sam John,

          Closing the HFile's inputStreams is not a good option because of its impact.

          Agree, but keeping too many CLOSE_WAIT connections is also not good, right? It was only because we assumed an impact that we thought of making it configurable. Anyway, we are now following approach #2 as the solution.

          Ashish Singhi added a comment -

          This is an issue we need to fix, so reopening it.

          Ashish Singhi added a comment -

          We did not observe any performance impact with the patch in our two PE runs of randomRead for 1 million rows. So I have attached the patch, which mainly has a two-line code change; the rest are code formatting changes. As we did not see any impact, I did not make it configurable. If the reviewers think otherwise, I am open to making it configurable, and a suggestion for the configuration name would be appreciated.
          Please review.

          Anoop Sam John added a comment -

          While reading the FileInfo, doing the unbuffer() call is fine (we can even do a pread there).
          For a normal scan operation doing an HFileBlock read, is doing this unbuffer correct? Then we can do a pread also, right? Or we can just make sure this unbuffer() call happens at the end of the scan on this HFile (when the StoreFileScanner is done with its Cells for the scan and is getting closed).
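          Anoop's suggestion of unbuffering once when the scanner closes, rather than after every block read, can be sketched with a small self-contained toy. Note these classes are illustrative stand-ins only, not HBase's actual StoreFileScanner or Hadoop's DFSInputStream:

```java
public class UnbufferOnClose {
    /** Stand-in for a DFSInputStream-like stream whose unbuffer() drops the socket. */
    static class ToyStream {
        int reads;
        int unbufferCalls;
        void readBlock() { reads++; }
        void unbuffer() { unbufferCalls++; }
    }

    /** Stand-in for a StoreFileScanner that owns the stream for one scan. */
    static class ToyScanner implements AutoCloseable {
        private final ToyStream stream;
        ToyScanner(ToyStream stream) { this.stream = stream; }
        void next() { stream.readBlock(); }                  // many reads per scan...
        @Override public void close() { stream.unbuffer(); } // ...one unbuffer at the end
    }

    /** Runs a scan of the given length and reports how often unbuffer() fired. */
    public static int scanThenClose(int blocks) {
        ToyStream stream = new ToyStream();
        try (ToyScanner scanner = new ToyScanner(stream)) {
            for (int i = 0; i < blocks; i++) {
                scanner.next();
            }
        }
        return stream.unbufferCalls; // released exactly once per scan, regardless of reads
    }
}
```

          The point of this shape is that the cost of unbuffer() is paid once per scan instead of once per block, which is the trade-off the comment above is weighing.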

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 2m 37s master passed
          +1 compile 0m 36s master passed with JDK v1.8.0
          +1 compile 0m 32s master passed with JDK v1.7.0_79
          +1 checkstyle 1m 22s master passed
          +1 mvneclipse 0m 16s master passed
          -1 findbugs 1m 48s hbase-server in master has 1 extant Findbugs warnings.
          +1 javadoc 0m 26s master passed with JDK v1.8.0
          +1 javadoc 0m 32s master passed with JDK v1.7.0_79
          +1 mvninstall 0m 40s the patch passed
          +1 compile 0m 36s the patch passed with JDK v1.8.0
          +1 javac 0m 36s the patch passed
          +1 compile 0m 31s the patch passed with JDK v1.7.0_79
          +1 javac 0m 31s the patch passed
          +1 checkstyle 1m 22s the patch passed
          +1 mvneclipse 0m 14s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          -1 hadoopcheck 0m 45s Patch causes 20 errors with Hadoop v2.4.0.
          -1 hadoopcheck 1m 30s Patch causes 20 errors with Hadoop v2.4.1.
          -1 hadoopcheck 2m 14s Patch causes 20 errors with Hadoop v2.5.0.
          -1 hadoopcheck 3m 0s Patch causes 20 errors with Hadoop v2.5.1.
          -1 hadoopcheck 3m 44s Patch causes 20 errors with Hadoop v2.5.2.
          -1 hadoopcheck 4m 29s Patch causes 20 errors with Hadoop v2.6.1.
          -1 hadoopcheck 5m 15s Patch causes 20 errors with Hadoop v2.6.2.
          -1 hadoopcheck 6m 1s Patch causes 20 errors with Hadoop v2.6.3.
          +1 findbugs 1m 56s the patch passed
          +1 javadoc 0m 25s the patch passed with JDK v1.8.0
          +1 javadoc 0m 31s the patch passed with JDK v1.7.0_79
          -1 unit 71m 12s hbase-server in the patch failed with JDK v1.8.0.
          -1 unit 72m 16s hbase-server in the patch failed with JDK v1.7.0_79.
          +1 asflicense 0m 12s Patch does not generate ASF License warnings.
          166m 39s



          Reason Tests
          JDK v1.8.0 Failed junit tests hadoop.hbase.regionserver.TestKeepDeletes
            hadoop.hbase.regionserver.TestBlocksScanned
            hadoop.hbase.client.TestIntraRowPagination
            hadoop.hbase.filter.TestColumnPrefixFilter
            hadoop.hbase.io.hfile.TestReseekTo
            hadoop.hbase.filter.TestMultipleColumnPrefixFilter
            hadoop.hbase.mob.TestCachedMobFile
            hadoop.hbase.filter.TestInvocationRecordFilter
            hadoop.hbase.io.encoding.TestPrefixTree
            hadoop.hbase.mob.TestMobFileCache
            hadoop.hbase.io.encoding.TestSeekBeforeWithReverseScan
            hadoop.hbase.regionserver.TestStoreFileRefresherChore
            hadoop.hbase.coprocessor.TestRegionObserverStacking
            hadoop.hbase.io.hfile.TestHFile
            hadoop.hbase.regionserver.TestRegionMergeTransaction
            hadoop.hbase.regionserver.TestMinVersions
            hadoop.hbase.regionserver.TestScanner
            hadoop.hbase.mob.TestMobFile
            hadoop.hbase.regionserver.TestStoreFileScannerWithTagCompression
            hadoop.hbase.regionserver.TestBulkLoad
            hadoop.hbase.io.hfile.TestScannerSelectionUsingKeyRange
            hadoop.hbase.io.hfile.TestChecksum
            hadoop.hbase.io.hfile.TestSeekTo
            hadoop.hbase.io.hfile.TestPrefetch
            hadoop.hbase.io.TestHalfStoreFileReader
            hadoop.hbase.io.hfile.TestHFileInlineToRootChunkConversion
            hadoop.hbase.regionserver.TestStoreFile
            hadoop.hbase.regionserver.TestSplitTransaction
            hadoop.hbase.filter.TestDependentColumnFilter
            hadoop.hbase.io.hfile.TestHFileBlockCompatibility
            hadoop.hbase.filter.TestFilter
            hadoop.hbase.regionserver.TestResettingCounters
            hadoop.hbase.io.hfile.TestHFileEncryption
            hadoop.hbase.io.hfile.TestHFileWriterV3
            hadoop.hbase.io.hfile.TestLazyDataBlockDecompression
            hadoop.hbase.coprocessor.TestCoprocessorInterface
            hadoop.hbase.regionserver.TestWideScanner
            hadoop.hbase.regionserver.TestScanWithBloomError
          JDK v1.7.0_79 Failed junit tests hadoop.hbase.regionserver.TestKeepDeletes
            hadoop.hbase.regionserver.TestBlocksScanned
            hadoop.hbase.client.TestIntraRowPagination
            hadoop.hbase.filter.TestColumnPrefixFilter
            hadoop.hbase.io.hfile.TestReseekTo
            hadoop.hbase.filter.TestMultipleColumnPrefixFilter
            hadoop.hbase.mob.TestCachedMobFile
            hadoop.hbase.filter.TestInvocationRecordFilter
            hadoop.hbase.io.encoding.TestPrefixTree
            hadoop.hbase.mob.TestMobFileCache
            hadoop.hbase.io.encoding.TestSeekBeforeWithReverseScan
            hadoop.hbase.regionserver.TestStoreFileRefresherChore
            hadoop.hbase.coprocessor.TestRegionObserverStacking
            hadoop.hbase.io.hfile.TestHFile
            hadoop.hbase.regionserver.TestRegionMergeTransaction
            hadoop.hbase.regionserver.TestMinVersions
            hadoop.hbase.regionserver.TestScanner
            hadoop.hbase.mob.TestMobFile
            hadoop.hbase.regionserver.TestStoreFileScannerWithTagCompression
            hadoop.hbase.regionserver.TestBulkLoad
            hadoop.hbase.io.hfile.TestScannerSelectionUsingKeyRange
            hadoop.hbase.io.hfile.TestChecksum
            hadoop.hbase.io.hfile.TestSeekTo
            hadoop.hbase.io.hfile.TestPrefetch
            hadoop.hbase.io.TestHalfStoreFileReader
            hadoop.hbase.io.hfile.TestHFileInlineToRootChunkConversion
            hadoop.hbase.regionserver.TestStoreFile
            hadoop.hbase.regionserver.TestSplitTransaction
            hadoop.hbase.filter.TestDependentColumnFilter
            hadoop.hbase.io.hfile.TestHFileBlockCompatibility
            hadoop.hbase.filter.TestFilter
            hadoop.hbase.regionserver.TestResettingCounters
            hadoop.hbase.io.hfile.TestHFileEncryption
            hadoop.hbase.io.hfile.TestHFileWriterV3
            hadoop.hbase.io.hfile.TestLazyDataBlockDecompression
            hadoop.hbase.coprocessor.TestCoprocessorInterface
            hadoop.hbase.regionserver.TestWideScanner
            hadoop.hbase.regionserver.TestScanWithBloomError



          Subsystem Report/Notes
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12783342/HBASE-9393.patch
          JIRA Issue HBASE-9393
          Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile
          uname Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / 93e200d
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/208/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/208/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0.txt
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/208/artifact/patchprocess/patch-unit-hbase-server-jdk1.7.0_79.txt
          unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/208/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0.txt https://builds.apache.org/job/PreCommit-HBASE-Build/208/artifact/patchprocess/patch-unit-hbase-server-jdk1.7.0_79.txt
          JDK v1.7.0_79 Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/208/testReport/
          modules C: hbase-server U: hbase-server
          Max memory used 191MB
          Powered by Apache Yetus 0.1.0 http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/208/console

          This message was automatically generated.

          Ted Yu added a comment -

          Test failures were due to:

          testSeekTo[4](org.apache.hadoop.hbase.io.hfile.TestSeekTo)  Time elapsed: 0.033 sec  <<< ERROR!
          java.lang.UnsupportedOperationException: this stream does not support unbuffering.
          	at org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:229)
          	at org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:227)
          	at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:518)
          	at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:562)
          	at org.apache.hadoop.hbase.io.hfile.TestSeekTo.testSeekToInternals(TestSeekTo.java:307)
          	at org.apache.hadoop.hbase.io.hfile.TestSeekTo.testSeekTo(TestSeekTo.java:298)
          

          Here was the cause:

          java.lang.ClassCastException: org.apache.hadoop.fs.BufferedFSInputStream cannot be cast to org.apache.hadoop.fs.CanUnbuffer
          

          BufferedFSInputStream currently doesn't implement CanUnbuffer.

          After discussion with Colin Patrick McCabe, I logged HADOOP-12724.
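          The failure mode described above, and the guard that avoids it, can be modeled with a minimal self-contained toy. The classes below only mimic the shape of Hadoop's FSDataInputStream/CanUnbuffer contract; they are not the real API:

```java
public class CanUnbufferGuard {
    public interface CanUnbuffer { void unbuffer(); }

    /** Like DFSInputStream: supports releasing buffers and the underlying socket. */
    public static class DfsLikeStream implements CanUnbuffer {
        boolean unbuffered;
        @Override public void unbuffer() { unbuffered = true; }
    }

    /** Like BufferedFSInputStream on the local filesystem: no CanUnbuffer support. */
    public static class LocalLikeStream { }

    /** Like FSDataInputStream: delegates unbuffer(), throwing if unsupported. */
    public static class WrapperStream {
        private final Object wrapped;
        public WrapperStream(Object wrapped) { this.wrapped = wrapped; }
        public Object getWrappedStream() { return wrapped; }
        public void unbuffer() {
            if (wrapped instanceof CanUnbuffer) {
                ((CanUnbuffer) wrapped).unbuffer();
            } else {
                // mirrors the UnsupportedOperationException seen in the test failures
                throw new UnsupportedOperationException(
                    "this stream does not support unbuffering.");
            }
        }
    }

    /** The guard pattern: unbuffer only when the wrapped stream supports it. */
    public static boolean guardedUnbuffer(WrapperStream stream) {
        if (stream.getWrappedStream() instanceof CanUnbuffer) {
            stream.unbuffer();
            return true;
        }
        return false; // e.g. local filesystem in tests: skip instead of throwing
    }
}
```

          Checking the wrapped stream's type before calling unbuffer() is why the patch works on HDFS but stays harmless when tests run against the local filesystem.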

          KWON BYUNGCHANG added a comment -

          Which Hadoop version is in use?

          I have figured out how to build HBase with Hadoop 2.7.1 (HBASE-15138).

          ramkrishna.s.vasudevan added a comment -

          Ashish Singhi
          Just going through this JIRA and the discussion above. Once an HFile is created in the RS, we immediately keep it open (the streams stay alive). So during this process, do we read the FileInfo? If so, just after reading the FileInfo we can call unbuffer().
          And as Anoop says, after reading the HFileBlock I am not very sure it is right to call unbuffer(); instead, we can call unbuffer() after every scan is done. But how costly is that operation?
          One more question: if preads do not keep these sockets open, then that should ideally help in all these cases, but we may need to evaluate which one is going to be costlier.

          Vinayakumar B added a comment -

          I think those test failures are because the local file system is being used to read/write HFiles, which gives a BufferedFSInputStream that doesn't implement CanUnbuffer.

          Ashish Singhi added a comment -

          Anoop Sam John and ramkrishna vasudevan, thanks for the comments.
          I think we are purposely not doing pread here, as we are passing the value false to the HFileBlock read. (For meta and small scans I see pread is set to true.) When replacing the unbuffer with pread in our internal test, we found a performance degradation of about 9%, while with the attached patch there was a slight (4%) improvement, using the commands below:

          hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestPerf --presplit=134 --rows=1000000 randomWrite 14
          hbase org.apache.hadoop.hbase.PerformanceEvaluation --table=TestPerf --rows=1000000 randomRead 14
          

          Or we can just make sure this unbuffer() call happens at the end of the scan on this HFile. (When StoreFileScanner done with its Cells for the scan and getting closed)

          Yes, I am working on it. On the first try I saw that the CLOSE_WAIT sockets are not getting closed; I need to dig more into it to find out what difference it makes to close the socket in the close call versus immediately after the read. I will come up with the result soon.

          Ted Yu, Thanks a lot for checking the test failure and for the help on the same.

          Vinayakumar B, thanks. I will add a check so that unbuffer() is called only if the stream is an instance of DFSInputStream.

          Ashish Singhi added a comment -

          I have tried with Apache Hadoop 2.6.0 and a custom Hadoop almost equal to 2.7.2.
          But I strongly think this is not related to Hadoop.

          Ashish Singhi added a comment -

          Attached the v1 patch addressing the review comment.
          Tested the patch with PE using the following commands:

          hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --table=TestPerf --presplit=134 --rows=1000000 randomWrite 20
          
          hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --table=TestPerf --addColumns=false --filterAll --rows=1000 scanRange10000 20
          

          Did not find any performance impact with the patch.
          Time taken without patch:

          2016-01-22 19:17:16,312 INFO  [main] hbase.PerformanceEvaluation: [RandomScanWithRange10000Test]	Min: 237798ms	Max: 242298ms	Avg: 241073ms
          

          Time taken with patch:

          2016-01-22 20:20:33,809 INFO  [main] hbase.PerformanceEvaluation: [RandomScanWithRange10000Test]	Min: 224436ms	Max: 227248ms	Avg: 226417ms
          

          Note: I have tested the patch on my one-node cluster on the master branch, with block cache disabled and all other configurations almost equal to the default values.

          Please review.

          Ted Yu added a comment -
          boolean useHBaseChecksum = this.streamWrapper.shouldUseHBaseChecksum();
          final FSDataInputStream stream = this.streamWrapper.getStream(useHBaseChecksum);
          if (stream.getWrappedStream() instanceof DFSInputStream) {
            stream.unbuffer();
          }

          The above code is repeated. Suggest refactoring into a method.

          Please run PE tool for other read actions.
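          A hedged sketch of the refactoring suggested above: the duplicated "get stream, check wrapped type, unbuffer" snippet moves into a single helper method. Stream and StreamWrapper below are self-contained stand-ins for Hadoop's FSDataInputStream and HBase's stream wrapper, not the real classes:

```java
public class UnbufferHelper {
    public interface CanUnbuffer { void unbuffer(); }

    /** Minimal stand-in for FSDataInputStream. */
    public static class Stream {
        private final Object wrapped;
        public int unbufferCalls;
        public Stream(Object wrapped) { this.wrapped = wrapped; }
        public Object getWrappedStream() { return wrapped; }
        public void unbuffer() { unbufferCalls++; }
    }

    /** Minimal stand-in for the reader's stream wrapper. */
    public static class StreamWrapper {
        private final Stream stream;
        public StreamWrapper(Stream stream) { this.stream = stream; }
        public boolean shouldUseHBaseChecksum() { return true; }
        public Stream getStream(boolean useHBaseChecksum) { return stream; }
    }

    private final StreamWrapper streamWrapper;
    public UnbufferHelper(StreamWrapper streamWrapper) { this.streamWrapper = streamWrapper; }

    /** The extracted helper: every former call site shrinks to unbufferStream(). */
    public void unbufferStream() {
        boolean useHBaseChecksum = this.streamWrapper.shouldUseHBaseChecksum();
        final Stream stream = this.streamWrapper.getStream(useHBaseChecksum);
        if (stream.getWrappedStream() instanceof CanUnbuffer) {
            stream.unbuffer();
        }
    }
}
```

          Centralizing the snippet keeps the instanceof guard in one place, so a later change (for example, widening the check beyond DFSInputStream) touches a single method.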

          Ashish Singhi added a comment -

          Please run PE tool for other read actions.

          Can you suggest which ones to run? I will run them now.

          Ted Yu added a comment -

          How about the following:

          randomSeekScan
          randomRead
          sequentialRead
          filterScan

          Ashish Singhi added a comment -

          Ok. Let me try.

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 3m 6s master passed
          +1 compile 0m 34s master passed with JDK v1.8.0_66
          +1 compile 0m 36s master passed with JDK v1.7.0_91
          +1 checkstyle 3m 57s master passed
          +1 mvneclipse 0m 17s master passed
          -1 findbugs 1m 56s hbase-server in master has 1 extant Findbugs warnings.
          +1 javadoc 0m 29s master passed with JDK v1.8.0_66
          +1 javadoc 0m 35s master passed with JDK v1.7.0_91
          +1 mvninstall 0m 47s the patch passed
          +1 compile 0m 35s the patch passed with JDK v1.8.0_66
          +1 javac 0m 35s the patch passed
          +1 compile 0m 36s the patch passed with JDK v1.7.0_91
          +1 javac 0m 36s the patch passed
          +1 checkstyle 4m 18s the patch passed
          +1 mvneclipse 0m 17s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          -1 hadoopcheck 1m 0s Patch causes 20 errors with Hadoop v2.4.0.
          -1 hadoopcheck 1m 50s Patch causes 20 errors with Hadoop v2.4.1.
          -1 hadoopcheck 2m 41s Patch causes 20 errors with Hadoop v2.5.0.
          -1 hadoopcheck 3m 36s Patch causes 20 errors with Hadoop v2.5.1.
          -1 hadoopcheck 4m 31s Patch causes 20 errors with Hadoop v2.5.2.
          -1 hadoopcheck 5m 24s Patch causes 20 errors with Hadoop v2.6.1.
          -1 hadoopcheck 6m 17s Patch causes 20 errors with Hadoop v2.6.2.
          -1 hadoopcheck 7m 11s Patch causes 20 errors with Hadoop v2.6.3.
          +1 findbugs 1m 58s the patch passed
          +1 javadoc 0m 24s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 35s the patch passed with JDK v1.7.0_91
          -1 unit 15m 16s hbase-server in the patch failed with JDK v1.8.0_66.
          -1 unit 17m 47s hbase-server in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 11s Patch does not generate ASF License warnings.
          64m 27s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.hbase.io.hfile.TestHFileEncryption
            hadoop.hbase.io.hfile.TestHFile
          JDK v1.7.0_91 Failed junit tests hadoop.hbase.io.hfile.TestHFileEncryption
            hadoop.hbase.io.hfile.TestHFile



          Subsystem Report/Notes
          Docker Client=1.7.1 Server=1.7.1 Image:yetus/hbase:date2016-01-22
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12783858/HBASE-9393.v1.patch
          JIRA Issue HBASE-9393
          Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile
          uname Linux 6bda2421f74d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / f9e69b5
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/257/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/257/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/257/artifact/patchprocess/patch-unit-hbase-server-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/257/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HBASE-Build/257/artifact/patchprocess/patch-unit-hbase-server-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/257/testReport/
          modules C: hbase-server U: hbase-server
          Max memory used 173MB
          Powered by Apache Yetus 0.1.0 http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/257/console

          This message was automatically generated.

          Ted Yu added a comment -

          w.r.t. test failure:

          java.lang.NullPointerException: null
          	at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:525)
          	at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:571)
          	at org.apache.hadoop.hbase.io.hfile.TestHFile.testCorrupt0LengthHFile(TestHFile.java:116)
          

          For a corrupt HFile, stream is null in the following call:

                if (stream.getWrappedStream() instanceof DFSInputStream) {
          

          Please add null check.

          Ashish Singhi added a comment -

          Patch addressing test failures and code refactor comment.

          Ashish Singhi added a comment -

          Ran some tests,
          0. randomWrite

          hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --presplit=134 --rows=100000 randomWrite 20
          

          1. randomRead

          hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows=1000 randomRead 20
          

          Without patch

          2016-01-22 23:49:52,305 INFO  [main] hbase.PerformanceEvaluation: [RandomReadTest] Summary of timings (ms): [7223, 7034, 6950, 6945, 6882, 6830, 7107, 7097, 7122, 6675, 7072, 6636, 7080, 6533, 6987, 6305, 7227, 7172, 6608, 6589]
          2016-01-22 23:49:52,306 INFO  [main] hbase.PerformanceEvaluation: [RandomReadTest]	Min: 6305ms	Max: 7227ms	Avg: 6903ms
          

          With patch

          2016-01-22 23:43:08,695 INFO  [main] hbase.PerformanceEvaluation: [RandomReadTest] Summary of timings (ms): [6406, 6648, 7623, 6678, 7163, 6673, 7150, 6712, 6412, 7169, 6364, 6214, 7293, 7484, 7633, 7212, 7350, 6447, 7101, 6499]
          2016-01-22 23:43:08,696 INFO  [main] hbase.PerformanceEvaluation: [RandomReadTest]	Min: 6214ms	Max: 7633ms	Avg: 6911ms
          

          2. sequentialRead

          hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows=1000 sequentialRead 20
          

          Without patch

          2016-01-22 23:54:56,024 INFO  [main] hbase.PerformanceEvaluation: [SequentialReadTest] Summary of timings (ms): [6476, 6150, 6291, 6381, 6284, 6182, 6069, 6350, 6394, 6200, 6260, 6349, 6240, 5974, 6014, 5965, 6483, 6025, 6098, 6389]
          2016-01-22 23:54:56,025 INFO  [main] hbase.PerformanceEvaluation: [SequentialReadTest]	Min: 5965ms	Max: 6483ms	Avg: 6228ms
          

          With patch

          2016-01-22 23:58:40,519 INFO  [main] hbase.PerformanceEvaluation: [RandomReadTest] Summary of timings (ms): [6985, 6720, 6970, 6756, 6468, 6890, 6719, 7003, 6348, 6803, 6584, 6846, 6793, 6496, 6490, 6879, 6450, 6663, 6921, 6896]
          2016-01-22 23:58:40,520 INFO  [main] hbase.PerformanceEvaluation: [RandomReadTest]	Min: 6348ms	Max: 7003ms	Avg: 6734ms
          

          3. randomSeekScan

          hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows=1000 randomSeekScan 20
          

          Without patch

          2016-01-23 00:12:01,954 INFO  [main] hbase.PerformanceEvaluation: [RandomSeekScanTest] Summary of timings (ms): [105473, 107416, 88796, 91563, 104032, 101899, 99557, 103790, 107929, 94213, 103251, 101177, 106168, 106903, 106086, 101905, 97543, 68672, 91004, 105064]
          2016-01-23 00:12:01,954 INFO  [main] hbase.PerformanceEvaluation: [RandomSeekScanTest]	Min: 68672ms	Max: 107929ms	Avg: 99622ms
          

          With patch

          2016-01-23 00:05:07,185 INFO  [main] hbase.PerformanceEvaluation: [RandomSeekScanTest] Summary of timings (ms): [78781, 82973, 76085, 81127, 74558, 74974, 60761, 77760, 80286, 70820, 71463, 74105, 70433, 64313, 80937, 82408, 81356, 83155, 65988, 82360]
          2016-01-23 00:05:07,186 INFO  [main] hbase.PerformanceEvaluation: [RandomSeekScanTest]	Min: 60761ms	Max: 83155ms	Avg: 75732ms
          

          4. filterScan

          hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows=100 filterScan 3
          

          Without patch

          2016-01-23 01:01:10,168 INFO  [main] hbase.PerformanceEvaluation: [FilteredScanTest] Summary of timings (ms): [417507, 425604, 420263]
          2016-01-23 01:01:10,169 INFO  [main] hbase.PerformanceEvaluation: [FilteredScanTest]	Min: 417507ms	Max: 425604ms	Avg: 421124ms
          

          With patch

          2016-01-23 01:17:28,614 INFO  [main] hbase.PerformanceEvaluation: [FilteredScanTest] Summary of timings (ms): [359967, 358833, 359256]
          2016-01-23 01:17:28,615 INFO  [main] hbase.PerformanceEvaluation: [FilteredScanTest]	Min: 358833ms	Max: 359967ms	Avg: 359352ms
          
          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 2m 43s master passed
          +1 compile 0m 32s master passed with JDK v1.8.0_66
          +1 compile 0m 34s master passed with JDK v1.7.0_91
          +1 checkstyle 3m 55s master passed
          +1 mvneclipse 0m 16s master passed
          -1 findbugs 1m 52s hbase-server in master has 1 extant Findbugs warnings.
          +1 javadoc 0m 24s master passed with JDK v1.8.0_66
          +1 javadoc 0m 33s master passed with JDK v1.7.0_91
          +1 mvninstall 0m 45s the patch passed
          +1 compile 0m 32s the patch passed with JDK v1.8.0_66
          +1 javac 0m 32s the patch passed
          +1 compile 0m 36s the patch passed with JDK v1.7.0_91
          +1 javac 0m 36s the patch passed
          +1 checkstyle 3m 58s the patch passed
          +1 mvneclipse 0m 16s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          -1 hadoopcheck 0m 53s Patch causes 16 errors with Hadoop v2.4.0.
          -1 hadoopcheck 1m 43s Patch causes 16 errors with Hadoop v2.4.1.
          -1 hadoopcheck 2m 34s Patch causes 16 errors with Hadoop v2.5.0.
          -1 hadoopcheck 3m 24s Patch causes 16 errors with Hadoop v2.5.1.
          -1 hadoopcheck 4m 15s Patch causes 16 errors with Hadoop v2.5.2.
          -1 hadoopcheck 5m 5s Patch causes 16 errors with Hadoop v2.6.1.
          -1 hadoopcheck 5m 56s Patch causes 16 errors with Hadoop v2.6.2.
          -1 hadoopcheck 6m 47s Patch causes 16 errors with Hadoop v2.6.3.
          +1 findbugs 1m 58s the patch passed
          +1 javadoc 0m 23s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 33s the patch passed with JDK v1.7.0_91
          +1 unit 81m 7s hbase-server in the patch passed with JDK v1.8.0_66.
          +1 unit 80m 21s hbase-server in the patch passed with JDK v1.7.0_91.
          +1 asflicense 0m 16s Patch does not generate ASF License warnings.
          191m 18s



          Subsystem Report/Notes
          Docker Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-22
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12783872/HBASE-9393.v2.patch
          JIRA Issue HBASE-9393
          Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile
          uname Linux 07e79f2c49df 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / f9e69b5
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/259/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/259/testReport/
          modules C: hbase-server U: hbase-server
          Max memory used 173MB
          Powered by Apache Yetus 0.1.0 http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/259/console

          This message was automatically generated.

          Anoop Sam John added a comment -

          In HFileScannerImpl

          public void close() {
            reader.unbuffer();
            this.returnBlocks(true);
          

          Maybe we should call this unbuffer() only if it was a seek + read (scan). For the get case we will be using pread. We have a boolean pread state in this class; we can make use of that.
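As an editor's illustration of the suggestion above (not HBase's actual HFileScannerImpl; all names here are hypothetical), gating the unbuffer call on the read mode can be sketched as:

```java
// Hypothetical sketch: only release buffered socket state when the
// scanner streamed data (seek + read); positional reads (pread) are
// short-lived and would skip the unbuffer. Illustrative names only.
public class UnbufferOnCloseSketch {
    static class ScannerSketch {
        final boolean pread;        // true for get-style positional reads
        boolean unbuffered = false; // stands in for the stream's state

        ScannerSketch(boolean pread) { this.pread = pread; }

        void close() {
            if (!pread) {
                unbuffered = true;  // stands in for reader.unbuffer()
            }
            // ... this.returnBlocks(true) would follow here ...
        }
    }

    public static void main(String[] args) {
        ScannerSketch scan = new ScannerSketch(false);
        scan.close();
        System.out.println(scan.unbuffered);  // true: scan path unbuffers

        ScannerSketch get = new ScannerSketch(true);
        get.close();
        System.out.println(get.unbuffered);   // false: pread path skips it
    }
}
```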

          Anoop Sam John added a comment -

          The FSDataInputStream unbuffer API has been available since Hadoop 2.7.0, as far as I can see in the Maven repo. So for older HBase versions, where we also have to support older Hadoop versions, we need to handle this in some other way!

          Ted Yu added a comment - - edited

          For branch-1,

          +    if (stream != null && stream.getWrappedStream() instanceof DFSInputStream) {
          

          The instanceof check can be replaced by iterating over stream.getWrappedStream().getClass().getInterfaces() and checking whether one of the interfaces is CanUnbuffer.
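An editor's sketch of that suggestion: look for the interface by name via reflection instead of a compile-time instanceof, so the code still compiles against Hadoop versions that lack the CanUnbuffer class. The helper name and the simplified lookup (direct interfaces of the class and its superclasses only, not super-interfaces) are illustrative, not HBase's code.

```java
// Hypothetical helper: check whether a class implements an interface,
// identified by name, without referencing the interface type directly.
public class CanUnbufferCheck {
    static boolean implementsInterface(Class<?> clazz, String ifaceName) {
        // Walk the class and its superclasses, checking the directly
        // declared interfaces of each (a fuller version would also
        // recurse into super-interfaces).
        for (Class<?> c = clazz; c != null; c = c.getSuperclass()) {
            for (Class<?> iface : c.getInterfaces()) {
                if (iface.getName().equals(ifaceName)) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // ArrayList directly implements java.util.List
        System.out.println(implementsInterface(java.util.ArrayList.class, "java.util.List"));  // true
        // String does not implement java.util.List
        System.out.println(implementsInterface(String.class, "java.util.List"));               // false
    }
}
```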

          Anoop Sam John added a comment -

          unbuffer support was added in 2.7.0 (see https://issues.apache.org/jira/browse/HDFS-7694).
          So for clusters based on older versions, we cannot solve this?

          Ashish Singhi added a comment -

          Just to update here: I was using Apache Hadoop 2.6.0 on the server, and the client was on Hadoop 2.7.1 (the default for the hbase master branch), so I did not face this issue.

          Thanks for the finding, Anoop.

          Anoop Sam John added a comment -

          After some discussion, this is what we think:
          We will get this fix in for 2.0 only (as there we have a Hadoop 2.7.0+ version by default).
          Users on older versions who see this issue can fix it by upping their client-side Hadoop version to at least 2.7.0 and applying this patch.
          Branch-1 is on Hadoop 2.5.x by default, so unless the default version is upped there, there is no point in adding the fix there.
          We can open a backport jira when applicable.
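For branches that must keep compiling against pre-2.7.0 Hadoop, one option (an editor's sketch only; this helper is hypothetical and not part of any attached patch) is to invoke unbuffer() reflectively and treat its absence as a no-op:

```java
// Hypothetical sketch: call unbuffer() without a compile-time dependency
// on Hadoop 2.7.0's CanUnbuffer. On older streams the method simply does
// not exist, and the call degrades to a no-op.
public class ReflectiveUnbuffer {
    static void unbufferIfPossible(Object stream) {
        try {
            stream.getClass().getMethod("unbuffer").invoke(stream);
        } catch (NoSuchMethodException e) {
            // pre-2.7.0 stream: nothing to release
        } catch (ReflectiveOperationException e) {
            // failing to unbuffer should not fail the close path
        }
    }

    // Stand-in for a Hadoop 2.7.0+ stream exposing unbuffer().
    public static class FakeStream {
        public boolean unbuffered = false;
        public void unbuffer() { unbuffered = true; }
    }

    public static void main(String[] args) {
        FakeStream s = new FakeStream();
        unbufferIfPossible(s);
        System.out.println(s.unbuffered);      // true: method found and invoked
        unbufferIfPossible("no such method");  // String has no unbuffer(): no-op
    }
}
```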

          Ashish Singhi added a comment -

          Attached patch addressing review comment.

          Thanks for all the offline discussion on this, Anoop & Ram.
          For now this issue will be fixed only in 2.0.0. Once we plan to up our Hadoop version to 2.7.x+ in any of our branches, we can fix the issue there too as part of a backport jira.

          Ted Yu added a comment -

          I think the name of the helper method should be unbufferStream.
          The method name should start with a verb, followed by the target being unbuffered.

          What do you think ?

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 2m 42s master passed
          +1 compile 0m 31s master passed with JDK v1.8.0_66
          +1 compile 0m 33s master passed with JDK v1.7.0_91
          +1 checkstyle 4m 12s master passed
          +1 mvneclipse 0m 17s master passed
          -1 findbugs 1m 51s hbase-server in master has 1 extant Findbugs warnings.
          +1 javadoc 0m 25s master passed with JDK v1.8.0_66
          +1 javadoc 0m 33s master passed with JDK v1.7.0_91
          +1 mvninstall 0m 45s the patch passed
          +1 compile 0m 31s the patch passed with JDK v1.8.0_66
          +1 javac 0m 31s the patch passed
          +1 compile 0m 34s the patch passed with JDK v1.7.0_91
          +1 javac 0m 34s the patch passed
          +1 checkstyle 4m 13s the patch passed
          +1 mvneclipse 0m 17s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          -1 hadoopcheck 0m 50s Patch causes 28 errors with Hadoop v2.4.0.
          -1 hadoopcheck 1m 38s Patch causes 28 errors with Hadoop v2.4.1.
          -1 hadoopcheck 2m 28s Patch causes 28 errors with Hadoop v2.5.0.
          -1 hadoopcheck 3m 17s Patch causes 28 errors with Hadoop v2.5.1.
          -1 hadoopcheck 4m 6s Patch causes 28 errors with Hadoop v2.5.2.
          -1 hadoopcheck 4m 55s Patch causes 28 errors with Hadoop v2.6.1.
          -1 hadoopcheck 5m 44s Patch causes 28 errors with Hadoop v2.6.2.
          -1 hadoopcheck 6m 34s Patch causes 28 errors with Hadoop v2.6.3.
          -1 findbugs 2m 2s hbase-server introduced 1 new FindBugs issues.
          +1 javadoc 0m 24s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 32s the patch passed with JDK v1.7.0_91
          -1 unit 78m 24s hbase-server in the patch failed with JDK v1.8.0_66.
          +1 unit 79m 57s hbase-server in the patch passed with JDK v1.7.0_91.
          +1 asflicense 0m 18s Patch does not generate ASF License warnings.
          188m 27s



          Reason Tests
          FindBugs module:hbase-server
            instanceof will always return true for all non-null values in org.apache.hadoop.hbase.io.hfile.HFile.streamUnbuffer(FSDataInputStreamWrapper), since all org.apache.hadoop.fs.FSDataInputStream are instances of org.apache.hadoop.fs.CanUnbuffer At HFile.java:for all non-null values in org.apache.hadoop.hbase.io.hfile.HFile.streamUnbuffer(FSDataInputStreamWrapper), since all org.apache.hadoop.fs.FSDataInputStream are instances of org.apache.hadoop.fs.CanUnbuffer At HFile.java:[line 530]
          JDK v1.8.0_66 Failed junit tests hadoop.hbase.mapreduce.TestImportExport
            hadoop.hbase.mapreduce.TestImportTsv
            hadoop.hbase.regionserver.TestRowTooBig



          Subsystem Report/Notes
          Docker Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-23
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12784000/HBASE-9393.v3.patch
          JIRA Issue HBASE-9393
          Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile
          uname Linux dc6f3a39d503 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / 772f30f
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/266/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/266/artifact/patchprocess/new-findbugs-hbase-server.html
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/266/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_66.txt
          unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/266/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_66.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/266/testReport/
          modules C: hbase-server U: hbase-server
          Max memory used 174MB
          Powered by Apache Yetus 0.1.0 http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/266/console

          This message was automatically generated.

          Ted Yu added a comment -
          +        && (stream instanceof CanUnbuffer || stream.getWrappedStream() instanceof CanUnbuffer)) {
          

          As the FindBugs warning shows, stream instanceof CanUnbuffer will always be true.
          The issue of BufferedFSInputStream not implementing CanUnbuffer, raised on patch v1, is still present in v3.
          The catch clause hides the issue.
          Since we call unbuffer on the wrapped stream, the above condition should be changed to:

          stream.getWrappedStream() instanceof CanUnbuffer
          

          Please also change the method name in patch v4.
          Thanks
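          A self-contained sketch of the guard under discussion. The types here are hypothetical stand-ins for the Hadoop ones (org.apache.hadoop.fs.CanUnbuffer, FSDataInputStream and its wrapped stream); only the instanceof condition itself mirrors the review comment.

```java
public class UnbufferGuardSketch {
    // Stand-in for org.apache.hadoop.fs.CanUnbuffer.
    interface CanUnbuffer { void unbuffer(); }

    // A wrapped stream that cannot release its socket...
    static class PlainStream {}
    // ...and one that can.
    static class SocketStream extends PlainStream implements CanUnbuffer {
        public void unbuffer() { /* would drop the cached socket here */ }
    }

    // The corrected condition: test the *wrapped* stream only. Testing the
    // outer FSDataInputStream is the tautology FindBugs flagged, since it
    // always implements CanUnbuffer.
    static boolean canUnbuffer(Object wrappedStream) {
        return wrappedStream instanceof CanUnbuffer;
    }
}
```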

          Ashish Singhi added a comment -

          Attached v4 addressing the comment. Thanks

          Ted Yu added a comment -

          Here is the signature of the unbuffer() method:

            public void unbuffer();
          

          This means if we catch Throwable, something has gone wrong.
          Suggest changing DEBUG to ERROR log level - can be done at commit.

          +      // Enclosing unbuffer() in try-catch just to be on defensive side.
          +      try {
          +        stream.unbuffer();
          +      } catch (Throwable e) {
          +        LOG.debug("Ignoring the exception caught on closing the socket of the FSDataInputStream",
          +          e);
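          The defensive pattern quoted above, as a minimal self-contained sketch; the interface and helper name are hypothetical stand-ins (the real call lives in the patch's unbufferStream method on org.apache.hadoop.fs.CanUnbuffer).

```java
public class DefensiveUnbuffer {
    // Stand-in for org.apache.hadoop.fs.CanUnbuffer.
    interface CanUnbuffer { void unbuffer(); }

    // unbuffer() declares no checked exceptions, so anything caught here
    // signals a genuine problem -- the reason reviewers asked for an
    // ERROR-level log rather than DEBUG.
    static boolean unbufferQuietly(CanUnbuffer stream) {
        try {
            stream.unbuffer();
            return true;
        } catch (Throwable t) {
            System.err.println("Failed to unbuffer the stream: " + t);
            return false;
        }
    }
}
```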
          
          Ashish Singhi added a comment -

          Thanks for comment, Ted.

          Yes, the unbuffer API does not throw any exception; I added the try-catch just to be on the defensive side, as I mentioned in the comment. The intention behind the try-catch block was that if this unbuffer call throws some error such as NoSuchMethodError, we should not let the thread die.
          The intention of not logging at WARN/ERROR level was that neither the user nor we can do anything here, and it avoids the user panicking about it. The socket will be closed automatically by the HDFS client on the next read operation on this stream.

          Anyway, the call on the logging level is up to the reviewer. The above was just my thinking behind keeping the log at DEBUG level.

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 2m 55s master passed
          +1 compile 0m 44s master passed with JDK v1.8.0_66
          +1 compile 0m 38s master passed with JDK v1.7.0_91
          +1 checkstyle 4m 6s master passed
          +1 mvneclipse 0m 20s master passed
          -1 findbugs 2m 13s hbase-server in master has 1 extant Findbugs warnings.
          +1 javadoc 0m 33s master passed with JDK v1.8.0_66
          +1 javadoc 0m 38s master passed with JDK v1.7.0_91
          +1 mvninstall 0m 54s the patch passed
          +1 compile 0m 45s the patch passed with JDK v1.8.0_66
          +1 javac 0m 45s the patch passed
          +1 compile 0m 40s the patch passed with JDK v1.7.0_91
          +1 javac 0m 40s the patch passed
          +1 checkstyle 4m 37s the patch passed
          +1 mvneclipse 0m 19s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          -1 hadoopcheck 0m 59s Patch causes 24 errors with Hadoop v2.4.0.
          -1 hadoopcheck 2m 0s Patch causes 24 errors with Hadoop v2.4.1.
          -1 hadoopcheck 3m 0s Patch causes 24 errors with Hadoop v2.5.0.
          -1 hadoopcheck 4m 2s Patch causes 24 errors with Hadoop v2.5.1.
          -1 hadoopcheck 5m 6s Patch causes 24 errors with Hadoop v2.5.2.
          -1 hadoopcheck 6m 10s Patch causes 24 errors with Hadoop v2.6.1.
          -1 hadoopcheck 7m 17s Patch causes 24 errors with Hadoop v2.6.2.
          -1 hadoopcheck 8m 23s Patch causes 24 errors with Hadoop v2.6.3.
          +1 findbugs 1m 59s the patch passed
          +1 javadoc 0m 29s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 33s the patch passed with JDK v1.7.0_91
          +1 unit 99m 25s hbase-server in the patch passed with JDK v1.8.0_66.
          +1 unit 113m 50s hbase-server in the patch passed with JDK v1.7.0_91.
          +1 asflicense 0m 16s Patch does not generate ASF License warnings.
          247m 25s



          Subsystem Report/Notes
          Docker Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-23
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12784019/HBASE-9393.v4.patch
          JIRA Issue HBASE-9393
          Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile
          uname Linux 934ce95c5f69 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / 6ed3c75
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/269/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/269/testReport/
          modules C: hbase-server U: hbase-server
          Max memory used 170MB
          Powered by Apache Yetus 0.1.0 http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/269/console

          This message was automatically generated.

          Anoop Sam John added a comment -

          Ya, as Ted suggested, we need to log it at ERROR, not DEBUG. Log the exception and say we could not unbuffer the stream, so there is a possible resource outage!
          unbufferStream -> Can this same name be used for the method in the interface also?

          Ashish Singhi added a comment -

          Addressed the comment.

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 2m 40s master passed
          +1 compile 0m 44s master passed with JDK v1.8.0_66
          +1 compile 0m 40s master passed with JDK v1.7.0_91
          +1 checkstyle 4m 17s master passed
          +1 mvneclipse 0m 18s master passed
          -1 findbugs 2m 5s hbase-server in master has 1 extant Findbugs warnings.
          +1 javadoc 0m 49s master passed with JDK v1.8.0_66
          +1 javadoc 0m 41s master passed with JDK v1.7.0_91
          +1 mvninstall 1m 0s the patch passed
          +1 compile 0m 53s the patch passed with JDK v1.8.0_66
          +1 javac 0m 53s the patch passed
          +1 compile 0m 43s the patch passed with JDK v1.7.0_91
          +1 javac 0m 43s the patch passed
          +1 checkstyle 4m 38s the patch passed
          +1 mvneclipse 0m 24s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          -1 hadoopcheck 1m 13s Patch causes 24 errors with Hadoop v2.4.0.
          -1 hadoopcheck 2m 16s Patch causes 24 errors with Hadoop v2.4.1.
          -1 hadoopcheck 3m 24s Patch causes 24 errors with Hadoop v2.5.0.
          -1 hadoopcheck 4m 25s Patch causes 24 errors with Hadoop v2.5.1.
          -1 hadoopcheck 5m 33s Patch causes 24 errors with Hadoop v2.5.2.
          -1 hadoopcheck 6m 40s Patch causes 24 errors with Hadoop v2.6.1.
          -1 hadoopcheck 7m 45s Patch causes 24 errors with Hadoop v2.6.2.
          -1 hadoopcheck 8m 46s Patch causes 24 errors with Hadoop v2.6.3.
          +1 findbugs 2m 35s the patch passed
          +1 javadoc 0m 48s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 48s the patch passed with JDK v1.7.0_91
          -1 unit 126m 6s hbase-server in the patch failed with JDK v1.8.0_66.
          +1 unit 90m 44s hbase-server in the patch passed with JDK v1.7.0_91.
          +1 asflicense 0m 24s Patch does not generate ASF License warnings.
          253m 45s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.hbase.master.balancer.TestStochasticLoadBalancer



          Subsystem Report/Notes
          Docker Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12784133/HBASE-9393.v5.patch
          JIRA Issue HBASE-9393
          Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile
          uname Linux fb88fdc6efa9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
          git revision master / a87d956
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/276/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/276/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_66.txt
          unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/276/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_66.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/276/testReport/
          modules C: hbase-server U: hbase-server
          Max memory used 175MB
          Powered by Apache Yetus 0.1.0 http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/276/console

          This message was automatically generated.

          Ted Yu added a comment -

          TestStochasticLoadBalancer failure was not related to the change - it has failed intermittently.

          Ted Yu added a comment -

          +1 on patch v5.

          stack added a comment -

          I believe there is an option to do #1 even right now. Can't HBase be configured to just use pread and never read?

          We want sequential reading when doing long scans (the purported HDFS I/O 'pipelining'). We want to be able to pick and choose dependent on read type (short scan or random get vs. streaming scan...).

          This issue, and a suggestion offlist by Duo Zhang, brings up the unfinished project https://issues.apache.org/jira/browse/HBASE-5979, which is the proper way to fix what is going on here (as well as doing a proper separation of long vs. short reads). It would be good to revive it. There is good stuff in the cited issue.

          Adding the below as a finally in a method named pickReaderVersion seems a bit odd... is pickReaderVersion the only place we read in the file trailer? That seems odd (not your issue, Ashish Singhi). You'd think we'd want to keep the trailer around in the reader.

          522   } finally {
          523     unbufferStream(fsdis);
          524   }
          525 }

          On commit, let's point to this issue as to why we are doing gymnastics in the unbufferStream method... and why the reflection.

          Is it odd adding this unbufferStream to HBase types when there is the interface CanUnbuffer up in HDFS? Should we have a local HBase equivalent... and put it on HFileBlock, HFileReader... Then the relation is more clear? Perhaps overkill?

          Why do you think the sequentialRead numbers are so different in your perf test above, Ashish Singhi? The extra setup after reading in the trailer?

          TestStochasticLoadBalancer failure was not related to the change - it has failed intermittently.

          Ted Yu Let me retry the patch. We need a clean build to commit... for any patch. No more '... it passes for me locally...'. It has to pass up here on Apache. If we can't get it to pass, nothing should get checked in until tests are fixed. Otherwise our test suite is for nought and the running of CI is just wasted energy at the DC.
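          A hedged sketch of the reflective call referred to above: look up unbuffer() at runtime so the code still compiles and runs against Hadoop versions whose streams predate CanUnbuffer. The helper name and structure are hypothetical; only the reflective lookup pattern is what the comment describes.

```java
import java.lang.reflect.Method;

public class ReflectiveUnbuffer {
    static boolean tryUnbuffer(Object stream) {
        try {
            // Resolve unbuffer() at runtime instead of compiling against it.
            Method m = stream.getClass().getMethod("unbuffer");
            m.setAccessible(true);  // tolerate non-public enclosing classes
            m.invoke(stream);
            return true;
        } catch (NoSuchMethodException e) {
            // Older Hadoop: no unbuffer(); the socket stays cached.
            return false;
        } catch (ReflectiveOperationException e) {
            System.err.println("unbuffer() invocation failed: " + e);
            return false;
        }
    }
}
```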

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 2m 56s master passed
          +1 compile 0m 52s master passed with JDK v1.8.0_66
          +1 compile 0m 41s master passed with JDK v1.7.0_91
          +1 checkstyle 4m 29s master passed
          +1 mvneclipse 0m 20s master passed
          -1 findbugs 2m 12s hbase-server in master has 1 extant Findbugs warnings.
          +1 javadoc 0m 41s master passed with JDK v1.8.0_66
          +1 javadoc 0m 38s master passed with JDK v1.7.0_91
          +1 mvninstall 0m 51s the patch passed
          +1 compile 0m 53s the patch passed with JDK v1.8.0_66
          +1 javac 0m 53s the patch passed
          +1 compile 0m 41s the patch passed with JDK v1.7.0_91
          +1 javac 0m 41s the patch passed
          +1 checkstyle 4m 11s the patch passed
          +1 mvneclipse 0m 22s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          -1 hadoopcheck 1m 7s Patch causes 24 errors with Hadoop v2.4.0.
          -1 hadoopcheck 2m 11s Patch causes 24 errors with Hadoop v2.4.1.
          -1 hadoopcheck 3m 16s Patch causes 24 errors with Hadoop v2.5.0.
          -1 hadoopcheck 4m 18s Patch causes 24 errors with Hadoop v2.5.1.
          -1 hadoopcheck 5m 22s Patch causes 24 errors with Hadoop v2.5.2.
          -1 hadoopcheck 6m 30s Patch causes 24 errors with Hadoop v2.6.1.
          -1 hadoopcheck 7m 35s Patch causes 24 errors with Hadoop v2.6.2.
          -1 hadoopcheck 8m 43s Patch causes 24 errors with Hadoop v2.6.3.
          +1 findbugs 2m 41s the patch passed
          +1 javadoc 0m 46s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 43s the patch passed with JDK v1.7.0_91
          -1 unit 125m 7s hbase-server in the patch failed with JDK v1.8.0_66.
          +1 unit 100m 12s hbase-server in the patch passed with JDK v1.7.0_91.
          +1 asflicense 0m 14s Patch does not generate ASF License warnings.
          261m 50s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.hbase.master.balancer.TestStochasticLoadBalancer
            hadoop.hbase.replication.TestPerTableCFReplication
          JDK v1.8.0_66 Timed out junit tests org.apache.hadoop.hbase.regionserver.wal.TestWALReplay
            org.apache.hadoop.hbase.regionserver.TestHRegion



          Subsystem Report/Notes
          Docker Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12784214/HBASE-9393.v5.patch
          JIRA Issue HBASE-9393
          Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile
          uname Linux 78db1953f664 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / d6b3d83
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/280/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/280/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_66.txt
          unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/280/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_66.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/280/testReport/
          modules C: hbase-server U: hbase-server
          Max memory used 410MB
          Powered by Apache Yetus 0.1.0 http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/280/console

          This message was automatically generated.

          Ted Yu added a comment -

          From https://builds.apache.org/job/PreCommit-HBASE-Build/280/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_66.txt :

          Caused by: java.lang.RuntimeException: Error while running command to get file permissions : ExitCodeException exitCode=127: /bin/ls: error while loading shared libraries: libattr.so.1: failed to map segment from shared object: Permission denied
          
          	at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
          	at org.apache.hadoop.util.Shell.run(Shell.java:456)
          
          stack added a comment -

          Retry

          Anoop Sam John added a comment -

          Adding the below as finally in a method named pickReaderVersion seems a bit odd... is pickReaderVersion only place we read in the file trailer? That seems odd (not your issue Ashish Singhi). You'd think we'd want to keep the trailer around in the reader.

          We also read the FFT (FixedFileTrailer) in the isHFileFormat() method. That is used as a check on an HFile which is being bulk loaded (LoadIncrementalHFiles).

          Actually, reading the FFT can be done as a pread; right now it is a seek and then a read. In this pickReaderVersion method we also create the HFileReader, which reads the root-level index blocks, bloom block, etc. Those are also done as seek + read (reading blocks), which is fine IMO.
          Maybe we should at least rename this pickReaderVersion method?
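          The seek + read vs pread distinction above can be sketched with the JDK alone. This is an illustrative example, not HBase's actual FixedFileTrailer code: `FileChannel.read(ByteBuffer, position)` is the JDK's positional read (the pread analogue), which reads the fixed-size trailer at the end of a file without moving the stream's current position. The `TRAILER_SIZE` constant and file layout here are hypothetical.

          ```java
          import java.io.IOException;
          import java.nio.ByteBuffer;
          import java.nio.channels.FileChannel;
          import java.nio.charset.StandardCharsets;
          import java.nio.file.Files;
          import java.nio.file.Path;
          import java.nio.file.StandardOpenOption;

          public class TrailerRead {
              static final int TRAILER_SIZE = 8; // hypothetical fixed trailer length

              // Read the trailer at the end of the file with positional reads,
              // leaving the channel's current position untouched (no seek).
              static byte[] preadTrailer(Path file) throws IOException {
                  try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
                      ByteBuffer buf = ByteBuffer.allocate(TRAILER_SIZE);
                      long offset = ch.size() - TRAILER_SIZE;
                      while (buf.hasRemaining()) {
                          // read(buf, pos) is a pread: it does not move ch.position()
                          if (ch.read(buf, offset + buf.position()) < 0) {
                              break;
                          }
                      }
                      return buf.array();
                  }
              }

              public static void main(String[] args) throws IOException {
                  Path f = Files.createTempFile("hfile", ".tmp");
                  Files.write(f, "body....TRAILER!".getBytes(StandardCharsets.UTF_8));
                  // prints "TRAILER!"
                  System.out.println(new String(preadTrailer(f), StandardCharsets.UTF_8));
                  Files.delete(f);
              }
          }
          ```
          
          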

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 3m 41s master passed
          +1 compile 0m 55s master passed with JDK v1.8.0_66
          +1 compile 0m 48s master passed with JDK v1.7.0_91
          +1 checkstyle 4m 59s master passed
          +1 mvneclipse 0m 22s master passed
          -1 findbugs 2m 36s hbase-server in master has 1 extant Findbugs warnings.
          +1 javadoc 0m 43s master passed with JDK v1.8.0_66
          +1 javadoc 0m 46s master passed with JDK v1.7.0_91
          +1 mvninstall 1m 3s the patch passed
          +1 compile 0m 52s the patch passed with JDK v1.8.0_66
          +1 javac 0m 52s the patch passed
          +1 compile 0m 48s the patch passed with JDK v1.7.0_91
          +1 javac 0m 48s the patch passed
          +1 checkstyle 4m 54s the patch passed
          +1 mvneclipse 0m 22s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          -1 hadoopcheck 1m 13s Patch causes 24 errors with Hadoop v2.4.0.
          -1 hadoopcheck 2m 22s Patch causes 24 errors with Hadoop v2.4.1.
          -1 hadoopcheck 3m 31s Patch causes 24 errors with Hadoop v2.5.0.
          -1 hadoopcheck 4m 41s Patch causes 24 errors with Hadoop v2.5.1.
          -1 hadoopcheck 5m 51s Patch causes 24 errors with Hadoop v2.5.2.
          -1 hadoopcheck 7m 1s Patch causes 24 errors with Hadoop v2.6.1.
          -1 hadoopcheck 8m 8s Patch causes 24 errors with Hadoop v2.6.2.
          -1 hadoopcheck 9m 20s Patch causes 24 errors with Hadoop v2.6.3.
          +1 findbugs 2m 47s the patch passed
          +1 javadoc 0m 42s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 46s the patch passed with JDK v1.7.0_91
          +1 unit 129m 14s hbase-server in the patch passed with JDK v1.8.0_66.
          +1 unit 119m 39s hbase-server in the patch passed with JDK v1.7.0_91.
          +1 asflicense 0m 22s Patch does not generate ASF License warnings.
          289m 45s



          Subsystem Report/Notes
          Docker Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-26
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12784305/HBASE-9393.v5.patch
          JIRA Issue HBASE-9393
          Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile
          uname Linux 4f294db27100 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / d6b3d83
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/282/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/282/testReport/
          modules C: hbase-server U: hbase-server
          Max memory used 177MB
          Powered by Apache Yetus 0.1.0 http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/282/console

          This message was automatically generated.

          Ashish Singhi added a comment -

          Thanks for the comments.
          Sorry for the delay in response, I was on holiday.

          Adding the below as finally in a method named pickReaderVersion seems a bit odd... is pickReaderVersion only place we read in the file trailer? That seems odd (not your issue Ashish Singhi). You'd think we'd want to keep the trailer around in the reader.

          Anoop Sam John has already replied for this. Thanks.

          Is it odd adding this unbufferStream to hbase types when there is the Interface CanUnbuffer up in hdfs? Should we have a local hbase equivalent... and put it on HFileBlock, HFileReader... Then the relation is more clear? Perhaps overkill?

          From the HBase side we do not have any control over the socket, so I don’t think we can do anything here apart from calling the unbuffer API on a stream that implements the CanUnbuffer interface. I also think this is not needed.

          May be we should at least rename this method pickReaderVersion ?

          Changed it to openReader as per the suggestion.

          Last QA run for v5 was clean. Updated patch addressing method rename comment.
          Thanks all again.
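          The shape of the fix discussed above can be sketched as follows. This is a stand-in sketch, not the actual patch: the `CanUnbuffer` interface and `TrackingStream` class here are local stand-ins for `org.apache.hadoop.fs.CanUnbuffer` and an HDFS input stream. The idea is that after reading the trailer, `unbuffer()` is called in a finally block so the underlying socket can be released instead of lingering in CLOSE_WAIT, even when reading throws.

          ```java
          public class UnbufferSketch {
              // Stand-in for org.apache.hadoop.fs.CanUnbuffer (hypothetical local copy).
              interface CanUnbuffer {
                  void unbuffer();
              }

              // Stand-in for an HDFS input stream that holds a socket open.
              static class TrackingStream implements CanUnbuffer {
                  boolean buffered = true;

                  @Override
                  public void unbuffer() {
                      buffered = false; // release buffers and the underlying socket
                  }
              }

              // Mirrors the pattern in the patch: whatever happens while reading
              // the trailer, unbuffer the stream in a finally block.
              static void readTrailerThenUnbuffer(Object in) {
                  try {
                      // ... read the file trailer from 'in' ...
                  } finally {
                      if (in instanceof CanUnbuffer) {
                          ((CanUnbuffer) in).unbuffer();
                      }
                  }
              }

              public static void main(String[] args) {
                  TrackingStream s = new TrackingStream();
                  readTrailerThenUnbuffer(s);
                  System.out.println("buffered=" + s.buffered); // prints "buffered=false"
              }
          }
          ```

          The instanceof guard matters because not every FSDataInputStream wrapper supports unbuffering; streams that do not implement the capability are simply left alone.
          
          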

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 2m 44s master passed
          +1 compile 0m 31s master passed with JDK v1.8.0_72
          +1 compile 0m 35s master passed with JDK v1.7.0_91
          +1 checkstyle 4m 1s master passed
          +1 mvneclipse 0m 17s master passed
          -1 findbugs 1m 50s hbase-server in master has 1 extant Findbugs warnings.
          +1 javadoc 0m 25s master passed with JDK v1.8.0_72
          +1 javadoc 0m 33s master passed with JDK v1.7.0_91
          +1 mvninstall 0m 45s the patch passed
          +1 compile 0m 30s the patch passed with JDK v1.8.0_72
          +1 javac 0m 30s the patch passed
          +1 compile 0m 34s the patch passed with JDK v1.7.0_91
          +1 javac 0m 34s the patch passed
          +1 checkstyle 4m 15s the patch passed
          +1 mvneclipse 0m 17s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          -1 hadoopcheck 0m 50s Patch causes 24 errors with Hadoop v2.4.0.
          -1 hadoopcheck 1m 41s Patch causes 24 errors with Hadoop v2.4.1.
          -1 hadoopcheck 2m 32s Patch causes 24 errors with Hadoop v2.5.0.
          -1 hadoopcheck 3m 23s Patch causes 24 errors with Hadoop v2.5.1.
          -1 hadoopcheck 4m 16s Patch causes 24 errors with Hadoop v2.5.2.
          -1 hadoopcheck 5m 11s Patch causes 24 errors with Hadoop v2.6.1.
          -1 hadoopcheck 6m 5s Patch causes 24 errors with Hadoop v2.6.2.
          -1 hadoopcheck 6m 59s Patch causes 24 errors with Hadoop v2.6.3.
          +1 findbugs 1m 57s the patch passed
          +1 javadoc 0m 25s the patch passed with JDK v1.8.0_72
          +1 javadoc 0m 31s the patch passed with JDK v1.7.0_91
          +1 unit 98m 1s hbase-server in the patch passed with JDK v1.8.0_72.
          -1 unit 103m 34s hbase-server in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 19s Patch does not generate ASF License warnings.
          232m 7s



          Subsystem Report/Notes
          Docker Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-28
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12784866/HBASE-9393.v6.patch
          JIRA Issue HBASE-9393
          Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile
          uname Linux e14576f3cba6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / 47c4147
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/333/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/333/artifact/patchprocess/patch-unit-hbase-server-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/333/testReport/
          modules C: hbase-server U: hbase-server
          Max memory used 174MB
          Powered by Apache Yetus 0.1.0 http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/333/console

          This message was automatically generated.

          Ashish Singhi added a comment -

          TestReplicationSmallTests failed due to some XML issue.

          Caused by: org.xml.sax.SAXParseException; systemId: file:///home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/hbase-server/target/surefire-reports/TEST-org.apache.hadoop.hbase.replication.TestReplicationSmallTests.xml; lineNumber: 424; columnNumber: 28; XML document structures must start and end within the same entity.
          
          Ashish Singhi added a comment -

          Retry...

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 3m 48s master passed
          +1 compile 1m 7s master passed with JDK v1.8.0_72
          +1 compile 0m 52s master passed with JDK v1.7.0_91
          +1 checkstyle 5m 15s master passed
          +1 mvneclipse 0m 23s master passed
          -1 findbugs 2m 51s hbase-server in master has 1 extant Findbugs warnings.
          +1 javadoc 0m 58s master passed with JDK v1.8.0_72
          +1 javadoc 0m 50s master passed with JDK v1.7.0_91
          +1 mvninstall 1m 7s the patch passed
          +1 compile 1m 10s the patch passed with JDK v1.8.0_72
          +1 javac 1m 10s the patch passed
          +1 compile 0m 50s the patch passed with JDK v1.7.0_91
          +1 javac 0m 50s the patch passed
          +1 checkstyle 5m 26s the patch passed
          +1 mvneclipse 0m 23s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          -1 hadoopcheck 1m 14s Patch causes 24 errors with Hadoop v2.4.0.
          -1 hadoopcheck 2m 26s Patch causes 24 errors with Hadoop v2.4.1.
          -1 hadoopcheck 3m 47s Patch causes 24 errors with Hadoop v2.5.0.
          -1 hadoopcheck 5m 4s Patch causes 24 errors with Hadoop v2.5.1.
          -1 hadoopcheck 6m 22s Patch causes 24 errors with Hadoop v2.5.2.
          -1 hadoopcheck 7m 40s Patch causes 24 errors with Hadoop v2.6.1.
          -1 hadoopcheck 9m 4s Patch causes 24 errors with Hadoop v2.6.2.
          -1 hadoopcheck 10m 21s Patch causes 24 errors with Hadoop v2.6.3.
          +1 findbugs 2m 52s the patch passed
          +1 javadoc 0m 54s the patch passed with JDK v1.8.0_72
          +1 javadoc 0m 53s the patch passed with JDK v1.7.0_91
          -1 unit 188m 44s hbase-server in the patch failed with JDK v1.8.0_72.
          -1 unit 97m 20s hbase-server in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 46s Patch does not generate ASF License warnings.
          331m 10s



          Reason Tests
          JDK v1.8.0_72 Failed junit tests hadoop.hbase.master.balancer.TestStochasticLoadBalancer
            hadoop.hbase.coprocessor.TestRegionServerCoprocessorExceptionWithAbort
          JDK v1.8.0_72 Timed out junit tests org.apache.hadoop.hbase.snapshot.TestMobSecureExportSnapshot
            org.apache.hadoop.hbase.snapshot.TestSecureExportSnapshot
            org.apache.hadoop.hbase.snapshot.TestMobExportSnapshot
          JDK v1.7.0_91 Failed junit tests hadoop.hbase.master.balancer.TestStochasticLoadBalancer
            hadoop.hbase.replication.TestReplicationSyncUpTool
            hadoop.hbase.mapreduce.TestRowCounter
          JDK v1.7.0_91 Timed out junit tests org.apache.hadoop.hbase.util.TestIdLock
            org.apache.hadoop.hbase.regionserver.TestRSKilledWhenInitializing
            org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles
            org.apache.hadoop.hbase.mapreduce.TestMultiTableSnapshotInputFormat
            org.apache.hadoop.hbase.mapreduce.TestImportTSVWithVisibilityLabels



          Subsystem Report/Notes
          Docker Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-28
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12784912/HBASE-9393.v6.patch
          JIRA Issue HBASE-9393
          Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile
          uname Linux bc8f25bff8a7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
          git revision master / 47c4147
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/338/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/338/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_72.txt
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/338/artifact/patchprocess/patch-unit-hbase-server-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/338/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_72.txt https://builds.apache.org/job/PreCommit-HBASE-Build/338/artifact/patchprocess/patch-unit-hbase-server-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/338/testReport/
          modules C: hbase-server U: hbase-server
          Max memory used 404MB
          Powered by Apache Yetus 0.1.0 http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/338/console

          This message was automatically generated.

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 hbaseanti 0m 1s Patch does not have any anti-patterns.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 2m 42s master passed
          +1 compile 0m 32s master passed with JDK v1.8.0_66
          +1 compile 0m 34s master passed with JDK v1.7.0_91
          +1 checkstyle 4m 19s master passed
          +1 mvneclipse 0m 16s master passed
          -1 findbugs 1m 51s hbase-server in master has 1 extant Findbugs warnings.
          +1 javadoc 0m 24s master passed with JDK v1.8.0_66
          +1 javadoc 0m 32s master passed with JDK v1.7.0_91
          +1 mvninstall 0m 45s the patch passed
          +1 compile 0m 31s the patch passed with JDK v1.8.0_66
          +1 javac 0m 31s the patch passed
          +1 compile 0m 34s the patch passed with JDK v1.7.0_91
          +1 javac 0m 34s the patch passed
          +1 checkstyle 3m 55s the patch passed
          +1 mvneclipse 0m 16s the patch passed
          +1 whitespace 0m 1s Patch has no whitespace issues.
          -1 hadoopcheck 0m 51s Patch causes 24 errors with Hadoop v2.4.0.
          -1 hadoopcheck 1m 42s Patch causes 24 errors with Hadoop v2.4.1.
          -1 hadoopcheck 2m 33s Patch causes 24 errors with Hadoop v2.5.0.
          -1 hadoopcheck 3m 23s Patch causes 24 errors with Hadoop v2.5.1.
          -1 hadoopcheck 4m 13s Patch causes 24 errors with Hadoop v2.5.2.
          -1 hadoopcheck 5m 4s Patch causes 24 errors with Hadoop v2.6.1.
          -1 hadoopcheck 5m 57s Patch causes 24 errors with Hadoop v2.6.2.
          -1 hadoopcheck 6m 48s Patch causes 24 errors with Hadoop v2.6.3.
          +1 findbugs 2m 7s the patch passed
          +1 javadoc 0m 32s the patch passed with JDK v1.8.0_66
          +1 javadoc 0m 34s the patch passed with JDK v1.7.0_91
          -1 unit 88m 33s hbase-server in the patch failed with JDK v1.8.0_66.
          +1 unit 88m 54s hbase-server in the patch passed with JDK v1.7.0_91.
          +1 asflicense 0m 19s Patch does not generate ASF License warnings.
          207m 56s



          Reason Tests
          JDK v1.8.0_66 Timed out junit tests org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient



          Subsystem Report/Notes
          Docker Client=1.9.1 Server=1.9.1 Image:yetus/hbase:date2016-01-29
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12785077/HBASE-9393.v6.patch
          JIRA Issue HBASE-9393
          Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile
          uname Linux d527aa941a0a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
          git revision master / b3b1ce9
          findbugs v3.0.0
          findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/344/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
          unit https://builds.apache.org/job/PreCommit-HBASE-Build/344/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_66.txt
          unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/344/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_66.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/344/testReport/
          modules C: hbase-server U: hbase-server
          Max memory used 418MB
          Powered by Apache Yetus 0.1.0 http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HBASE-Build/344/console

          This message was automatically generated.

          Ashish Singhi added a comment -

          The TestFlushSnapshotFromClient failure is not related to the patch. I ran it manually 3 times locally and was not able to reproduce it.

          -------------------------------------------------------
           T E S T S
          -------------------------------------------------------
          Running org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient
          Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 95.841 sec - in org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient
          
          Results :
          
          Tests run: 9, Failures: 0, Errors: 0, Skipped: 0
          
          [INFO]
          [INFO] --- maven-surefire-plugin:2.18.1:test (secondPartTestsExecution) @ hbase-server ---
          [INFO] Tests are skipped.
          [INFO] ------------------------------------------------------------------------
          [INFO] BUILD SUCCESS
          [INFO] ------------------------------------------------------------------------
          [INFO] Total time: 1:50.899s
          [INFO] Finished at: Fri Jan 29 14:07:38 GMT+05:30 2016
          [INFO] Final Memory: 36M/96M
          [INFO] ------------------------------------------------------------------------
          
          
          -------------------------------------------------------
           T E S T S
          -------------------------------------------------------
          Running org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient
          Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.399 sec - in org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient
          
          Results :
          
          Tests run: 9, Failures: 0, Errors: 0, Skipped: 0
          
          [INFO]
          [INFO] --- maven-surefire-plugin:2.18.1:test (secondPartTestsExecution) @ hbase-server ---
          [INFO] Tests are skipped.
          [INFO] ------------------------------------------------------------------------
          [INFO] BUILD SUCCESS
          [INFO] ------------------------------------------------------------------------
          [INFO] Total time: 1:48.177s
          [INFO] Finished at: Fri Jan 29 14:13:52 GMT+05:30 2016
          [INFO] Final Memory: 35M/89M
          [INFO] ------------------------------------------------------------------------
          
          
          -------------------------------------------------------
           T E S T S
          -------------------------------------------------------
          Running org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient
          Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 94.072 sec - in org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient
          
          Results :
          
          Tests run: 9, Failures: 0, Errors: 0, Skipped: 0
          
          [INFO]
          [INFO] --- maven-surefire-plugin:2.18.1:test (secondPartTestsExecution) @ hbase-server ---
          [INFO] Tests are skipped.
          [INFO] ------------------------------------------------------------------------
          [INFO] BUILD SUCCESS
          [INFO] ------------------------------------------------------------------------
          [INFO] Total time: 1:48.012s
          [INFO] Finished at: Fri Jan 29 14:16:28 GMT+05:30 2016
          [INFO] Final Memory: 36M/100M
          [INFO] ------------------------------------------------------------------------
          

          Stack, is the v6 patch ok to commit?
          Thanks.
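
          For context on the underlying bug, the failure mode in this issue is a client-side stream that is never closed after the remote end hangs up, leaving the socket stuck in CLOSE_WAIT. The general remedy the patches here pursue can be sketched as below. This is a hypothetical illustration using plain java.net sockets (the class and method names are invented for this sketch), not the actual HBASE-9393 patch:

          ```java
          import java.io.IOException;
          import java.net.Socket;

          public class SocketCloseSketch {
              // Hypothetical read helper. try-with-resources guarantees the socket
              // is closed on every exit path (normal return, IOException, dead peer),
              // so the local end acknowledges the peer's FIN instead of lingering
              // in CLOSE_WAIT until the process is restarted.
              public static int readFirstByte(Socket socket) throws IOException {
                  try (Socket s = socket) {
                      return s.getInputStream().read();
                  } // s.close() runs here whether the read succeeded or threw
              }
          }
          ```

          The fix pattern is the same regardless of whether the resource is a raw socket or a DFS input stream: the close must sit on the error path as well as the success path, otherwise each dead DataNode connection leaks one file descriptor.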

          Ashish Singhi added a comment -

          stack, any feedback? Thanks.

          ramkrishna.s.vasudevan added a comment -

          Stack - Do you have any feedback? We think it is good to go into 2.0 at least.



            People

            • Assignee: Ashish Singhi
            • Reporter: Avi Zrachya
            • Votes: 0
            • Watchers: 34