Hadoop HDFS / HDFS-6520

hdfs fsck -move passes invalid length value when creating BlockReader

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.4.0
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: None
    • Labels:
    • Target Version/s:

      Description

      I encountered an error when running fsck -move.
      My steps were as follows:
      1. Set up a pseudo cluster
      2. Copy a file to hdfs
      3. Corrupt a block of the file
      4. Run fsck to check:

      Connecting to namenode via http://localhost:50070
      FSCK started by hadoop (auth:SIMPLE) from /127.0.0.1 for path /user/hadoop at Wed Jun 11 15:58:38 CST 2014
      .
      /user/hadoop/fsck-test: CORRUPT blockpool BP-654596295-10.37.7.84-1402466764642 block blk_1073741825
      
      /user/hadoop/fsck-test: MISSING 1 blocks of total size 1048576 B.Status: CORRUPT
       Total size:    4104304 B
       Total dirs:    1
       Total files:   1
       Total symlinks:                0
       Total blocks (validated):      4 (avg. block size 1026076 B)
        ********************************
        CORRUPT FILES:        1
        MISSING BLOCKS:       1
        MISSING SIZE:         1048576 B
        CORRUPT BLOCKS:       1
        ********************************
       Minimally replicated blocks:   3 (75.0 %)
       Over-replicated blocks:        0 (0.0 %)
       Under-replicated blocks:       0 (0.0 %)
       Mis-replicated blocks:         0 (0.0 %)
       Default replication factor:    1
       Average block replication:     0.75
       Corrupt blocks:                1
       Missing replicas:              0 (0.0 %)
       Number of data-nodes:          1
       Number of racks:               1
      FSCK ended at Wed Jun 11 15:58:38 CST 2014 in 1 milliseconds
      
      
      The filesystem under path '/user/hadoop' is CORRUPT
      

      5. Run fsck -move to move the corrupted file to /lost+found; the following errors appear in the namenode log:

      2014-06-11 15:48:16,686 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: FSCK started by hadoop (auth:SIMPLE) from /127.0.0.1 for path /user/hadoop at Wed Jun 11 15:48:16 CST 2014
      2014-06-11 15:48:16,894 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 35 Total time for transactions(ms): 9 Number of transactions batched in Syncs: 0 Number of syncs: 25 SyncTimes(ms): 73
      2014-06-11 15:48:16,991 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Error reading block
      java.io.IOException: Expected empty end-of-read packet! Header: PacketHeader with packetLen=66048 header data: offsetInBlock: 65536
      seqno: 1
      lastPacketInBlock: false
      dataLen: 65536
      
              at org.apache.hadoop.hdfs.RemoteBlockReader2.readTrailingEmptyPacket(RemoteBlockReader2.java:259)
              at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:220)
              at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:138)
              at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlock(NamenodeFsck.java:649)
              at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(NamenodeFsck.java:543)
              at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.check(NamenodeFsck.java:460)
              at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.check(NamenodeFsck.java:324)
              at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.fsck(NamenodeFsck.java:233)
              at org.apache.hadoop.hdfs.server.namenode.FsckServlet$1.run(FsckServlet.java:67)
              at java.security.AccessController.doPrivileged(Native Method)
              at javax.security.auth.Subject.doAs(Subject.java:415)
              at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
              at org.apache.hadoop.hdfs.server.namenode.FsckServlet.doGet(FsckServlet.java:58)
              at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
              at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
              at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
              at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
              at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1192)
              at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
              at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
              at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
              at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
              at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
              at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
              at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
              at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
              at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
              at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
              at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
              at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
              at org.mortbay.jetty.Server.handle(Server.java:326)
              at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
              at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
              at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
              at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
              at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
              at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
              at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
      2014-06-11 15:48:16,992 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Fsck: could not copy block BP-654596295-10.37.7.84-1402466764642:blk_1073741825_1001 to /lost+found/user/hadoop/fsck-test
      java.lang.Exception: Could not copy block data for BP-654596295-10.37.7.84-1402466764642:blk_1073741825_1001
              at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlock(NamenodeFsck.java:664)
              at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(NamenodeFsck.java:543)
              at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.check(NamenodeFsck.java:460)
              at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.check(NamenodeFsck.java:324)
              at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.fsck(NamenodeFsck.java:233)
              at org.apache.hadoop.hdfs.server.namenode.FsckServlet$1.run(FsckServlet.java:67)
              at java.security.AccessController.doPrivileged(Native Method)
              at javax.security.auth.Subject.doAs(Subject.java:415)
              at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
              at org.apache.hadoop.hdfs.server.namenode.FsckServlet.doGet(FsckServlet.java:58)
              at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
              at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
              at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
              at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
              at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1192)
              at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
              at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
              at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
              at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
              at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
              at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
              at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
              at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
              at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
              at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
              at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
              at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
              at org.mortbay.jetty.Server.handle(Server.java:326)
              at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
              at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
              at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
              at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
              at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
              at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
              at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
      2014-06-11 15:48:16,994 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /lost+found/user/hadoop/fsck-test/0 is closed by DFSClient_NONMAPREDUCE_-774755866_14
      
      1. HDFS-6520.02.patch
        11 kB
        Xiao Chen
      2. HDFS-6520.01.patch
        8 kB
        Xiao Chen
      3. john.test.patch
        6 kB
        Xiao Chen
      4. HDFS-6520-partial.001.patch
        1 kB
        John Zhuge

        Activity

        jzhuge John Zhuge added a comment -

        Was able to gain more insight after enhancing the test case a little:

        1. Create a file of 4 blocks
        2. Corrupt the 3rd block
        3. Run fsck -move
          Expected fsck -move to move the 3 good blocks to lost+found, or at least the first 2 blocks, but it actually failed on all 4 blocks, with exceptions in namenode.NamenodeFsck.copyBlock.

        Got these exceptions on NN (CDH5.8.0):

        java.io.IOException: Expected empty end-of-read packet! Header: PacketHeader with packetLen=66048 header data: offsetInBlock: 65536
        java.lang.Exception: Could not copy block data for BP-628350968-172.26.21.70-1456784299581:blk_1073746439_5615
        java.io.IOException: Expected empty end-of-read packet! Header: PacketHeader with packetLen=66048 header data: offsetInBlock: 65536
        java.lang.Exception: Could not copy block data for BP-628350968-172.26.21.70-1456784299581:blk_1073746440_5616
        java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@4d8c4b02
        java.lang.Exception: Could not copy block data for BP-628350968-172.26.21.70-1456784299581:blk_1073746441_5617
        java.io.IOException: Expected empty end-of-read packet! Header: PacketHeader with packetLen=66048 header data: offsetInBlock: 65536
        java.lang.Exception: Could not copy block data for BP-628350968-172.26.21.70-1456784299581:blk_1073746442_5618
        java.io.IOException: fsck encountered internal errors!
        
        jzhuge John Zhuge added a comment -

        Wrote a unit test that reproduces the problem. Got a checksum exception in readNextPacket.

        jzhuge John Zhuge added a comment -

        In this test case, echo CORRUPT >/path/to/block is used to corrupt the block.
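The corruption method matters here: `echo CORRUPT >/path/to/block` truncates the block file and replaces its contents, so the checksum recomputed on read can no longer match the one stored at write time. A small illustration of that mismatch (plain Python, not Hadoop code; zlib's CRC32 stands in for the CRC32C that HDFS actually uses):

```python
import zlib

# What the block file held before and after `echo CORRUPT > /path/to/block`
# (the shell's `>` truncates, so the whole file becomes "CORRUPT\n").
original = b"some block data written by the client"
corrupted = b"CORRUPT\n"

# A reader recomputes the checksum over the bytes it receives and compares
# it to the stored value; any difference raises a ChecksumException in HDFS.
assert zlib.crc32(original) != zlib.crc32(corrupted)
```

The truncation also shortens the block on disk, which is consistent with the "Premature EOF" seen later for the corrupted block.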

        jzhuge John Zhuge added a comment -

        Added a partial patch with a unit test that reproduces the problem. Here is the test output:

        2016-04-01 13:07:00,319 INFO  namenode.NameNode (NamenodeFsck.java:<init>(203)) - pmap: ugi = jzhuge
        2016-04-01 13:07:00,320 INFO  namenode.NameNode (NamenodeFsck.java:<init>(203)) - pmap: path = /
        2016-04-01 13:07:00,320 INFO  namenode.NameNode (NamenodeFsck.java:<init>(203)) - pmap: move = 1
        2016-04-01 13:07:00,320 INFO  namenode.NameNode (NamenodeFsck.java:fsck(322)) - FSCK started by jzhuge (auth:SIMPLE) from /127.0.0.1 for path / at Fri Apr 01 13:07:00 PDT 2016
        2016-04-01 13:07:00,320 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditMessage(9575)) - allowed=true	ugi=jzhuge (auth:SIMPLE)	ip=/127.0.0.1	cmd=fsck	src=/	dst=null	perm=null	proto=rpc
        Connecting to namenode via http://localhost:50349
        2016-04-01 13:07:00,324 DEBUG security.UserGroupInformation (UserGroupInformation.java:logPrivilegedAction(1715)) - PrivilegedAction as:jzhuge (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
        2016-04-01 13:07:00,324 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditMessage(9575)) - allowed=true	ugi=jzhuge (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/lost+found	dst=null	perm=null	proto=rpc
        2016-04-01 13:07:00,325 DEBUG security.UserGroupInformation (UserGroupInformation.java:logPrivilegedAction(1715)) - PrivilegedAction as:jzhuge (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
        2016-04-01 13:07:00,326 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditMessage(9575)) - allowed=true	ugi=jzhuge (auth:SIMPLE)	ip=/127.0.0.1	cmd=mkdirs	src=/lost+found	dst=null	perm=jzhuge:supergroup:rwxr-xr-x	proto=rpc
        2016-04-01 13:07:00,328 DEBUG security.UserGroupInformation (UserGroupInformation.java:logPrivilegedAction(1715)) - PrivilegedAction as:jzhuge (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
        2016-04-01 13:07:00,329 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditMessage(9575)) - allowed=true	ugi=jzhuge (auth:SIMPLE)	ip=/127.0.0.1	cmd=create	src=/lost+found/srcdat/four/two/4870332656992363927/0	dst=null	perm=jzhuge:supergroup:rw-r--r--	proto=rpc
        2016-04-01 13:07:00,334 ERROR namenode.NameNode (NamenodeFsck.java:copyBlock(775)) - Error reading block
        org.apache.hadoop.fs.ChecksumException: Checksum error: /127.0.0.1:50355:BP-152418585-172.16.2.34-1459541215431:1073741827 at 0 exp: 341714375 got: 2114326294
        	at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:325)
        	at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:237)
        	at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:156)
        	at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlock(NamenodeFsck.java:766)
        	at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(NamenodeFsck.java:657)
        	at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.check(NamenodeFsck.java:574)
        	at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.check(NamenodeFsck.java:438)
        	at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.check(NamenodeFsck.java:438)
        	at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.check(NamenodeFsck.java:438)
        	at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.check(NamenodeFsck.java:438)
        	at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.fsck(NamenodeFsck.java:346)
        	at org.apache.hadoop.hdfs.server.namenode.FsckServlet$1.run(FsckServlet.java:67)
        	at java.security.AccessController.doPrivileged(Native Method)
        	at javax.security.auth.Subject.doAs(Subject.java:415)
        	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
        	at org.apache.hadoop.hdfs.server.namenode.FsckServlet.doGet(FsckServlet.java:58)
        	at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        	at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        	at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
        	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
        	at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1298)
        	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        	at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
        	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        	at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
        	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        	at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
        	at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        	at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
        	at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:767)
        	at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
        	at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        	at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        	at org.mortbay.jetty.Server.handle(Server.java:326)
        	at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
        	at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
        	at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
        	at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
        	at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
        	at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
        	at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
        
        xiaochen Xiao Chen added a comment -

        It seems John's test patch did not upload completely. Fortunately he handed me the patch as well, so here is his original test patch.
        Thanks John!

        xiaochen Xiao Chen added a comment -

        So the root cause is that fsck -move throws an exception for every good block. A sample exception is pasted in the description; the problematic method is RemoteBlockReader2#readTrailingEmptyPacket. Thanks Shengjun for reporting this jira with good details.

        The reason is that fsck sets the length to -1, but the RemoteBlockReader2 code does not appear to support that and considers the read finished after the first iteration. Since there is usually more than one buffer's worth of data to read, the above exception is seen.

        I think a quick fix would be to pass in the actual length when creating the block reader, as all other call sites do. Patch 1 implements this idea and polishes John's test case.
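The numbers in the logs make this failure mode concrete. Using the sizes reported above (1048576 B blocks from the fsck report, dataLen: 65536 from the exception header), a back-of-the-envelope check shows why a reader that stops after one iteration cannot see the empty trailer next:

```python
# Sizes taken from the logs above.
block_size = 1048576      # 1 MB block reported by fsck
packet_data_len = 65536   # dataLen: 65536 in the exception header

# A reader that knows the real block length keeps reading until all data
# packets are consumed, then expects one empty end-of-read packet.
packets_needed = block_size // packet_data_len
assert packets_needed == 16

# With length = -1 the reader treats the read as complete after the first
# packet, so the SECOND data packet arrives where the empty trailer was
# expected, producing "Expected empty end-of-read packet!".
```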

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 12m 29s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
        +1 mvninstall 6m 45s trunk passed
        +1 compile 0m 38s trunk passed with JDK v1.8.0_77
        +1 compile 0m 41s trunk passed with JDK v1.7.0_95
        +1 checkstyle 0m 23s trunk passed
        +1 mvnsite 0m 52s trunk passed
        +1 mvneclipse 0m 13s trunk passed
        +1 findbugs 1m 58s trunk passed
        +1 javadoc 1m 9s trunk passed with JDK v1.8.0_77
        +1 javadoc 1m 46s trunk passed with JDK v1.7.0_95
        +1 mvninstall 0m 47s the patch passed
        +1 compile 0m 34s the patch passed with JDK v1.8.0_77
        +1 javac 0m 34s the patch passed
        +1 compile 0m 38s the patch passed with JDK v1.7.0_95
        +1 javac 0m 38s the patch passed
        +1 checkstyle 0m 19s the patch passed
        +1 mvnsite 0m 48s the patch passed
        +1 mvneclipse 0m 11s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        +1 findbugs 2m 8s the patch passed
        +1 javadoc 1m 4s the patch passed with JDK v1.8.0_77
        +1 javadoc 1m 43s the patch passed with JDK v1.7.0_95
        -1 unit 56m 54s hadoop-hdfs in the patch failed with JDK v1.8.0_77.
        -1 unit 54m 32s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
        +1 asflicense 0m 21s Patch does not generate ASF License warnings.
        148m 56s



        Reason Tests
        JDK v1.8.0_77 Failed junit tests hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
          hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
        JDK v1.7.0_95 Failed junit tests hadoop.hdfs.TestHFlush
          hadoop.hdfs.TestRenameWhileOpen



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:fbe3e86
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12797179/HDFS-6520.01.patch
        JIRA Issue HDFS-6520
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux af16d0bb7eaf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 9ba1e5a
        Default Java 1.7.0_95
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-HDFS-Build/15076/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt
        unit https://builds.apache.org/job/PreCommit-HDFS-Build/15076/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
        unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/15076/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt https://builds.apache.org/job/PreCommit-HDFS-Build/15076/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
        JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/15076/testReport/
        modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
        Console output https://builds.apache.org/job/PreCommit-HDFS-Build/15076/console
        Powered by Apache Yetus 0.2.0 http://yetus.apache.org

        This message was automatically generated.

        cmccabe Colin P. McCabe added a comment -

        Thanks, Xiao Chen. It does look like the BlockReaders require a valid size, ever since their introduction in HDFS-2260. Can you add a check that length is non-negative in BlockReaderFactory? Also, can you get rid of this comment in BlockReaderFactory.java?

        /**
         * Number of bytes to read.  -1 indicates no limit.
         */
        private long length = -1;
        
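        The guard Colin asks for can be sketched as follows. This is a minimal, hypothetical illustration of rejecting a negative length at set time rather than documenting -1 as "no limit" — the class and method names only mirror BlockReaderFactory's builder style and are not the actual patch:

        ```java
        // Hypothetical sketch, not the real BlockReaderFactory: shows the
        // shape of a non-negative length precondition on a builder setter.
        public class LengthCheckSketch {
            private long length = -1;

            public LengthCheckSketch setLength(long length) {
                // Fail fast on invalid input instead of letting a negative
                // length propagate into the block reader.
                if (length < 0) {
                    throw new IllegalArgumentException(
                        "length must be non-negative, got " + length);
                }
                this.length = length;
                return this;
            }

            public long getLength() {
                return length;
            }
        }
        ```

        A caller such as fsck -move would then get an immediate IllegalArgumentException when it passes an invalid length, instead of a confusing failure deeper in the read path.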
        xiaochen Xiao Chen added a comment -

        Thanks Colin P. McCabe for the review! Good idea - we should actively detect and prevent this.
        I'm attaching patch 2 to address your comments.

        hadoopqa Hadoop QA added a comment -
        -1 overall



        Vote Subsystem Runtime Comment
        0 reexec 0m 11s Docker mode activated.
        +1 @author 0m 0s The patch does not contain any @author tags.
        +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
        0 mvndep 0m 17s Maven dependency ordering for branch
        +1 mvninstall 6m 37s trunk passed
        +1 compile 1m 13s trunk passed with JDK v1.8.0_77
        +1 compile 1m 20s trunk passed with JDK v1.7.0_95
        +1 checkstyle 0m 28s trunk passed
        +1 mvnsite 1m 25s trunk passed
        +1 mvneclipse 0m 25s trunk passed
        +1 findbugs 3m 39s trunk passed
        +1 javadoc 1m 27s trunk passed with JDK v1.8.0_77
        +1 javadoc 2m 12s trunk passed with JDK v1.7.0_95
        0 mvndep 0m 9s Maven dependency ordering for patch
        +1 mvninstall 1m 17s the patch passed
        +1 compile 1m 13s the patch passed with JDK v1.8.0_77
        +1 javac 1m 13s the patch passed
        +1 compile 1m 18s the patch passed with JDK v1.7.0_95
        +1 javac 1m 18s the patch passed
        +1 checkstyle 0m 25s the patch passed
        +1 mvnsite 1m 20s the patch passed
        +1 mvneclipse 0m 23s the patch passed
        +1 whitespace 0m 0s Patch has no whitespace issues.
        +1 findbugs 3m 59s the patch passed
        +1 javadoc 1m 21s the patch passed with JDK v1.8.0_77
        +1 javadoc 2m 8s the patch passed with JDK v1.7.0_95
        +1 unit 0m 51s hadoop-hdfs-client in the patch passed with JDK v1.8.0_77.
        +1 unit 56m 4s hadoop-hdfs in the patch passed with JDK v1.8.0_77.
        +1 unit 0m 57s hadoop-hdfs-client in the patch passed with JDK v1.7.0_95.
        -1 unit 53m 15s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
        +1 asflicense 0m 20s Patch does not generate ASF License warnings.
        146m 33s



        Reason Tests
        JDK v1.7.0_95 Failed junit tests hadoop.hdfs.server.datanode.TestFsDatasetCache



        Subsystem Report/Notes
        Docker Image:yetus/hadoop:fbe3e86
        JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12797209/HDFS-6520.02.patch
        JIRA Issue HDFS-6520
        Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
        uname Linux 838bbfc228b3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        Build tool maven
        Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
        git revision trunk / 21eb428
        Default Java 1.7.0_95
        Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
        findbugs v3.0.0
        unit https://builds.apache.org/job/PreCommit-HDFS-Build/15079/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
        unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/15079/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
        JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/15079/testReport/
        modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project
        Console output https://builds.apache.org/job/PreCommit-HDFS-Build/15079/console
        Powered by Apache Yetus 0.2.0 http://yetus.apache.org

        This message was automatically generated.

        xiaochen Xiao Chen added a comment -

        The test failure looks unrelated to the patch, and the test passed locally.

        cmccabe Colin P. McCabe added a comment -

        +1. Thanks, Xiao Chen.

        xiaochen Xiao Chen added a comment -

        Thanks very much Colin P. McCabe! Maybe we can update the title to s/hdfs fsck/hdfs fsck -move/ to be more specific?

        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-trunk-Commit #9569 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9569/)
        HDFS-6520. hdfs fsck passes invalid length value when creating (cmccabe: rev 188f65287d5b2f26a8862c88198f83ac59035016)

        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
        • hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
        • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
        • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java

          People

          • Assignee:
            xiaochen Xiao Chen
          • Reporter:
            xinshengjun Shengjun Xin
          • Votes: 0
          • Watchers: 8