Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.22.0
    • Fix Version/s: 0.22.0, 0.23.0
    • Component/s: test
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      TestLargeBlock has been failing for more than a week now on 0.22 and trunk with

      java.io.IOException: Premeture EOF from inputStream
      	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:118)
      	at org.apache.hadoop.hdfs.BlockReader.readChunk(BlockReader.java:275)
      
      Attachments

      1. HDFS-1523.patch (10 kB) - Konstantin Boudnik
      2. HDFS-1523.patch (9 kB) - Konstantin Boudnik


          Activity

          Konstantin Boudnik added a comment -

          Full stack trace:

          java.io.IOException: Premeture EOF from inputStream
          	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:118)
          	at org.apache.hadoop.hdfs.BlockReader.readChunk(BlockReader.java:275)
          	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:273)
          	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:225)
          	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:193)
          	at org.apache.hadoop.hdfs.BlockReader.read(BlockReader.java:117)
          	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:477)
          	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:528)
          	at java.io.DataInputStream.readFully(DataInputStream.java:178)
          	at org.apache.hadoop.hdfs.TestLargeBlock.checkFullFile(TestLargeBlock.java:142)
          	at org.apache.hadoop.hdfs.TestLargeBlock.runTest(TestLargeBlock.java:210)
          	at org.apache.hadoop.hdfs.TestLargeBlock.__CLR3_0_2y8q39ovnu(TestLargeBlock.java:171)
          	at org.apache.hadoop.hdfs.TestLargeBlock.testLargeBlockSize(TestLargeBlock.java:169)
          
          Konstantin Boudnik added a comment -

          I cannot reproduce this issue on my Ubuntu machine.

          Konstantin Boudnik added a comment -

          The message type issue is addressed.

          Konstantin Boudnik added a comment -

          Test fails every time on:

          • 32-bit Ubuntu (kernel 2.6.28-11)
          • 64-bit Ubuntu (kernel 2.6.28-18) (Apache Hudson hadoop8)

          Works fine on:

          • 64-bit Ubuntu (kernel 2.6.35-23)

          Is it a kernel problem?

          Konstantin Boudnik added a comment -

          Here's the scenario of the test:

          • read the file in chunks of 134217728 bytes (128 MB); see the sketch below
          • after the last full read there are 513 bytes left to read
          • 512 of those bytes have to be read from the first block
          • 1 byte is going to be read from the last (second) block
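
          For orientation, here is a minimal sketch of that read pattern (class name and file path are made up for illustration, this is not the actual TestLargeBlock code): the file is 2147484161 bytes, its first block is 2147484160 bytes and its last block is 1 byte, so after sixteen full 134217728-byte reads exactly 513 bytes remain, straddling the block boundary. The final readFully() is where the failing runs hit the "Premature EOF" exception shown in the trace above.

          // Hedged sketch of the test's read loop; names and path are assumptions.
          import java.io.IOException;

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FSDataInputStream;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;

          public class LargeBlockReadSketch {
            private static final int CHUNK = 134217728;       // 128 MB, same chunk size as the test
            private static final long FILE_LEN = 2147484161L; // 2147484160-byte block + 1 byte

            public static void main(String[] args) throws IOException {
              FileSystem fs = FileSystem.get(new Configuration());
              FSDataInputStream in = fs.open(new Path("/tmp/2147484160.dat")); // hypothetical path
              try {
                byte[] buf = new byte[CHUNK];
                long remaining = FILE_LEN;
                while (remaining > 0) {
                  int toRead = (int) Math.min(CHUNK, remaining); // 513 on the last iteration
                  in.readFully(buf, 0, toRead);                  // DataInputStream.readFully, as in the trace
                  remaining -= toRead;
                }
              } finally {
                in.close();
              }
            }
          }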

          When the test passes, the call to FSNamesystem.getBlockLocationsInternal made before reading the last 513 bytes returns the last block of the file (size=1):

          2010-11-30 19:44:49,439 DEBUG namenode.FSNamesystem (FSNamesystem.java:getBlockLocationsInternal(866)) - blocks = [blk_-6779333650185181528_1001, blk_-3599865432887782445_1001]
          2010-11-30 19:44:49,440 DEBUG namenode.FSNamesystem (FSNamesystem.java:getBlockLocationsInternal(881)) - last = blk_-3599865432887782445_1001
          2010-11-30 19:44:49,457 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(148)) - ugi=cos       ip=/127.0.0.1   cmd=open        src=/home/cos/work/H0.23/git/hdfs/build/test/data/2147484160.dat        dst=null        perm=null
          2010-11-30 19:44:49,459 DEBUG hdfs.DFSClient (DFSInputStream.java:openInfo(113)) - newInfo = LocatedBlocks{
            fileLength=2147484161
            underConstruction=false
            blocks=[LocatedBlock{blk_-6779333650185181528_1001; getBlockSize()=2147484160; corrupt=false; offset=0; locs=[127.0.0.1:35608]}]
            lastLocatedBlock=LocatedBlock{blk_-3599865432887782445_1001; getBlockSize()=1; corrupt=false; offset=2147484160; locs=[127.0.0.1:35608]}
            isLastBlockComplete=true}
          ...
          2010-11-30 19:45:23,880 INFO  DataNode.clienttrace (BlockSender.java:sendBlock(491)) - src: /127.0.0.1:35608, dest: /127.0.0.1:51640, bytes: 2164261380, op: HDFS_READ, cliID: DFSClient_1273505070, offset: 0, srvID: DS-212336177-192.168.102.126-35608-1291175019763, blockid: blk_-6779333650185181528_1001, duration: 34361753463
          2010-11-30 19:45:24,030 DEBUG namenode.FSNamesystem (FSNamesystem.java:getBlockLocationsInternal(866)) - blocks = [blk_-6779333650185181528_1001, blk_-3599865432887782445_1001]
          2010-11-30 19:45:24,030 DEBUG namenode.FSNamesystem (FSNamesystem.java:getBlockLocationsInternal(881)) - last = blk_-3599865432887782445_1001
          2010-11-30 19:45:24,031 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(148)) - ugi=cos       ip=/127.0.0.1   cmd=open        src=/home/cos/work/H0.23/git/hdfs/build/test/data/2147484160.dat        dst=null        perm=null
          2010-11-30 19:45:24,032 DEBUG datanode.DataNode (DataXceiver.java:<init>(86)) - Number of active connections is: 2
          2010-11-30 19:45:24,099 DEBUG datanode.DataNode (DataXceiver.java:run(135)) - DatanodeRegistration(127.0.0.1:35608, storageID=DS-212336177-192.168.102.126-35608-1291175019763, infoPort=46218, ipcPort=38099):Number of active connections is: 3
          2010-11-30 19:45:24,099 DEBUG datanode.DataNode (BlockSender.java:<init>(140)) - block=blk_-3599865432887782445_1001, replica=FinalizedReplica, blk_-3599865432887782445_1001, FINALIZED
            getNumBytes()     = 1
            getBytesOnDisk()  = 1
            getVisibleLength()= 1
            getVolume()       = /home/cos/work/H0.23/git/hdfs/build/test/data/dfs/data/data2/current/finalized
            getBlockFile()    = /home/cos/work/H0.23/git/hdfs/build/test/data/dfs/data/data2/current/finalized/blk_-3599865432887782445
            unlinked=false
          2010-11-30 19:45:24,101 DEBUG datanode.DataNode (BlockSender.java:<init>(231)) - replica=FinalizedReplica, blk_-3599865432887782445_1001, FINALIZED
            getNumBytes()     = 1
            getBytesOnDisk()  = 1
            getVisibleLength()= 1
            getVolume()       = /home/cos/work/H0.23/git/hdfs/build/test/data/dfs/data/data2/current/finalized
            getBlockFile()    = /home/cos/work/H0.23/git/hdfs/build/test/data/dfs/data/data2/current/finalized/blk_-3599865432887782445
            unlinked=false
          2010-11-30 19:45:24,103 INFO  DataNode.clienttrace (BlockSender.java:sendBlock(491)) - src: /127.0.0.1:35608, dest: /127.0.0.1:51644, bytes: 5, op: HDFS_READ, cliID: DFSClient_1273505070, offset: 0, srvID: DS-212336177-192.168.102.126-35608-1291175019763, blockid: blk_-3599865432887782445_1001, duration: 1854472
          

          In case of failure:

          2010-11-30 19:35:49,426 DEBUG namenode.FSNamesystem (FSNamesystem.java:getBlockLocationsInternal(866)) - blocks = [blk_1170274882140601397_1001, blk_2891916181488413346_1001]
          2010-11-30 19:35:49,426 DEBUG namenode.FSNamesystem (FSNamesystem.java:getBlockLocationsInternal(881)) - last = blk_2891916181488413346_1001
          2010-11-30 19:35:49,427 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(148)) - ugi=cos       ip=/127.0.0.1   cmd=open        src=/home/cos/work/hadoop/git/hdfs/build/test/data/2147484160.dat       dst=null        perm=null
          2010-11-30 19:35:49,428 DEBUG hdfs.DFSClient (DFSInputStream.java:openInfo(113)) - newInfo = LocatedBlocks{
            fileLength=2147484161
            underConstruction=false
            blocks=[LocatedBlock{blk_1170274882140601397_1001; getBlockSize()=2147484160; corrupt=false; offset=0; locs=[127.0.0.1:35644]}]
            lastLocatedBlock=LocatedBlock{blk_2891916181488413346_1001; getBlockSize()=1; corrupt=false; offset=2147484160; locs=[127.0.0.1:35644]}
            isLastBlockComplete=true}
          ...
          2010-11-30 19:36:16,761 INFO  DataNode.clienttrace (BlockSender.java:sendBlock(491)) - src: /127.0.0.1:35644, dest: /127.0.0.1:52290, bytes: 2164194816, op: HDFS_READ, cliID: DFSClient_635470834, offset: 0, srvID: DS-514949605-127.0.0.1-35644-1291174495289, blockid: blk_1170274882140601397_1001, duration: 27320364567
          2010-11-30 19:36:16,761 DEBUG datanode.DataNode (DataXceiver.java:run(135)) - DatanodeRegistration(127.0.0.1:35644, storageID=DS-514949605-127.0.0.1-35644-1291174495289, infoPort=56924, ipcPort=60439):Number of active connections is: 2
          2010-11-30 19:36:16,794 DEBUG datanode.DataNode (DataXceiver.java:<init>(86)) - Number of active connections is: 1
          2010-11-30 19:36:16,794 DEBUG datanode.DataNode (BlockSender.java:<init>(140)) - block=blk_1170274882140601397_1001, replica=FinalizedReplica, blk_1170274882140601397_1001, FINALIZED
            getNumBytes()     = 2147484160
            getBytesOnDisk()  = 2147484160
            getVisibleLength()= 2147484160
            getVolume()       = /home/cos/work/hadoop/git/hdfs/build/test/data/dfs/data/data1/current/finalized
            getBlockFile()    = /home/cos/work/hadoop/git/hdfs/build/test/data/dfs/data/data1/current/finalized/blk_1170274882140601397
            unlinked=false
          2010-11-30 19:36:16,795 DEBUG datanode.DataNode (BlockSender.java:<init>(231)) - replica=FinalizedReplica, blk_1170274882140601397_1001, FINALIZED
            getNumBytes()     = 2147484160
            getBytesOnDisk()  = 2147484160
            getVisibleLength()= 2147484160
            getVolume()       = /home/cos/work/hadoop/git/hdfs/build/test/data/dfs/data/data1/current/finalized
            getBlockFile()    = /home/cos/work/hadoop/git/hdfs/build/test/data/dfs/data/data1/current/finalized/blk_1170274882140601397
            unlinked=false
          2010-11-30 19:36:17,276 INFO  DataNode.clienttrace (BlockSender.java:sendBlock(491)) - src: /127.0.0.1:35644, dest: /127.0.0.1:52296, bytes: 135200256, op: HDFS_READ, cliID: DFSClient_635470834, offset: 2013265920, srvID: DS-514949605-127.0.0.1-35644-1291174495289, blockid: blk_1170274882140601397_1001, duration: 480762241
          2010-11-30 19:36:17,276 DEBUG datanode.DataNode (DataXceiver.java:run(135)) - DatanodeRegistration(127.0.0.1:35644, storageID=DS-514949605-127.0.0.1-35644-1291174495289, infoPort=56924, ipcPort=60439):Number of active connections is: 2
          2010-11-30 19:36:17,290 WARN  hdfs.DFSClient (DFSInputStream.java:readBuffer(486)) - Exception while reading from blk_1170274882140601397_1001 of /home/cos/work/hadoop/git/hdfs/build/test/data/2147484160.dat from 127.0.0.1:35644: java.io.IOException: Premature EOF from inputStream
                  at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:118)
                  at org.apache.hadoop.hdfs.BlockReader.readChunk(BlockReader.java:275)
          

          So it seems the test fails because the wrong block is being read: the failing run re-reads part of the first block instead of the 1-byte last block.

          Konstantin Boudnik added a comment -

          Another look into the failure log shows that it has always been failing on a 32-bit VM only. In that case the attached patch fixes the situation by simply skipping the run under 32-bit VMs.

          I hope I am not hiding some nasty problem in HDFS by this. Dhruba, it's your baby - please review the issue.

          Also, the patch fixes JUnit v3 issues, removes commented-out code, and replaces System.out with proper logging.

          Patrick Kling added a comment -

          If the 32-bit kernel doesn't have large file support enabled, this could be a problem with the 2 GB file size limit in the underlying filesystem.

          Konstantin Boudnik added a comment -

          If the 32 bit kernel doesn't have large file support

          That was my first thought. However, as you can see in my earlier comment, it is also failing on a 64-bit kernel with a 32-bit VM.

          Patrick Kling added a comment -

          +1 Your patch looks good, thanks for cleaning up the code too.

          After some initial debugging, it seems the problem is somewhere in the read pipeline. I think it's reasonable to disable this test for 32-bit JVMs while we look for the revision that caused this problem. Once we find it, we can fix it in a separate JIRA.

          Konstantin Boudnik added a comment -

          I am going to commit this after the usual validation unless I hear otherwise.
          I have already opened a separate JIRA, HDFS-1525, to track the 32-bit VM issue.

          Konstantin Boudnik added a comment -

          Apparently the changes in build.xml and the log4j configs are junk.

          Konstantin Boudnik added a comment -

          -1 overall.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          -1 release audit. The applied patch generated 97 release audit warnings (more than the trunk's current 1 warnings).

          +1 system test framework. The patch passed system test framework compile.

          The issue with the 97 release audit warnings is already known and unrelated.

          Konstantin Boudnik added a comment -

          I have just committed this to 0.22 and trunk

          Suresh Srinivas added a comment -

          Cos, is there any change in this patch that actually fixes a problem, or are these just formatting and JUnit-related changes? If it is just formatting changes, it would be appropriate to change the title of the JIRA to say so. Opening a separate JIRA for the formatting changes and leaving this JIRA to track the issue (given all the discussion that has happened) would have been even better.

          BTW, I noticed that this commit is logged as HDFS-1534 in both the commit log and CHANGES.txt. That should be fixed.

          Konstantin Boudnik added a comment -

          The essence of the change is here:

          +  private static final String ALLOWED_VM = "64";
          ...
          +    assumeTrue(ALLOWED_VM.equals(System.getProperty("sun.arch.data.model")));
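
          For context, here is a minimal sketch of how such a guard reads in a JUnit 4 test; the class name and the placement in a @Before method are assumptions rather than the actual TestLargeBlock layout. When the property is not "64", assumeTrue() raises an assumption failure and JUnit reports the test as skipped rather than failed.

          // Hedged illustration only; not the committed patch. assumeTrue() comes
          // from org.junit.Assume and turns a failed assumption into a skipped test.
          import static org.junit.Assume.assumeTrue;

          import org.junit.Before;
          import org.junit.Test;

          public class LargeBlockGuardSketch {
            private static final String ALLOWED_VM = "64";

            @Before
            public void assume64BitVm() {
              // "sun.arch.data.model" reports "32" or "64" on Sun/Oracle JVMs
              assumeTrue(ALLOWED_VM.equals(System.getProperty("sun.arch.data.model")));
            }

            @Test
            public void testLargeBlockSize() {
              // runs only when the guard above passes, i.e. on 64-bit VMs
            }
          }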
          

          I will fix the JIRA number - my fingers must've slipped. Thanks for noticing.

          Konstantin Boudnik added a comment -

          I have fixed the wrong JIRA number in CHANGES.txt and in the svn history by reverting and reapplying the initial commit. Thanks.

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #643 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/643/)

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-22-branch #35 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-22-branch/35/)


            People

             • Assignee: Konstantin Boudnik
             • Reporter: Konstantin Boudnik
             • Votes: 0
             • Watchers: 3
