Hadoop Common · HADOOP-9566

Performing direct read using libhdfs sometimes raises SIGPIPE (which in turn throws SIGABRT) causing client crashes

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.4-alpha
    • Fix Version/s: 2.1.0-beta
    • Component/s: native
    • Labels:
      None
    • Target Version/s:
    • Hadoop Flags:
      Reviewed

      Description

      Reading using libhdfs sometimes raises SIGPIPE (which in turn throws SIGABRT from JVM_handle_linux_signal). This can lead to crashes in the client application. It would be nice if libhdfs handled this signal internally.

      Attachments

      1. HDFS-4831.001.patch (2 kB) by Colin Patrick McCabe
      2. HDFS-4831.003.patch (3 kB) by Colin Patrick McCabe

        Activity

        Lenni Kuff created issue -
        Colin Patrick McCabe added a comment -

        Logs and stack trace from a libhdfs run:

        13/05/16 08:37:01 DEBUG hdfs.BlockReaderLocal: putting FileInputStream for /test-warehouse/tpch.lineitem/lineitem.tbl back into FileInputStreamCache
        [Thread 0x7fffcfc88700 (LWP 2810) exited]
        13/05/16 08:37:01 DEBUG ipc.Client: IPC Client (1002500837) connection to localhost/127.0.0.1:20500 from lskuff sending #9
        13/05/16 08:37:01 DEBUG ipc.Client: IPC Client (1002500837) connection to localhost/127.0.0.1:20500 from lskuff got value #9
        13/05/16 08:37:01 DEBUG ipc.ProtobufRpcEngine: Call: getBlockLocations took 2ms
        13/05/16 08:37:01 DEBUG hdfs.DFSClient: newInfo = LocatedBlocks{
          fileLength=753862072
          underConstruction=false
          blocks=[LocatedBlock{BP-866712037-127.0.0.1-1366931725003:blk_8973605789866450572_6252; getBlockSize()=67108864; corrupt=false; offset=0; locs=[127.0.0.1:47890, 127.0.0.1:60135, 127.0.0.1:47287]}, LocatedBlock{BP-866712037-127.0.0.1-1366931725003:blk_6529746879505750363_6253; getBlockSize()=67108864; corrupt=false; offset=67108864; locs=[127.0.0.1:47890, 127.0.0.1:60135, 127.0.0.1:47287]}, LocatedBlock{BP-866712037-127.0.0.1-1366931725003:blk_7137931742725996903_6254; getBlockSize()=67108864; corrupt=false; offset=134217728; locs=[127.0.0.1:47890, 127.0.0.1:60135, 127.0.0.1:47287]}, LocatedBlock{BP-866712037-127.0.0.1-1366931725003:blk_2512286346981490115_6255; getBlockSize()=67108864; corrupt=false; offset=201326592; locs=[127.0.0.1:47890, 127.0.0.1:60135, 127.0.0.1:47287]}, LocatedBlock{BP-866712037-127.0.0.1-1366931725003:blk_-4910107063321244927_6256; getBlockSize()=67108864; corrupt=false; offset=268435456; locs=[127.0.0.1:47890, 127.0.0.1:60135, 127.0.0.1:47287]}, LocatedBlock{BP-866712037-127.0.0.1-1366931725003:blk_2541626567081697462_6257; getBlockSize()=67108864; corrupt=false; offset=335544320; locs=[127.0.0.1:47890, 127.0.0.1:60135, 127.0.0.1:47287]}, LocatedBlock{BP-866712037-127.0.0.1-1366931725003:blk_3773253961084852594_6258; getBlockSize()=67108864; corrupt=false; offset=402653184; locs=[127.0.0.1:47890, 127.0.0.1:60135, 127.0.0.1:47287]}, LocatedBlock{BP-866712037-127.0.0.1-1366931725003:blk_9156075801845636610_6259; getBlockSize()=67108864; corrupt=false; offset=469762048; locs=[127.0.0.1:47890, 127.0.0.1:60135, 127.0.0.1:47287]}, LocatedBlock{BP-866712037-127.0.0.1-1366931725003:blk_-4734840374398159095_6260; getBlockSize()=67108864; corrupt=false; offset=536870912; locs=[127.0.0.1:47890, 127.0.0.1:60135, 127.0.0.1:47287]}, LocatedBlock{BP-866712037-127.0.0.1-1366931725003:blk_7292053749276395282_6261; getBlockSize()=67108864; corrupt=false; offset=603979776; locs=[127.0.0.1:47890, 127.0.0.1:60135, 127.0.0.1:47287]}]
          lastLocatedBlock=LocatedBlock{BP-866712037-127.0.0.1-1366931725003:blk_-1379632012241044332_6263; getBlockSize()=15664568; corrupt=false; offset=738197504; locs=[127.0.0.1:47890, 127.0.0.1:60135, 127.0.0.1:47287]}
          isLastBlockComplete=true}
        13/05/16 08:37:01 DEBUG hdfs.DFSClient: Connecting to datanode 127.0.0.1:47890
        
        Program received signal SIGPIPE, Broken pipe.
        [Switching to Thread 0x7fffebd58700 (LWP 2592)]
        0x00007ffff79b2ccd in write () from /lib/x86_64-linux-gnu/libpthread.so.0
        (gdb) bt
        #0  0x00007ffff79b2ccd in write () from /lib/x86_64-linux-gnu/libpthread.so.0
        #1  0x00007fffebf7a4ce in write_fully (env=0x32e79d0, fd=179, buf=0x7fffebd54a20 "", amt=76)
            at hadoop/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c:586
        #2  0x00007fffebf7a623 in Java_org_apache_hadoop_net_unix_DomainSocket_writeArray0 (env=0x32e79d0, 
            clazz=<optimized out>, fd=179, b=0x7fffebd56b08, offset=0, length=76)
            at hadoop/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c:902
        #3  0x00007fffefb46eee in ?? ()
        #4  0x0000000604d1a794 in ?? ()
        #5  0x00007fffebd56ac8 in ?? ()
        #6  0x0000000604d1b2c8 in ?? ()
        #7  0x0000000000000000 in ?? ()
        
        Colin Patrick McCabe added a comment -

        It's interesting that this is coming up only for libhdfs users. My guess is that when you launch a standalone JVM, it sets up a signal handler that ignores SIGPIPE, but JNI code does not get this same benefit.

        One way to fix this is to modify the program that uses libhdfs to make this call before initializing JNI:

        signal(SIGPIPE, SIG_IGN);
        

        This ignores the SIGPIPE signal whenever it happens. When you initialize JNI, it creates its own signal handlers, which will call the previously installed signal handler if it was explicitly set (i.e., is not the default).

        However, this is kind of a hack. We shouldn't expect libhdfs users to have to jump through these kinds of hoops. The best fix is to use MSG_NOSIGNAL or MSG_NOSIGPIPE to avoid generating these signals in the first place.
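
        For illustration only (this is not the attached patch, and the helper name write_nosigpipe is hypothetical): on Linux, replacing write() with send() plus the MSG_NOSIGNAL flag makes a write to a closed peer fail with errno == EPIPE instead of delivering SIGPIPE.

        #include <errno.h>
        #include <sys/socket.h>
        #include <sys/types.h>

        /* Hypothetical sketch, not the committed fix: write to a connected
         * socket without risking SIGPIPE.  With MSG_NOSIGNAL (Linux-specific),
         * a write to a peer that has closed the connection returns -1 with
         * errno == EPIPE rather than raising the signal. */
        static ssize_t write_nosigpipe(int fd, const void *buf, size_t amt)
        {
            return send(fd, buf, amt, MSG_NOSIGNAL);
        }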

        Colin Patrick McCabe made changes -
        Field Original Value New Value
        Attachment HDFS-4831.001.patch [ 12583542 ]
        Colin Patrick McCabe made changes -
        Status Open [ 1 ] Patch Available [ 10002 ]
        Colin Patrick McCabe added a comment -

        I took the liberty of removing #define _GNU_SOURCE, to squash this compiler warning:

        /home/cmccabe/hadoop2/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c:19:0: warning: "_GNU_SOURCE" redefined [enabled by default]
        <command-line>:0:0: note: this is the location of the previous definition
        

        We don't need to define this file-by-file since it's a compiler flag now.
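
        For context, a sketch of the usual alternative (not what the patch does, which is simply to delete the per-file define now that -D_GNU_SOURCE comes from the build flags): guarding the macro would also silence the redefinition warning.

        /* Alternative sketch: only define _GNU_SOURCE if the compiler
         * command line (-D_GNU_SOURCE) has not already defined it. */
        #ifndef _GNU_SOURCE
        #define _GNU_SOURCE
        #endif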

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12583542/HDFS-4831.001.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4405//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4405//console

        This message is automatically generated.

        Colin Patrick McCabe added a comment -

        I did the compile on FreeBSD 9.1 and spotted that it needed to include string.h, and also that MSG_NOSIGPIPE should be SO_NOSIGPIPE.
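
        A hedged sketch of the portable pattern implied here (illustrative; the function name and structure are assumptions, not the patch itself): Linux exposes the per-call MSG_NOSIGNAL send flag, while FreeBSD and macOS instead provide the per-socket SO_NOSIGPIPE option.

        #include <string.h>       /* the header the FreeBSD build was missing */
        #include <sys/socket.h>
        #include <sys/types.h>

        /* Illustrative only: send without generating SIGPIPE on Linux or BSD. */
        static ssize_t send_no_sigpipe(int fd, const void *buf, size_t amt)
        {
        #if defined(MSG_NOSIGNAL)            /* Linux: per-call flag */
            return send(fd, buf, amt, MSG_NOSIGNAL);
        #elif defined(SO_NOSIGPIPE)          /* FreeBSD/macOS: per-socket option */
            int one = 1;
            setsockopt(fd, SOL_SOCKET, SO_NOSIGPIPE, &one, sizeof(one));
            return send(fd, buf, amt, 0);
        #else
            return send(fd, buf, amt, 0);    /* fallback: SIGPIPE still possible */
        #endif
        }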

        Colin Patrick McCabe made changes -
        Attachment HDFS-4831.003.patch [ 12583547 ]
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12583547/HDFS-4831.003.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4406//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4406//console

        This message is automatically generated.

        Lenni Kuff added a comment -

        This patch appears to fix my use case. I applied HDFS-4831.003.patch and rebuilt the native libraries (hadoop/hdfs). My client application is no longer crashing after applying this fix. I then changed back to the old binaries (the ones without this fix) and my client app started crashing again.

        Aaron T. Myers added a comment -

        Good find, and thanks a lot for testing out the fix, Lenni.

        +1, the patch looks good to me. I don't think tests are necessary for this change since the issue is so difficult to reproduce. I also tested this by running the following, since I'm not sure if test-patch runs with native support:

        $ mvn -Pnative clean test -Dtest=TestParallelShortCircuitReadUnCached,TestParallelShortCircuitLegacyRead,TestParallelShortCircuitReadNoChecksum,TestShortCircuitLocalRead,TestParallelShortCircuitRead
        

        All the tests passed as expected.

        I'm going to commit this momentarily.

        Colin Patrick McCabe added a comment -

        Thanks, ATM. test-patch does run with native support, although at the moment it builds 32-bit, which is a little non-standard, and it doesn't test fuse_dfs.

        Aaron T. Myers made changes -
        Project Hadoop HDFS [ 12310942 ] Hadoop Common [ 12310240 ]
        Key HDFS-4831 HADOOP-9566
        Affects Version/s 2.0.4-alpha [ 12324135 ]
        Affects Version/s 2.0.4-alpha [ 12324136 ]
        Component/s native [ 12312070 ]
        Component/s libhdfs [ 12313126 ]
        Aaron T. Myers added a comment -

        Good to know, Colin. Thanks for that.

        Moved this JIRA to Common since that's where the change is.

        Aaron T. Myers added a comment -

        I've just committed this to trunk and branch-2.

        Thanks a lot for the contribution, Colin. And thanks a lot to Lenni for running those tests as well.

        Aaron T. Myers made changes -
        Status Patch Available [ 10002 ] Resolved [ 5 ]
        Hadoop Flags Reviewed [ 10343 ]
        Target Version/s 2.0.5-beta [ 12324030 ]
        Assignee Colin Patrick McCabe [ cmccabe ]
        Fix Version/s 2.0.5-beta [ 12324030 ]
        Resolution Fixed [ 1 ]
        Hudson added a comment -

        Integrated in Hadoop-trunk-Commit #3762 (See https://builds.apache.org/job/Hadoop-trunk-Commit/3762/)
        HADOOP-9566. Performing direct read using libhdfs sometimes raises SIGPIPE (which in turn throws SIGABRT) causing client crashes. Contributed by Colin Patrick McCabe. (Revision 1483612)

        Result = SUCCESS
        atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1483612
        Files :

        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c
        Hudson added a comment -

        Integrated in Hadoop-Yarn-trunk #212 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/212/)
        HADOOP-9566. Performing direct read using libhdfs sometimes raises SIGPIPE (which in turn throws SIGABRT) causing client crashes. Contributed by Colin Patrick McCabe. (Revision 1483612)

        Result = FAILURE
        atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1483612
        Files :

        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c
        Hudson added a comment -

        Integrated in Hadoop-Hdfs-trunk #1401 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1401/)
        HADOOP-9566. Performing direct read using libhdfs sometimes raises SIGPIPE (which in turn throws SIGABRT) causing client crashes. Contributed by Colin Patrick McCabe. (Revision 1483612)

        Result = FAILURE
        atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1483612
        Files :

        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c
        Hudson added a comment -

        Integrated in Hadoop-Mapreduce-trunk #1428 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1428/)
        HADOOP-9566. Performing direct read using libhdfs sometimes raises SIGPIPE (which in turn throws SIGABRT) causing client crashes. Contributed by Colin Patrick McCabe. (Revision 1483612)

        Result = FAILURE
        atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1483612
        Files :

        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
        • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c
        Arun C Murthy made changes -
        Status Resolved [ 5 ] Closed [ 6 ]

          People

          • Assignee: Colin Patrick McCabe
          • Reporter: Lenni Kuff
          • Votes: 0
          • Watchers: 5
