Mesos / MESOS-9877

Possible segfault due to spurious EPOLLHUP.


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved

    Description

      On Linux, adding a TCP socket to an epoll instance and calling `epoll_wait()` before `connect()` has been called on that socket returns an EPOLLHUP event for it. This can be verified with the following code snippet:

      #include <sys/epoll.h>
      #include <sys/socket.h>
      
      #include <netinet/in.h>
      
      int main() {
              int epfd = epoll_create1(0);

              // Create a TCP socket, but never call connect() on it.
              int s = socket(AF_INET, SOCK_STREAM, IPPROTO_IP);

              struct epoll_event event;
              event.events = EPOLLIN;
              event.data.u64 = s; // user data
              epoll_ctl(epfd, EPOLL_CTL_ADD, s, &event);

              // Returns immediately with an EPOLLHUP event for `s`,
              // long before the 500 ms timeout expires.
              struct epoll_event events[128];
              epoll_wait(epfd, events, 128, 500 /*ms*/);
      }
      
      // Run using `strace ./a.out`.
      

      Libevent then turns the EPOLLHUP into a read/write event:

                      // epoll.c
                      if (what & (EPOLLHUP|EPOLLERR)) {
                              ev = EV_READ | EV_WRITE;
                      }
                      [...]
      

      This means that if another thread is inside `epoll_wait()` while such an fd is added, the wait returns immediately with a spurious read/write event for the new fd.
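
      Below is a minimal, stand-alone sketch of that scenario (illustrative only, not taken from libprocess): one thread blocks in `epoll_wait()`, another thread then registers a not-yet-connected TCP socket, and the waiter wakes up immediately with EPOLLHUP for the new fd.

      #include <sys/epoll.h>
      #include <sys/socket.h>

      #include <netinet/in.h>

      #include <chrono>
      #include <cstdio>
      #include <thread>

      // Build e.g. with: g++ -std=c++11 -pthread spurious_epollhup.cc
      int main() {
              int epfd = epoll_create1(0);

              // Thread that blocks in epoll_wait() with a generous timeout.
              std::thread waiter([epfd]() {
                      struct epoll_event events[128];
                      int n = epoll_wait(epfd, events, 128, 5000 /*ms*/);
                      for (int i = 0; i < n; i++) {
                              std::printf(
                                  "woke up: fd %llu, events 0x%x\n",
                                  (unsigned long long) events[i].data.u64,
                                  events[i].events);
                      }
              });

              // Give the waiter time to enter epoll_wait().
              std::this_thread::sleep_for(std::chrono::milliseconds(100));

              // Register a TCP socket that has not been connect()ed yet;
              // the waiter wakes up immediately with EPOLLHUP for it.
              int s = socket(AF_INET, SOCK_STREAM, IPPROTO_IP);
              struct epoll_event event;
              event.events = EPOLLIN;
              event.data.u64 = s;
              epoll_ctl(epfd, EPOLL_CTL_ADD, s, &event);

              waiter.join();
      }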

      Apparently, either some of our own code or libevent does not handle this case correctly. For example, here is the syscall sequence of a failing `SSLTest.VerifyBadCA` run:

      [pid 12012] 1562077806.912193 socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, IPPROTO_IP) = 8
      [pid 12012] 1562077806.912244 epoll_ctl(3, EPOLL_CTL_ADD, 8, {EPOLLIN, {u32=8, u64=8}}) = 0
      [pid 12021] 1562077806.912261 <... epoll_wait resumed> [{EPOLLHUP, {u32=8, u64=8}}], 32, 100) = 1
      [pid 12012] 1562077806.912269 write(6, "\1\0\0\0\0\0\0\0", 8) = 8
      [pid 12012] 1562077806.912303 epoll_ctl(3, EPOLL_CTL_MOD, 8, {EPOLLIN|EPOLLOUT, {u32=8, u64=8}}) = 0
      [pid 12021] 1562077806.912371 write(8, "\26\3\1\0k\1\0\0g\3\3\r~\336VZ\227I\216\260\304\356\10\200\327\271\320\td\304'O"..., 112) = -1 EPIPE (Broken pipe)
      [pid 12021] 1562077806.912395 --- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=12012, si_uid=1000} ---
      [pid 12021] 1562077806.912415 epoll_ctl(3, EPOLL_CTL_MOD, 8, {EPOLLOUT, {u32=8, u64=8}}) = 0
      [pid 12021] 1562077806.912435 epoll_ctl(3, EPOLL_CTL_DEL, 8, 0x7fc35be23afc) = 0
      [pid 12021] 1562077806.912460 connect(8, {sa_family=AF_INET, sin_port=htons(45067), sin_addr=inet_addr("127.0.1.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
      [pid 12011] 1562077806.912533 <... epoll_wait resumed> [{EPOLLIN, {u32=7, u64=7}}], 32, 11) = 1
      [pid 12021] 1562077806.912543 epoll_ctl(3, EPOLL_CTL_ADD, 8, {EPOLLIN, {u32=8, u64=8}}) = 0
      [pid 12011] 1562077806.912562 epoll_ctl(3, EPOLL_CTL_DEL, 7, 0x7f1dbcee0a9c <unfinished ...>
      [pid 12021] 1562077806.912571 epoll_ctl(3, EPOLL_CTL_MOD, 8, {EPOLLIN|EPOLLOUT, {u32=8, u64=8}} <unfinished ...>
      [pid 12011] 1562077806.912580 <... epoll_ctl resumed> ) = 0
      [pid 12021] 1562077806.912586 <... epoll_ctl resumed> ) = 0
      [pid 12021] 1562077806.912599 epoll_wait(3, [{EPOLLIN, {u32=6, u64=6}}, {EPOLLOUT, {u32=8, u64=8}}], 32, 100) = 2
      [pid 12021] 1562077806.912636 write(8, "\26\3\1\0k\1\0\0g\3\3\r~\336VZ\227I\216\260\304\356\10\200\327\271\320\td\304'O"..., 112) = 112
      [pid 12021] 1562077806.912684 epoll_ctl(3, EPOLL_CTL_MOD, 8, {EPOLLIN, {u32=8, u64=8}}) = 0
      [pid 12021] 1562077806.912705 epoll_wait(3,  <unfinished ...>
      [pid 12011] 1562077806.912954 write(2, "W0702 16:30:06.912921 12011 proc"..., 113W0702 16:30:06.912921 12011 process.cpp:844] Failed to recv on socket 9 to peer '127.0.0.1:52578': Decoder error
      ) = 113
      [pid 12011] 1562077806.913004 epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLIN, {u32=7, u64=7}}) = 0
      [pid 12021] 1562077806.913088 <... epoll_wait resumed> [{EPOLLIN, {u32=8, u64=8}}], 32, 100) = 1
      [pid 12021] 1562077806.913119 epoll_ctl(3, EPOLL_CTL_DEL, 8, 0x7fc35be23afc) = 0
      [pid 12011] 1562077806.913159 epoll_wait(3,  <unfinished ...>
      [pid 12021] 1562077806.913168 write(2, "SETTING bev TO NULL 1\n", 22SETTING bev TO NULL 1
      ) = 22
      [pid 12021] 1562077806.913219 epoll_wait(3,  <unfinished ...>
      [pid 12003] 1562077806.913233 write(6, "\1\0\0\0\0\0\0\0", 8 <unfinished ...>
      [pid 12011] 1562077806.913253 <... epoll_wait resumed> [{EPOLLIN, {u32=6, u64=6}}], 32, 14990) = 1
      [pid 12003] 1562077806.913293 <... write resumed> ) = 8
      [pid 12011] 1562077806.913375 epoll_wait(3,  <unfinished ...>
      [pid 12012] 1562077806.913412 write(1, "../../../3rdparty/libprocess/src"..., 122) = 122
      [pid 12012] 1562077806.913449 write(6, "\1\0\0\0\0\0\0\0", 8 <unfinished ...>
      [pid 12021] 1562077806.913464 <... epoll_wait resumed> [{EPOLLIN, {u32=6, u64=6}}], 32, 99) = 1
      [pid 12012] 1562077806.913475 <... write resumed> ) = 8
      [pid 12021] 1562077806.913515 --- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x128} ---
      [pid 12020] 1562077807.003305 +++ killed by SIGSEGV (core dumped) +++
      

      As can be seen above, the first (spurious) wakeup caused the `ssl-client` to attempt to write the SSL Client Hello to the not-yet-connected socket, so the write returned `-1` with `EPIPE` and a SIGPIPE was delivered.

      Later on, the write is retried once the socket has actually connected and is ready for writing. However, there is a race in the cleanup: the two callbacks below can run out of order, so that `event_callback()` resets `bev` to `nullptr` before `recv_callback()` dereferences it, causing the segfault in the latter:

          // LibeventSSLSocketImpl::event_callback()
          if (current_connect_request.get() != nullptr) {
            [...]
            bev = nullptr;
          }

          // LibeventSSLSocketImpl::recv_callback()
          size_t length = bufferevent_read(bev, request->data, request->size);

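      The crash pattern is therefore a callback dereferencing a pointer that another callback has already reset. Below is a minimal, stand-alone sketch of a defensive null check for that pattern; the class, the `std::mutex`, and all names are illustrative assumptions, not the actual libprocess code or a proposed fix.

      #include <cstdio>
      #include <mutex>

      // Illustrative stand-in for the bufferevent owned by the socket impl.
      struct Bufferevent { char buffer[16]; };

      struct SocketImpl {
              std::mutex lock;             // assumed: something must guard `bev`
              Bufferevent* bev = nullptr;

              // Analogue of event_callback(): tears the bufferevent down.
              void event_callback() {
                      std::lock_guard<std::mutex> guard(lock);
                      delete bev;
                      bev = nullptr;
              }

              // Analogue of recv_callback(): must not assume `bev` is valid.
              void recv_callback() {
                      std::lock_guard<std::mutex> guard(lock);
                      if (bev == nullptr) {
                              // The connection was already torn down; bail out
                              // instead of dereferencing a null pointer.
                              std::printf("recv after teardown, ignoring\n");
                              return;
                      }
                      // ... safe to read from `bev` here ...
              }
      };

      int main() {
              SocketImpl impl;
              impl.bev = new Bufferevent();

              // Same ordering as in the trace above: teardown first, then the
              // receive callback; the null check avoids the segfault.
              impl.event_callback();
              impl.recv_callback();
      }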

          People

            Assignee: Unassigned
            Reporter: Benno Evers (bennoe)