Traffic Server
TS-1247

Data from a response transform in a plugin causes excess memory usage

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Duplicate
    • Affects Version/s: 3.0.4
    • Fix Version/s: 3.3.2
    • Component/s: HTTP
    • Labels:
      None

      Description

      If you have a plugin implementing a response transform and data is being sent to the UA slowly, TS will use huge amounts of memory for larger files (e.g. 16G VSize for a 50M file).
      This seems to be several problems combined. First, the MIOBuffer used for the chunking tunnel uses 2M buffers, and when handling data from a transform it ends up putting just the chunk markers in one buffer, then gets a buffer from the transform, then allocates a new 2M buffer, and so on. Once I fix that, 50M files are OK, but 1G files still exhaust memory: they end up entirely in memory because there is no flow control in some of the VConnections between a transform and the chunking tunnel. I wrote a rough patch (which still needs revising) that fixed the problem by making TransformTerminus not send data when the destination it is writing to already holds too much data; I also made my plugin check how much data the transform's destination holds and skip writing in that case. But it seems like I need to find the other side of both of those and suppress WRITE_READY (and maybe IMMEDIATE) event sending when the buffers are full, and I haven't been able to figure out what the other side is for those yet.

        Issue Links

          Activity

          Alan M. Carroll added a comment -

          As far as I can tell, this is a duplicate of TS-1457, with the flow control issue covered in TS-1496. I am going to mark this closed with the expectation that it will be resolved by those other two issues.

          Leif Hedstrom added a comment -

          Moving to 3.3.2.

          William Bardwell added a comment -

          So the fixes that I came up with were: checking TSIOBufferReaderAvail(TSVIOReaderGet(output_vio)) in my plugin's transform and skipping the write if that got big, plus the following patch. But these seem like the wrong places to do the checks; it seems like the checks should be on reading, not writing. And changing the buffer size might end up affecting other cases that are not broken, though maybe there is a better way to do the chunking that allocates a tiny buffer just for that purpose instead of putting a couple of bytes in the 2M buffer. Anyone have any better ideas?

--- proxy/http/HttpSM.cc 2012-06-26 10:57:38.000000000 -0400
+++ proxy/http/HttpSM.cc 2012-06-26 11:28:46.000000000 -0400
@@ -5936,6 +5936,9 @@
   if (t_state.client_info.receive_chunked_response) {
     tunnel.set_producer_chunking_action(p, client_response_hdr_bytes, TCA_CHUNK_CONTENT);
+    // Reset the chunk buffer size down, since it will only be used for chunk markers in this case.
+    // The plugin will produce chunks that get interleaved with these buffers.
+    p->chunk_buffer_size_index = 4; // 2048
   }
   return p;

--- proxy/http/HttpTunnel.cc 2012-06-26 10:57:20.000000000 -0400
+++ proxy/http/HttpTunnel.cc 2012-06-26 11:28:46.000000000 -0400
@@ -148,7 +148,7 @@
   if (p->do_chunking) {
     dechunked_reader = buffer_in->mbuf->clone_reader(buffer_in);
     dechunked_reader->mbuf->water_mark = min_block_transfer_bytes;
-    chunked_buffer = new_MIOBuffer(MAX_IOBUFFER_SIZE);
+    chunked_buffer = new_MIOBuffer(p->chunk_buffer_size_index);
     chunked_size = 0;
   } else {
     ink_assert(p->do_dechunking || p->do_chunked_passthru);
@@ -447,7 +447,7 @@
     buffer_start(NULL), vc_type(HT_HTTP_SERVER), chunking_action(TCA_PASSTHRU_DECHUNKED_CONTENT),
     do_chunking(false), do_dechunking(false), do_chunked_passthru(false),
     init_bytes_done(0), nbytes(0), ntodo(0), bytes_read(0), handler_state(0), num_consumers(0), alive(false),
-    read_success(false), name(NULL)
+    read_success(false), name(NULL), chunk_buffer_size_index(MAX_IOBUFFER_SIZE)
 {
 }

--- proxy/http/HttpTunnel.h 2012-06-26 10:57:34.000000000 -0400
+++ proxy/http/HttpTunnel.h 2012-06-26 11:28:46.000000000 -0400
@@ -191,6 +191,8 @@
   bool alive;
   bool read_success;
   const char *name;
+
+  int chunk_buffer_size_index;
 };

 class PostDataBuffers

--- proxy/Transform.cc 2012-06-26 10:57:03.000000000 -0400
+++ proxy/Transform.cc 2012-06-26 11:28:46.000000000 -0400
@@ -72,6 +72,7 @@
 #define ART 1
 #define AGIF 2
+#define TRANSFORM_THROTTLE_LIMIT (2 * 1024 * 1024)

 TransformProcessor transformProcessor;
@@ -198,6 +199,10 @@
   }

   towrite = m_write_vio.ntodo();
+  // The thing feeding this ends up with the whole file if we don't throttle
+  // the data, when getting data from plugin transform and with the UA
+  // connection being slow.
+  if (m_read_vio.get_writer()->max_read_avail() < TRANSFORM_THROTTLE_LIMIT || towrite <= 0) {
   if (towrite > 0) {
     if (towrite > m_write_vio.get_reader()->read_avail()) {
       towrite = m_write_vio.get_reader()->read_avail();
@@ -222,6 +227,7 @@
     }
   } else {
     m_write_vio._cont->handleEvent(VC_EVENT_WRITE_COMPLETE, &m_write_vio);
   }
+  } // End throttle if

   // We could have closed on the write callback
   if (m_closed != 0 && m_tvc->m_closed != 0) {


            People

            • Assignee:
              Alan M. Carroll
              Reporter:
              William Bardwell
            • Votes:
              0
              Watchers:
              2

              Dates

              • Created:
                Updated:
                Resolved:
