When we fetch a file for replication, we close the request output stream after writing the file, which ruins the connection for reuse.
We can't close response output streams; we need to reuse these connections. If we do close them, clients hit connection problems when they try to reuse a connection from their pool.
At some point the above was addressed during refactoring. We should remove these neutered closes and review our close shield code.
If you are here to track down why this is done:
Connection reuse requires that we read all streams fully and do not close them - the container itself must manage request and response streams. If we allow them to be closed, not only do we lose some connection reuse, but we can also trigger spurious client errors that lead to expensive recoveries for no reason. The spec allows us to count on the container to manage streams; our job is simply to never close them and to always read them fully, on both the client and server side.
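The "never close, let the container manage it" rule is usually enforced with a close-shield wrapper. Below is a minimal sketch of the idea, not the actual project class: close() flushes but deliberately does not close the underlying stream, so downstream code that habitually calls close() cannot ruin the container-managed connection.

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative close-shield wrapper (the class name is an assumption, not
// necessarily what our close shield code calls it).
public class CloseShieldOutputStream extends FilterOutputStream {
    public CloseShieldOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        // Delegate directly; FilterOutputStream's default writes byte-at-a-time.
        out.write(b, off, len);
    }

    @Override
    public void close() throws IOException {
        // Flush what we wrote, but leave the underlying stream open so the
        // container can manage (and reuse) the connection.
        out.flush();
    }
}
```

After close() is called on the shield, the underlying stream is still open and usable by the container.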
Java itself can help by reading streams fully up to some small default amount of unread stream slack, but that is very dangerous to count on, so we always manually eat up anything left on the streams that our normal logic ends up not reading, for whatever reason.
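The manual "eat up the rest of the stream" step is just a drain loop that reads and discards until EOF without closing. A minimal sketch, with the helper name and buffer size as assumptions:

```java
import java.io.IOException;
import java.io.InputStream;

public class StreamUtil {
    /**
     * Read and discard everything remaining on the stream, without closing it,
     * so the underlying connection stays eligible for reuse.
     * Returns the number of bytes drained.
     */
    public static long drain(InputStream in) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
        }
        return total;
    }
}
```

This is called on any request or response stream our normal logic abandoned partway through, before the connection is returned to the pool.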
We also cannot call abort or sendError without ruining the connection. These should be options of very last resort (requiring a blood sacrifice), or used only when shutting down.