Details
- Type: Bug
- Status: Closed
- Priority: Blocker
- Resolution: Fixed
- Fix Version/s: trunk and 0.11
Description
After rev r916868 (the fix for COUCHDB-597), a pull replication of a document that has a large attachment no longer works.
The problem is in couch_rep_att.erl:
convert_stub(#att{data=stub, name=Name} = Attachment,
        {#http_db{} = Db, Id, Rev}) ->
    {Pos, [RevId|_]} = Rev,
    Request = Db#http_db{
        resource = lists:flatten([couch_util:url_encode(Id), "/",
            couch_util:url_encode(Name)]),
        qs = [{rev, couch_doc:rev_to_str({Pos, RevId})}]
    },
    Ref = make_ref(),
    RcvFun = fun() ->
        Bin = attachment_receiver(Ref, Request),
        cleanup(),
        Bin
    end,
    Attachment#att{data=RcvFun}.
The cleanup/0 function cannot be called there: when the attachment arrives in multiple chunks, the call to cleanup/0 after the first chunk silently discards all the subsequent ones:
cleanup() ->
    receive
    {ibrowse_async_response, _, _} ->
        %% TODO maybe log, didn't expect to have data here
        cleanup();
    {ibrowse_async_response_end, _} ->
        cleanup();
    {ibrowse_async_headers, _, _, _} ->
        cleanup()
    after 0 ->
        erase(),
        ok
    end.
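To see why this is destructive, here is a minimal model of the interaction — a Python sketch, not the actual Erlang; the `mailbox` deque and the function names are hypothetical stand-ins for the process mailbox and the receive loop:

```python
from collections import deque

def cleanup(mailbox):
    # Models cleanup/0's "receive ... after 0" loop: drain every
    # pending message immediately, discarding each one.
    while mailbox:
        mailbox.popleft()  # silently discarded

# Three chunks of one attachment are already in flight (queued).
mailbox = deque([b"chunk1", b"chunk2", b"chunk3"])

first = mailbox.popleft()  # the receiver takes the first chunk
cleanup(mailbox)           # runs after EVERY chunk in RcvFun

# The remaining chunks are gone; the next receiver call finds nothing.
print(first, len(mailbox))  # b'chunk1' 0
```

Because cleanup/0 runs inside RcvFun, it fires after every chunk, not just once at the end of the attachment.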
If you look into couch_db.erl, you'll see that the attachment receiver function may be called multiple times:
flush_att(Fd, #att{data=Fun, att_len=AttLen}=Att) when is_function(Fun) ->
    with_stream(Fd, Att, fun(OutputStream) ->
        write_streamed_attachment(OutputStream, Fun, AttLen)
    end).
write_streamed_attachment(_Stream, _F, 0) ->
    ok;
write_streamed_attachment(Stream, F, LenLeft) ->
    Bin = F(),
    ok = couch_stream:write(Stream, Bin),
    write_streamed_attachment(Stream, F, LenLeft - size(Bin)).
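The end-to-end interaction can be sketched the same way (again an illustrative Python model with hypothetical names, not the CouchDB code): the write loop keeps calling the receiver fun until the declared attachment length is consumed, so once a per-call cleanup has discarded the pending chunks, the loop can never complete:

```python
from collections import deque

def make_receiver(mailbox, cleanup_after_each):
    # Models RcvFun: take one chunk, then (buggily) drain the mailbox.
    def rcv():
        if not mailbox:
            raise RuntimeError("receiver starved: chunks were discarded")
        chunk = mailbox.popleft()
        if cleanup_after_each:
            mailbox.clear()  # cleanup/0 throwing away the rest
        return chunk
    return rcv

def write_streamed_attachment(out, f, len_left):
    # Mirrors the Erlang loop: call f() until len_left reaches 0.
    while len_left > 0:
        chunk = f()
        out.append(chunk)
        len_left -= len(chunk)
    return b"".join(out)

chunks = [b"aaaa", b"bbbb", b"cccc"]

# Cleanup deferred until the whole attachment is read: all 12 bytes arrive.
ok = write_streamed_attachment([], make_receiver(deque(chunks), False), 12)
assert ok == b"aaaabbbbcccc"

# Cleanup after the first chunk: the loop starves on the second call.
try:
    write_streamed_attachment([], make_receiver(deque(chunks), True), 12)
except RuntimeError as e:
    print(e)  # receiver starved: chunks were discarded
```

This is why the bug only shows up for large attachments: small ones arrive in a single chunk, so the premature cleanup has nothing left to discard.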
This is a serious issue. However, simply removing the call to cleanup/0 may cause COUCHDB-597 to reappear, so I am not supplying a patch here.