Details
- Type: Bug
- Status: Closed
- Priority: Minor
- Resolution: Won't Fix
- Affects Version/s: 0.10.2, 0.11
- Fix Version/s: None
- Environment: Software: Using Python 2.6 (couchdbkit or httplib) or curl to submit. The 0.11 is the Debian unstable package; the 0.10.2 install is from Ubuntu. Hardware: CouchDB 0.11 is running on a Sun Fire X4600 M2, with NFS-mounted storage on a Linux software RAID10 (4x WD20EARS SATA drives). However, the same issue arises using the server's 3 Gb/s (10k RPM) SAS drives. The NFS share is mounted over dual Intel gigabit NICs in a round-robin configuration.
- Skill Level: Regular Contributors Level (Easy to Medium)
Description
Situation:
Saving documents in bulk (batches of 1,000, 4,000, and 10,000 have been tested) to a single database results in degraded performance, followed by a string of timeouts. The timeouts are not logged by CouchDB, and the HTTP interface becomes unusable for a period. It then recovers and rapidly processes the next batch of jobs (i.e. the timeout is temporary).
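For reference, bulk saves go to CouchDB's _bulk_docs endpoint. A minimal sketch of the kind of submission involved, using the Python standard library (host, port, database name, and document contents are placeholders, not from the original test; the report used Python 2.6's httplib, this uses the Python 3 equivalent):

```python
import json


def make_bulk_docs_payload(docs):
    # CouchDB's _bulk_docs API expects a JSON body of the form {"docs": [...]}.
    return json.dumps({"docs": docs})


def save_batch(db, docs, host="localhost", port=5984):
    # POST one batch (e.g. 1,000-10,000 docs) to /<db>/_bulk_docs.
    import http.client
    conn = http.client.HTTPConnection(host, port)
    conn.request("POST", "/%s/_bulk_docs" % db,
                 make_bulk_docs_payload(docs),
                 {"Content-Type": "application/json"})
    resp = conn.getresponse()
    return resp.status, resp.read()
```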
Replication:
- I have had trouble reproducing the behaviour with bulk document saves (though I have been working to do so); it appears to occur only after an extended period;
- I can reproduce the behaviour by submitting a large number of individual saves (one document per request) in rapid succession.
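The second reproduction path is one PUT per document rather than a single _bulk_docs POST. A sketch of that pattern, again with placeholder host, port, and database names:

```python
import json


def doc_url(db, doc_id):
    # Each individual save is a separate PUT to /<db>/<doc_id>.
    return "/%s/%s" % (db, doc_id)


def save_individually(db, docs, host="localhost", port=5984):
    # Submit documents one at a time in rapid succession; this is the
    # request pattern that reproduced the degraded performance/timeouts.
    import http.client
    conn = http.client.HTTPConnection(host, port)
    statuses = []
    for d in docs:
        conn.request("PUT", doc_url(db, d["_id"]), json.dumps(d),
                     {"Content-Type": "application/json"})
        resp = conn.getresponse()
        resp.read()  # drain the response so the connection can be reused
        statuses.append(resp.status)
    return statuses
```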
Diagnostics:
- I tried both true and false for delayed_commits, just to rule that out;
- Testing outside of CouchDB (Postgres, file transfers, streaming, and other attempts to hammer the I/O) revealed no issues with the systems involved.
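For completeness, delayed_commits lives in the [couchdb] section of the server's local.ini (it can also be toggled at runtime through the _config API); both values were tested:

```
[couchdb]
delayed_commits = false   ; also tested with true
```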
Functional Workarounds:
- I have sharded the database in question.