Details
- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Won't Fix
Description
Chatting with Daniel Pol, who is experimenting with the bucketcache: he wants blocks to be the size specified in the configuration and no bigger. His hardware fetches pages of 4k, so a block that has 4k of payload but also carries its own header and the header of the next block (which helps figure out what's next when scanning) ends up being 4203 bytes or so, and this then translates into two seeks per block fetch.
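For illustration, a rough sketch of the page-fetch arithmetic described above; the header size and names below are made-up example values, not the actual HFile block layout:

{code:java}
// Rough sketch of why a 4k-payload block costs two page fetches once the
// header overhead pushes its on-disk size past the 4k page boundary.
// HEADER_SIZE here is a hypothetical value, not the real HFile header size.
public class BlockFetchMath {
  static final int PAGE_SIZE = 4096;             // hardware fetch unit
  static final int CONFIGURED_BLOCK_SIZE = 4096; // configured payload target
  static final int HEADER_SIZE = 54;             // hypothetical per-block header overhead

  public static void main(String[] args) {
    // On-disk size: payload, plus this block's header, plus the next block's
    // header that is read along with it to know what comes next when scanning.
    int onDiskSize = CONFIGURED_BLOCK_SIZE + HEADER_SIZE + HEADER_SIZE;
    int pagesPerFetch = (onDiskSize + PAGE_SIZE - 1) / PAGE_SIZE; // ceiling division
    System.out.println("on-disk size=" + onDiskSize + " bytes, pages per fetch=" + pagesPerFetch);
    // Prints 2 pages: the spill past 4096 bytes is what costs the second seek.
  }
}
{code}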
This issue is about figuring out what it would take to stay inside the configured size boundary when writing out blocks.
If that is not possible, we should at least give back a better signal on what to do so you could fit inside a particular size constraint.
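A minimal sketch of one possible approach, assuming a hypothetical writer that knows its fixed per-block overhead up front. The class, field names, and the idea of subtracting the overhead from the payload target are assumptions for illustration, not the actual HFile writer behaviour:

{code:java}
// Hypothetical sketch: size the payload target so payload + overhead stays
// within the configured boundary, rather than treating the configured size
// as the payload size alone. Also gives a clearer signal when the constraint
// cannot be met at all.
public class BoundedBlockSizer {
  private final int configuredBlockSize;
  private final int perBlockOverhead; // headers, checksums, etc.

  public BoundedBlockSizer(int configuredBlockSize, int perBlockOverhead) {
    this.configuredBlockSize = configuredBlockSize;
    this.perBlockOverhead = perBlockOverhead;
  }

  /** Payload bytes to accumulate before closing a block so the on-disk block fits the boundary. */
  public int payloadTarget() {
    int target = configuredBlockSize - perBlockOverhead;
    if (target <= 0) {
      // Better signal back to the operator: this configuration can never fit.
      throw new IllegalArgumentException("Configured block size " + configuredBlockSize
          + " is too small to hold the fixed per-block overhead of " + perBlockOverhead + " bytes");
    }
    return target;
  }
}
{code}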