Thanks a lot for the patch, Todd.
I have taken a quick look at the patch. Yes, this approach should work as well.
Blocks will get processed for all the ops, so blocks matching the current genStamp will be processed in the current iteration, and blocks with future genStamps will get postponed again.
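To make sure we are on the same page, here is a toy sketch of that postpone-and-retry behavior (hypothetical names; this is not the actual BlockManager code, just an illustration of processing blocks that match the current genStamp and re-queuing the future ones):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class GenStampQueue {
    // Hypothetical stand-in for a reported block awaiting processing.
    record PendingBlock(long blockId, long genStamp) {}

    // Process blocks whose genStamp is at or below the current one;
    // re-queue blocks with a future genStamp for a later iteration.
    static List<PendingBlock> processIteration(Deque<PendingBlock> queue,
                                               long currentGenStamp) {
        List<PendingBlock> processed = new ArrayList<>();
        int n = queue.size();
        for (int i = 0; i < n; i++) {
            PendingBlock b = queue.poll();
            if (b.genStamp() <= currentGenStamp) {
                processed.add(b);   // matches current genStamp: handle now
            } else {
                queue.offer(b);     // future genStamp: postpone again
            }
        }
        return processed;
    }

    public static void main(String[] args) {
        Deque<PendingBlock> q = new ArrayDeque<>(List.of(
                new PendingBlock(1, 100),
                new PendingBlock(2, 101),   // future genStamp, gets postponed
                new PendingBlock(3, 100)));
        List<PendingBlock> done = processIteration(q, 100);
        System.out.println(done.size() + " processed, " + q.size() + " postponed");
    }
}
```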
A few comments on the patch (I did not check for any javadoc issues, since you already mentioned you will work on the javadocs):
+ // TODO: why do we need an hflush for this test case to fail?
As I remember, this is just added to ensure that the current packet will be enqueued and the block will get allocated.
Otherwise, content smaller than 64K may not be flushed at that time.
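The buffering effect can be illustrated with a plain java.io analogy (this is not the HDFS client API; `BufferedOutputStream` with a 64K buffer stands in for the DFSOutputStream packet buffering, so a small write is not visible on disk until an explicit flush):

```java
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class FlushDemo {
    // Returns {bytesOnDiskBeforeFlush, bytesOnDiskAfterFlush}.
    static long[] demo() throws IOException {
        Path p = Files.createTempFile("flushdemo", ".bin");
        try (OutputStream raw = Files.newOutputStream(p);
             BufferedOutputStream out = new BufferedOutputStream(raw, 64 * 1024)) {
            out.write("data".getBytes());   // tiny payload, well under 64K
            long before = Files.size(p);    // still sitting in the client-side buffer
            out.flush();                    // analogous to hflush(): push it out now
            long after = Files.size(p);
            return new long[] {before, after};
        } finally {
            Files.deleteIfExists(p);
        }
    }

    public static void main(String[] args) throws IOException {
        long[] sizes = demo();
        System.out.println(sizes[0] + " -> " + sizes[1]);
    }
}
```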
DFSTestUtil.appendFile(fs, fileToAppend, "data");
Having multiple append calls would give regression coverage for the case where we have many genStamps: the current ones get processed in order, and the future ones get postponed.
// Wait till DN reports blocks
Does this comment need to be updated?
Do we need to change the variable name, since the blocks are not declared invalid yet?
I will take a deeper look at the patch again tomorrow. (Not able to concentrate much, as I am traveling today.)