Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version: 4.4.16
- Fix Version: None
Description
In 4.4.16-SNAPSHOT, when I use the SharedInputBuffer class and do not drain the buffer via its 'read' methods, so that the buffer becomes completely full, I experience high CPU utilization. Looking at the 'SharedInputBuffer#consumeContent' implementation, I can see that the input is suspended when the buffer does not have enough space to accept the data transferred from the ContentDecoder. But even after the input is suspended, I noticed that the event mask associated with the session still included 'read'.
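To illustrate what I believe is happening (a hedged sketch, not the attached program; the class name is made up for illustration), the content callback keeps firing because the read interest appears to remain set even after suspendInput():

{code:java}
import java.io.IOException;

import org.apache.http.nio.ContentDecoder;
import org.apache.http.nio.IOControl;
import org.apache.http.nio.util.HeapByteBufferAllocator;
import org.apache.http.nio.util.SharedInputBuffer;

// Hedged sketch of the suspected spin (SpinIllustration is a made-up name).
class SpinIllustration {

    private final SharedInputBuffer buf =
            new SharedInputBuffer(16384, HeapByteBufferAllocator.INSTANCE);

    // Invoked by the I/O dispatch whenever the session is readable.
    void onContentReceived(ContentDecoder decoder, IOControl ioctrl) throws IOException {
        // With the buffer full and no consumer thread calling buf.read(...),
        // consumeContent() transfers 0 bytes and calls ioctrl.suspendInput().
        // If the session's event mask still contains the 'read' interest, the
        // selector immediately reports the channel readable again and this
        // callback spins, burning CPU.
        int transferred = buf.consumeContent(decoder, ioctrl);
    }
}
{code}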
Furthermore, SharedInputBuffer in 5.x is implemented to adjust the buffer's capacity so that it can always accommodate the incoming data (with no input suspension). Hence, I believe 5.x might not exhibit the behavior above. But unfortunately, moving to 5.x is not an option for me at the moment.
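By contrast, the 5.x approach can be pictured roughly like this (a simplified sketch of the 'grow instead of suspend' idea, not the actual 5.x source):

{code:java}
import java.nio.ByteBuffer;

// Simplified sketch of capacity adjustment: when incoming data does not fit,
// the buffer is reallocated to the required size instead of suspending input.
final class GrowableBufferSketch {

    private ByteBuffer buffer = ByteBuffer.allocate(16384);

    void ensureCapacity(int required) {
        if (buffer.remaining() < required) {
            ByteBuffer larger = ByteBuffer.allocate(buffer.position() + required);
            buffer.flip();   // switch to read mode to copy existing content
            larger.put(buffer);
            buffer = larger; // back in write mode with enough free space
        }
    }
}
{code}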
So, does 4.x really have a bug here under the above-mentioned conditions? If not, could you please explain this behavior?
Attaching a sample program that exhibits the problem. Please note that I have made the following alterations to the BasicAsyncResponseConsumer class (a sketch of such a consumer follows the lists below):
- The internal buffer is of type 'SharedInputBuffer'.
- The MAX_INITIAL_BUFFER_SIZE value is set to 16384 bytes.
Also,
- The backend sends a payload larger than MAX_INITIAL_BUFFER_SIZE.
- The SharedInputBuffer#read method is deliberately not invoked by any thread.
- The latest 4.x branch (4.4.16-SNAPSHOT) is used in the program.
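For reference, here is a hedged sketch of what such an altered consumer could look like. The class name and details are illustrative, not the attached program; it only assumes the HttpCore 4.x AbstractAsyncResponseConsumer and SharedInputBuffer APIs:

{code:java}
import java.io.IOException;

import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.entity.ContentType;
import org.apache.http.nio.ContentDecoder;
import org.apache.http.nio.IOControl;
import org.apache.http.nio.protocol.AbstractAsyncResponseConsumer;
import org.apache.http.nio.util.HeapByteBufferAllocator;
import org.apache.http.nio.util.SharedInputBuffer;
import org.apache.http.protocol.HttpContext;

// Illustrative stand-in for the altered BasicAsyncResponseConsumer.
class SharedBufferResponseConsumer extends AbstractAsyncResponseConsumer<HttpResponse> {

    private static final int MAX_INITIAL_BUFFER_SIZE = 16384; // altered value

    private volatile HttpResponse response;
    private volatile SharedInputBuffer buf;

    @Override
    protected void onResponseReceived(HttpResponse response) {
        this.response = response;
    }

    @Override
    protected void onEntityEnclosed(HttpEntity entity, ContentType contentType) throws IOException {
        // Fixed-size shared buffer; deliberately never drained via read(...).
        this.buf = new SharedInputBuffer(MAX_INITIAL_BUFFER_SIZE, HeapByteBufferAllocator.INSTANCE);
    }

    @Override
    protected void onContentReceived(ContentDecoder decoder, IOControl ioctrl) throws IOException {
        // Once buf is full, consumeContent() suspends the input; the reported
        // high CPU utilization is observed from this point onward.
        this.buf.consumeContent(decoder, ioctrl);
    }

    @Override
    protected HttpResponse buildResult(HttpContext context) {
        return this.response;
    }

    @Override
    protected void releaseResources() {
        this.response = null;
        this.buf = null;
    }
}
{code}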
You may run the 'Main' class of the program.
Your input is highly appreciated.