Details
Type: Improvement
Status: Resolved
Priority: Major
Resolution: Fixed
Fix Version/s: 0.9.2-incubating, 0.9.3
Description
Note: The original title of this ticket was: "Add Option to Config Message handling strategy when connection timeout".
This is to address a concern brought up during the work at STORM-297:
Robert Joseph Evans wrote: Your logic makes sense to me on why these calls are blocking. My biggest concern around the blocking is in the case of a worker crashing. If a single worker crashes, this can block the entire topology from executing until that worker comes back up. In some cases I can see that being something that you would want. In other cases I can see speed being the primary concern, and some users would like to get partial data fast, rather than accurate data later.
Could we make it configurable on a follow up JIRA where we can have a max limit to the buffering that is allowed, before we block, or throw data away (which is what zeromq does)?
If a worker crashes suddenly, how should we handle the messages that were supposed to be delivered to that worker?
1. Should we buffer all messages indefinitely?
2. Should we block message sending until the connection is restored?
3. Should we configure a buffer limit, buffer messages up to that limit, and block once it is reached?
4. Should we neither block nor buffer excessively, but instead drop the messages and rely on Storm's built-in failover mechanism? (See the sketch after this list.)
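The sketch below illustrates the difference between options 3 and 4: a bounded send buffer whose overflow policy either blocks the sending thread until space frees up, or drops the message and leaves recovery to Storm's ack/fail replay. This is a minimal illustration, not Storm's actual messaging code; the names BoundedSendBuffer, OverflowPolicy, enqueue, and dequeue are hypothetical.
{code:java}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/**
 * Hypothetical sketch of options 3 and 4: buffer outgoing messages up to a
 * configurable limit, then either block the sender (option 3) or drop the
 * message and rely on Storm's failover/replay mechanism (option 4).
 */
public class BoundedSendBuffer<T> {

    public enum OverflowPolicy { BLOCK, DROP }

    private final BlockingQueue<T> buffer;
    private final OverflowPolicy policy;

    public BoundedSendBuffer(int capacity, OverflowPolicy policy) {
        this.buffer = new ArrayBlockingQueue<>(capacity);
        this.policy = policy;
    }

    /**
     * Returns true if the message was buffered, false if it was dropped.
     * With BLOCK, the calling (transfer) thread waits for free space,
     * which is exactly the stall the STORM-297 review comment warns about.
     */
    public boolean enqueue(T message) throws InterruptedException {
        if (policy == OverflowPolicy.BLOCK) {
            buffer.put(message);      // waits until capacity frees up
            return true;
        }
        return buffer.offer(message); // non-blocking; drops on overflow
    }

    /** Drained by the connection once it is (re-)established. */
    public T dequeue() throws InterruptedException {
        return buffer.take();
    }
}
{code}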
Attachments
Issue Links
- contains
  - STORM-404 Worker on one machine crashes due to a failure of another worker on another machine (Resolved)
- is related to
  - STORM-297 Storm Performance cannot be scaled up by adding more CPU cores (Resolved)
  - STORM-547 Build Problem(s) (Closed)
- relates to
  - STORM-510 Netty messaging client blocks transfer thread on reconnect (Closed)
  - STORM-677 Maximum retries strategy may cause data loss (Closed)
  - STORM-350 Update disruptor to latest version (3.3.2) (Resolved)