We stood up a cluster that processes over 350,000 events per second, each event carrying a fixed 2 KB payload. The storage required to retain that much data for an hour is beyond what we want to pay for at AWS, and we have no requirement to keep the log segments around for an extended period after processing.
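For scale, a back-of-the-envelope calculation of the storage involved (raw payload bytes only; replication and broker overhead are ignored here):

```python
# Rough storage math for the figures above (payload bytes only, no overhead):
events_per_sec = 350_000
payload_bytes = 2 * 1024                           # fixed 2 KB payload per event
bytes_per_sec = events_per_sec * payload_bytes     # 716,800,000 B/s (~717 MB/s)
bytes_per_hour = bytes_per_sec * 3600              # ~2.58 TB for an hour of retention
bytes_per_30min = bytes_per_sec * 1800             # ~1.29 TB for a 30-minute window
print(bytes_per_hour / 1e12, bytes_per_30min / 1e12)
```

Halving the retention window from an hour to 30 minutes roughly halves the live storage footprint of the cluster.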
It would be tremendously valuable for us to be able to define log retention in minutes rather than hours. For example, we would prefer to keep only 30 minutes of logs around.
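A minute-granularity setting might look like the following broker property (a sketch of the requested feature; the property name `log.retention.minutes` is an assumption here, mirroring the existing hour-based `log.retention.hours`):

```properties
# server.properties (sketch): keep log segments for only 30 minutes
log.retention.minutes=30
```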
||Transition||Time In Source Status||Execution Times||Last Executer||Last Execution Date||
| |153d 2h 52m|1|Alin Vasile|23/Oct/13 22:38|
| |1d 16h 34m|1|Jun Rao|25/Oct/13 15:13|
| |6s|1|Jun Rao|25/Oct/13 15:13|
||Field||Original Value||New Value||
|Workflow|Apache Kafka Workflow [ 13051116 ]|no-reopen-closed, patch-avail [ 13054537 ]|
|Workflow|no-reopen-closed, patch-avail [ 12783696 ]|Apache Kafka Workflow [ 13051116 ]|
|Issue Type|New Feature [ 2 ]|Improvement [ 4 ]|
|Status|Resolved [ 5 ]|Closed [ 6 ]|
|Status|Patch Available [ 10002 ]|Resolved [ 5 ]|
|Assignee| |Alin Vasile [ avasile ]|
|Fix Version/s| |0.8.1 [ 12322960 ]|
|Resolution| |Fixed [ 1 ]|
|Status|Open [ 1 ]|Patch Available [ 10002 ]|