KAFKA-306

broker failure system test broken on replication branch

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.8.0
    • Fix Version/s: 0.8.0
    • Component/s: None
    • Labels:

      Description

      The system test in system_test/broker_failure is broken on the replication branch. This test is a useful failure-injection test that exercises the consumer rebalancing feature and various replication features such as leader election. It would be good to have this test fixed and run on every check-in to the replication branch.

      1. kafka-306-v1.patch
        13 kB
        John Fung
      2. kafka-306-v2.patch
        48 kB
        John Fung
      3. kafka-306-v3.patch
        51 kB
        John Fung
      4. kafka-306-v4.patch
        53 kB
        John Fung
      5. kafka-306-v5.patch
        55 kB
        John Fung
      6. kafka-306-v6.patch
        64 kB
        John Fung
      7. kafka-306-v7.patch
        66 kB
        John Fung
      8. kafka-306-v8.patch
        72 kB
        John Fung
      9. kafka-306-v9.patch
        72 kB
        John Fung


          Activity

          Neha Narkhede added a comment -

          KAFKA-45 marks the start of server-side replication related code changes. I think this test is a pretty good sanity check, if not a complete system testing suite. I would prefer having this fixed before accepting more patches on the 0.8 branch.

          John Fung added a comment -

          Broker Failure Test is broken in the Kafka 0.8 branch. This patch fixes the issues and contains the following changes:
          1. All server_*.properties are updated so that the first brokerid starts from '0'
          2. All mirror_producer*.properties are updated to use zk.connect (and not broker.list)
          3. After the source broker cluster is started, call kafka.admin.CreateTopicCommand to create the topic.
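          The property changes in items 1 and 2 above might look like this (values are illustrative; the actual hosts and file names come from the test setup):

```properties
# server_source1.properties -- item 1: first broker id starts from 0
brokerid=0

# mirror_producer1.properties -- item 2: use zk.connect, not broker.list
zk.connect=localhost:2181
```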

          Currently this patch works with branch 0.8 (rev. 1342841 patched with KAFKA-46) with the following workarounds:

          1. Before starting the target broker cluster, start and stop one target broker to eliminate the following error:
          org.I0Itec.zkclient.exception.ZkNoNodeException:
          org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /brokers/ids

          2. The argument "--consumer-timeout-ms" doesn't seem to work properly. The consumer processes will be terminated manually.

          3. Consumer lag info is not available from ZooKeeper. Therefore, extra sleep time is added to the test to wait for the complete consumption of messages.

          The above issues are being investigated.
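          A minimal sketch of an alternative to workaround 3's fixed sleep: poll the consumer's output file until it stops growing (the file name is hypothetical):

```shell
# Sketch: instead of a fixed sleep, poll the console consumer's output
# file until no new bytes arrive between checks. The file name is an
# assumption for illustration.
logfile=console_consumer.log
: > "$logfile"            # ensure the file exists for this sketch
prev=-1
while :; do
    cur=$(wc -c < "$logfile")
    if [ "$cur" -eq "$prev" ]; then
        break             # no growth since the last check
    fi
    prev=$cur
    sleep 1
done
echo "consumer appears caught up at $cur bytes"
```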

          Jun Rao added a comment -

          Sorry, just got to review this. Trunk has moved, could you rebase?

          For 1, do you know the cause of this? Is this a bug? If so, please create a jira.

          For 2,3, we just merged some changes from trunk to 0.8. Could you retry and see if this works now?

          John Fung added a comment -

          Uploaded kafka-306-v2.patch for branch 0.8 with the following changes:

          1. Removed the workaround code and comments for NoNodeException (which is not reproducible with the latest 0.8 code).
          2. The script can take a command line argument to bounce any combination of source broker, target broker and mirror maker in a round-robin fashion.
          3. Use "info", "kill_child_processes" methods from a common script "system_test/common/util.sh".
          4. Updated README.

          Jun Rao added a comment -

          Thanks for patch v2. Some comments:

          21. run_test.sh
          21.1 Does the following check in start_test() need to be repeated for source, mirror and target, or can we just use 1 check for all 3 cases?
          if [[ $num_iterations -ge $iter && $svr_idx -gt 0 ]]; then
              echo
              info "=========================================="
              info "Iteration $iter of ${num_iterations}"
              info "=========================================="
          21.2 Do we need to sleep for 30s at the end start_test()? Isn't calling wait_for_zero_consumer_lags enough? Also, the comment says sleep for 10s.
          21.3 In the header, we should add that the mirror maker can be terminated too.
          21.4 If the test fails, could we generate a list of missing messages in a file? Ideally, messages can just be strings with sequential numbers in them.

          22. The following test seems to fail sometimes.
          bin/run-test.sh 2 23

          23. README: We should add that one needs to do ./sbt package at the root level first.
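          Item 21.4 is easy to see with sequentially numbered messages: the missing messages are just the set difference between what was produced and what was consumed. A sketch with hypothetical file names:

```shell
# Sketch of 21.4: with sequential message numbers, the missing set is a
# simple file difference. producer.log / consumer.log are hypothetical.
seq 1 10 > producer.log                   # messages 1..10 were produced
seq 1 10 | grep -v '^4$' > consumer.log   # pretend message 4 was lost

# comm -23 prints lines that appear only in the first (sorted) input
comm -23 <(sort producer.log) <(sort consumer.log) > missing.log
cat missing.log                           # prints: 4
```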

          John Fung added a comment -

          Uploaded kafka-306-v3.patch with the following changes:

          1. Set the server_source*.properties - log file size to approx 10MB:
          log.file.size=10000000

          2. Set the server_target*.properties - log file size to approx 10MB:
          log.file.size=10000000

          John Fung added a comment -

          Hi Jun,

          Thanks for reviewing kafka-306-v2.patch.

          kafka-306-v4.patch is uploaded with the following changes suggested by you:

          21.1 The following check is required for each of the source, target and mirror maker, because the following 2 lines are needed to:
          Line 1: find out if $bounce_source_id is a char in the string $svr_to_bounce
          Line 2: check whether $num_iterations has already been reached and whether $svr_idx > 0 (meaning this server needs to be bounced)

          Line 1: svr_idx=`expr index $svr_to_bounce $bounce_source_id`
          Line 2: if [[ $num_iterations -ge $iter && $svr_idx -gt 0 ]]; then

          21.2 ConsumerOffsetChecker needs to be enhanced for 0.8 and it depends on KAFKA-313. "sleep" is temporarily used for kafka to catch up with the offset lags.

          21.3 The header is now updated to "#### Starting Kafka Broker / Mirror Maker Failure Test ####"

          21.4 There is a file "checksum.log" generated at the end of the test which will give the checksums found in producer, source consumer, target consumer logs

          22. You may see intermittent failures in this test due to the issue described in KAFKA-370

          23. README is updated with the steps for ./sbt package

          Thanks,
          John
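          The expr index check explained in 21.1 can be demonstrated in isolation (values are assumed for illustration; note that `expr index` is a GNU extension):

```shell
# Sketch of the 21.1 bounce-selection check with assumed values.
# svr_to_bounce lists the server ids to bounce ("13" = servers 1 and 3);
# bounce_source_id is the id of the server being considered.
svr_to_bounce="13"
bounce_source_id="3"
iter=1
num_iterations=5

# expr index STRING CHARS: 1-based position in STRING of the first
# character of CHARS found in it, or 0 if none is (GNU extension).
svr_idx=`expr index "$svr_to_bounce" "$bounce_source_id"`

if [[ $num_iterations -ge $iter && $svr_idx -gt 0 ]]; then
    echo "server $bounce_source_id will be bounced (index $svr_idx)"
fi
```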

          Jun Rao added a comment -

          Thanks for patch v4. A few more comments:

          21.2 KAFKA-313 adds 2 more options; which option does this jira depend on?

          21.3 I meant that we should add mirror maker in the following line in the header:

          5. One of the Kafka SOURCE or TARGET brokers in the cluster will
             be randomly terminated and waiting for the consumer to catch up.

          21.4 Instead of using checksum, can we use the message string itself? This makes it a bit easier to figure out the missing messages, if any.

          22. Just attached a patch to kafka-370. Could you give it a try?

          John Fung added a comment -
          Uploaded kafka-306-v5.patch with the following changes:

          1. In "initialize" function, added code to find the location of the zk & kafka log4j log files.

          2. In "cleanup" function, added code to remove the zk & kafka log4j log files

          3. The header of the script is now removed and the descriptions are in the README

          4. Use getopt to process command line arguments

          5. Consolidated the following functions:

          • start_console_consumer_for_source_producer
          • start_console_consumer_for_mirror_producer
          • wait_for_zero_source_console_consumer_lags
          • wait_for_zero_mirror_console_consumer_lags

          6. A file is used to notify the producer to stop:
          The producer is sent to the background to run in a while-loop. When the notification file appears, the producer exits properly inside the while loop.

          7. The following check is required for each of the source, target and mirror maker, because the following 2 lines are needed to:

          • Line 1: find out if the $bounce_source_id is a char in the string $svr_to_bounce
          • Line 2: check to see if $num_iterations is already reached and if $svr_idx > 0 (meaning this server needs to be bounced)
          • Line 1: svr_idx=`expr index $svr_to_bounce $bounce_source_id`
          • Line 2: if [[ $num_iterations -ge $iter && $svr_idx -gt 0 ]]; then
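          The stop-file mechanism in item 6 might be sketched like this (file and function names are hypothetical):

```shell
# Sketch of item 6: the producer loops in the background and exits
# cleanly when a notification file appears. All names are illustrative.
stopfile="/tmp/stop_producer.$$"
rm -f "$stopfile"

produce_messages() {
    while [ ! -e "$stopfile" ]; do
        echo "producing a batch..."    # stand-in for the real producer
        sleep 1
    done
}

produce_messages &
producer_pid=$!

sleep 2
touch "$stopfile"          # notify the background producer to stop
wait "$producer_pid"       # it exits at the top of its while-loop
rm -f "$stopfile"
echo "producer stopped cleanly"
```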
          In reply to Jun's question about KAFKA-313: in this script, the function "wait_for_zero_consumer_lag" calls ConsumerOffsetChecker to get the consumer lag value. However, the topic-partition info is changed in 0.8 and is not returned correctly by ConsumerOffsetChecker. Please refer to this comment: https://issues.apache.org/jira/browse/KAFKA-313?focusedCommentId=13397990&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13397990
          John Fung added a comment - edited

          Uploaded kafka-306-v6.patch and made further changes in ProducerPerformance and ConsoleConsumer to support producing sequential message IDs such that it would be easier to troubleshoot data loss.

          ProducerPerformance.scala

          • Added command line option "--seq-id-starting-from". This option enables "seqIdMode" with the following changes:
          • Every message will be tagged with a sequential message ID such that all IDs are unique
          • Every message will be sent by its own producer thread sequentially
          • Each producer thread will use a unique range of numbers to give sequential message IDs
          • All message IDs are left-padded with 0s for easier troubleshooting
          • Extra characters are added to the message to make up the required message size

          ConsoleConsumer.scala

          • Added DecodedMessageFormatter class to display message contents

          run-test.sh

          • Modified to use the enhanced ProducerPerformance and ConsoleConsumer
          • Validate "MessageID" instead of "checksum" for data matching between source and target consumers
          Joel Koshy added a comment -

          John, thanks for the patch. The test script itself looks good - as we
          discussed on the other jira we can do further cleanup separately. Here are
          some comments on the new changes:

          ProducerPerformance:

          • seqIdStartFromOpt -> startId or initialId would be more
            convenient/intuitive.
          • May be better not to describe the message format in detail in the help
            message. I think the template: "Message:000..1:xxx..." is good enough.
          • On line 136, 137 I think you mean if (options.has) and not
            if(!options.has) - something odd there. Can you double-check?
          • Try to avoid using vars if possible. vals are generally clearer and safer
          • for example,
            val isFixSize = options.has(seqIdStartFromOpt) || !options.has(varyMessageSizeOpt)
            val numThreads = if (options.has(seqIdStartFromOpt)) 1 else options.valueOf(numThreadsOpt).intValue()
            etc.
          • For user-specified options that you override can you log a warning?
          • Instead of the complicated padding logic I think you can get it for free
            with Java format strings - i.e., specify the width/justification of each
            column in the format string. That would be much easier I think.
          • numThreads override to 1 -> did it work to prefix the id with thread-id
            and allow > 1 thread?

          Server property files:

          • send/receive.buffer.size don't seem to be valid config options - may be
            deprecated by the socket buffer size settings, but not sure.

          Util functions:

          • Small suggestion: would be better to echo the result than return. So you
            can have: idx=$(get_random_range ...) which is clearer than
            get_random_range; idx=$? . Also, non-zero bash returns typically indicate
            an error.
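          Joel's echo-versus-return point can be sketched as follows (the function body is a hypothetical stand-in):

```shell
# Sketch: echo the result and capture it with command substitution,
# instead of smuggling it through the numeric return status.
get_random_range() {
    local lo=$1 hi=$2
    echo $(( lo + RANDOM % (hi - lo + 1) ))
}

idx=$(get_random_range 1 3)    # clearer than: get_random_range 1 3; idx=$?
echo "picked server index $idx"
```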
          John Fung added a comment -

          Hi Joel,

          Thanks for reviewing. I just uploaded kafka-306-v7.patch with the changes you suggested:

          ProducerPerformance
          ===================

          • seqIdStartFromOpt -> startId or initialId would be more convenient/intuitive.
          • Changed
          • May be better not to describe the message format in detail in the help message. I think the template: "Message:000..1:xxx..." is good enough.
          • Changed
          • On line 136, 137 I think you mean if (options.has) and not if(!options.has) - something odd there. Can you double-check?
          • In "seqIdMode", if "numThreadsOpt" is not specified, numThreads default to 1. Otherwise, it will take the user specified value
          • Try to avoid using vars if possible. vals are generally clearer and safer, for example,
            val isFixSize = options.has(seqIdStartFromOpt) || !options.has(varyMessageSizeOpt)
            val numThreads = if (options.has(seqIdStartFromOpt)) 1 else options.valueOf(numThreadsOpt).intValue()
          • This is because the values may be overridden later by user-specified values. Therefore, some of the vals are changed to vars
          • For user-specified options that you override can you log a warning?
          • Changed
          • Instead of the complicated padding logic I think you can get it for free with Java format strings - i.e., specify the width/justification of each column in the format string. That would be much easier I think.
          • Changed
          • numThreads override to 1 -> did it work to prefix the id with thread-id and allow > 1 thread?
          • numThreads will be overridden if "--threads" is specified in command line arg

          Server property files
          =====================

          • send/receive.buffer.size don't seem to be valid config options - may be deprecated by the socket buffer size settings, but not sure.
          • Changed

          Util functions
          ==============

          • Small suggestion: would be better to echo the result than return.
            So you can have: idx=$(get_random_range ...) which is clearer than get_random_range; idx=$? .
            Also, non-zero bash returns typically indicate an error.
          • Changed
          John Fung added a comment -

          Uploaded kafka-306-v8.patch.

          The changes made in the previous patch (kafka-306-v7.patch) break single_host_multi_brokers/bin/run-test.sh because ProducerPerformance no longer prints the message checksum.

          The changes in this patch allow single_host_multi_brokers/bin/run-test.sh to make use of the sequential message ID for test results validation.

          Joel Koshy added a comment -

          Thanks for making the changes - looks better.

          > - This is because the values may be overridden later by user specified
          > values. Therefore, some of the val is changed to var

          I meant even with overrides I don't think you need these vars and they can
          be handled better with vals. However, it's a minor issue and looking at
          ProducerPerformance it seems it needs an overhaul - the main loop is pretty hard to
          read. We should probably do that in a separate jira as it isn't directly
          related to this one.

          BTW, it seems bytesSent is not updated in seqIdMode.

          Jun Rao added a comment -

          Patch v8 looks good overall. Some minor comments on ProducerPerformance:

          81. Could we default numThreadsOpt to 1? Then we can get rid of the following override.

          if (!options.has(numThreadsOpt)) {
            numThreads = 1
            warn("seqIdMode - numThreads is overridden to: " + numThreads)
          }

          82. Could we replace the following code

          if (config.seqIdMode) {
            producer.send(new ProducerData[Message,Message](config.topic, null, message))
          } else if(!config.isFixSize) {

          with

          if(!config.isFixSize || !config.seqIdMode) {

          John Fung added a comment -

          Thanks Jun for reviewing. Your suggestions are incorporated in kafka-306-v9.patch.

          The changes are:
          91. numThreadsOpt is defaulted to 1 and the 'if' block is removed

          92. The following block is actually not necessary and it's now removed:

          if (config.seqIdMode) {
            producer.send(new ProducerData[Message,Message](config.topic, null, message))
          }

          Jun Rao added a comment -

          John, thanks for patch v9. Removed the commented out code in ProducerPerformance and committed to 0.8.


            People

            • Assignee:
              John Fung
              Reporter:
              Neha Narkhede
            • Votes: 0
              Watchers: 3
