Details

    • Type: Task
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 0.8.0
    • Fix Version/s: 0.8.0
    • Component/s: None
    • Labels:
      None
    1. kafka328.notes.2.txt
      23 kB
      Joe Stein
    2. kafka-348_ProducerSendThread_KafkaApis.patch
      2 kB
      Jun Rao
    3. kafka348.patch.1.txt
      233 kB
      Joe Stein
    4. kafka348.patch.2.txt
      231 kB
      Joe Stein
    5. kafka348.patch.3.txt
      297 kB
      Joe Stein
    6. kafka348.patch.4.txt
      282 kB
      Joe Stein
    7. KAFKA-348.v3.notes.txt
      28 kB
      Joe Stein
    8. KAFKA-348.v4.notes.txt
      48 kB
      Joe Stein
    9. kafka-348-delta.patch
      7 kB
      Jun Rao

      Activity

      Joe Stein added a comment - edited

      a few issues:

      1) a conflict in ZookeeperConsumerConnector related to releasePartitionOwnership and Kafka tickets KAFKA-300, KAFKA-239, and KAFKA-286

      2) a bunch of errors from the merge related to other tickets stepping on each other, though with no specific conflict:

      [info] Compiling main sources...
      [error] /Users/josephstein/apache/0.8_kafka/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala:556: type mismatch;
      [error] found : kafka.utils.Pool[String,kafka.utils.Pool[Int,kafka.consumer.PartitionTopicInfo]]
      [error] required: kafka.utils.Pool[String,kafka.utils.Pool[kafka.cluster.Partition,kafka.consumer.PartitionTopicInfo]]
      [error] addPartitionTopicInfo(currentTopicRegistry, topicDirs, partition, topic, consumerThreadId)
      [error] ^
      [error] /Users/josephstein/apache/0.8_kafka/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala:661: not found: value partition
      [error] val leaderOpt = getLeaderForPartition(zkClient, topic, partition.toInt)
      [error] ^
      [error] /Users/josephstein/apache/0.8_kafka/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala:664: not found: value partition
      [error] format(partition, topic))
      [error] ^
      [error] /Users/josephstein/apache/0.8_kafka/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala:665: not found: value partition
      [error] case Some(l) => debug("Leader for partition %s for topic %s is %d".format(partition, topic, l))
      [error] ^
      [error] /Users/josephstein/apache/0.8_kafka/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala:669: not found: value partition
      [error] val znode = topicDirs.consumerOffsetDir + "/" + partition
      [error] ^
      [error] /Users/josephstein/apache/0.8_kafka/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala:676: not found: value partition
      [error] earliestOrLatestOffset(topic, leader, partition.toInt, OffsetRequest.EarliestTime)
      [error] ^
      [error] /Users/josephstein/apache/0.8_kafka/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala:678: not found: value partition
      [error] earliestOrLatestOffset(topic, leader, partition.toInt, OffsetRequest.LatestTime)
      [error] ^
      [error] /Users/josephstein/apache/0.8_kafka/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala:691: not found: value partition
      [error] partition.toInt,
      [error] ^
      [error] /Users/josephstein/apache/0.8_kafka/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala:696: not found: value partition
      [error] partTopicInfoMap.put(partition.toInt, partTopicInfo)
      [error] ^
      [error] /Users/josephstein/apache/0.8_kafka/core/src/main/scala/kafka/producer/SyncProducer.scala:62: value produces is not a member of kafka.api.ProducerRequest
      [error] for (produce <- request.produces) {
      [error] ^
      [error] /Users/josephstein/apache/0.8_kafka/core/src/main/scala/kafka/producer/SyncProducer.scala:145: not found: type MessageSizeTooLargeException
      [error] throw new MessageSizeTooLargeException
      [error] ^
      [error] /Users/josephstein/apache/0.8_kafka/core/src/main/scala/kafka/server/KafkaApis.scala:100: type mismatch;
      [error] found : kafka.message.MessageSet
      [error] required: kafka.message.ByteBufferMessageSet
      [error] log.append(partitionData.messages)
      [error] ^
      [error] 12 errors found

      All other conflicts are resolved. I will take a look at the errors tomorrow, but if someone else can also take a look that would be great. It seems like integrating these last few files is going to be a little more involved, making sure not to lose any features and such.

      Jun Rao added a comment -

      Joe,

      Thanks for helping out. The patch seems to miss some of the new classes created in trunk (e.g., KafkaStream, TopicFilter, etc.).

      Jun Rao added a comment -

      For addPartitionTopicInfo(currentTopicRegistry, topicDirs, partition, topic, consumerThreadId):
      In 0.8, the method doesn't take topicRegistry as an input any more, but uses it inside the method directly. Also, in 0.8, we are using the partitionId as the key in the inner map of topicRegistry, instead of the Partition object.
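      A minimal sketch of that 0.8-style shape, using plain Scala mutable maps as a stand-in for kafka.utils.Pool (all names here are illustrative, not Kafka's actual code):

```scala
import scala.collection.mutable

// Stand-in for kafka.consumer.PartitionTopicInfo (hypothetical, for illustration).
case class PartitionTopicInfo(topic: String, partitionId: Int)

// 0.7-style inner maps were keyed by a Partition object; in 0.8 the inner map
// is keyed by the plain partition id (Int).
val topicRegistry = mutable.Map.empty[String, mutable.Map[Int, PartitionTopicInfo]]

// A 0.8-style addPartitionTopicInfo updates the shared registry directly
// instead of receiving it as a parameter.
def addPartitionTopicInfo(topic: String, partitionId: Int): Unit = {
  val inner = topicRegistry.getOrElseUpdate(topic, mutable.Map.empty)
  inner.put(partitionId, PartitionTopicInfo(topic, partitionId))
}

addPartitionTopicInfo("topic1", 0)
println(topicRegistry("topic1")(0))  // PartitionTopicInfo(topic1,0)
```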

      Joe Stein added a comment -

      << The patch seems to miss some of the new classes created in trunk (e.g., KafkaStream, TopicFilter, etc).

      odd, I have them locally, and when trying to add them I get a warning saying they are already under version control. I wonder if it is some odd svn thing not getting picked up in the diff, since the merge has them in the branch now, and

      svn status shows them in there, so they will end up getting committed:

      A + core/src/main/scala/kafka/consumer/KafkaStream.scala
      A + core/src/main/scala/kafka/consumer/TopicFilter.scala

      Along with the other added files, they show up in svn status. Very weird that they are not showing through the diff... ugh, will look into that, but I want to get these errors cleared first.
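      For reference (my reading of svn's status codes, which explains the behavior above): the '+' next to the 'A' means "added with history", i.e. the file is scheduled as a copy of an existing versioned path rather than a brand-new file, which is why a plain svn diff skips its contents:

```shell
svn status
#  A  +   core/src/main/scala/kafka/consumer/KafkaStream.scala
#  A  +   core/src/main/scala/kafka/consumer/TopicFilter.scala
#
# 'A' = scheduled for addition; '+' = the addition carries copy history,
# so older svn diff versions emit no content for these files.
```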

      << For addPartitionTopicInfo(currentTopicRegistry, topicDirs, partition, topic, consumerThreadId):
      << In 0.8, the method doesn't take topicRegistry in the input any more, but uses it in the method directly. Also, in 0.8, we are using the partitionId as the key in the inner map of topicRegistry, instead of the Partition object.

      thanks, this is helpful for going through some of the nuances. I should be able to work through these more later this evening, along with the conflict for releasePartitionOwnership.

      Joe Stein added a comment -

      ok, I started again from scratch and took detailed notes on all conflicts from the merge and how each was resolved.

      Attached are

      1) my notes on all the conflicts encountered
      2) the latest patch. Please note that I figured out why the files "added" to trunk are not showing up in the diff from the merge: these files are actually not new, but direct descendants of existing files in the repo. Newer versions of the SVN client have a flag called --show-copies-as-adds so that these files will show up in the diff. I will update a machine to the latest SVN client version and make a new patch so it is wholly complete before commit.

      I did want to post everything for where it is now because:

      1) there is one last error

      [info] Compiling main sources...
      [error] /Users/josephstein/rebase_kafka/rebase_0.8/core/src/main/scala/kafka/server/KafkaApis.scala:100: type mismatch;
      [error] found : kafka.message.MessageSet
      [error] required: kafka.message.ByteBufferMessageSet
      [error] log.append(partitionData.messages)
      [error] ^
      [error] one error found

      this is a result of KAFKA-310 on the trunk changing the append function in Log.scala https://github.com/apache/kafka/commit/2bcc91134316a546322bf59a422bec1934613607

      and the use of that function in KAFKA-49

      https://github.com/apache/kafka/commit/1461a3877f34db9ac5ce1f809d53bc353df24b01

      any thoughts?

      2) review of everything going in

      I will work on updating svn so I can produce a patch with the merged copies that came from trunk as additions to this branch, and also mull over this error. Maybe someone knows a fix here?

      Joe Stein added a comment -

      I updated to the latest svn and produced a patch that includes the new files that were on the trunk (using --show-copies-as-adds); since they are not technically new to the tree, the older SVN client could not represent them in a diff.
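      For anyone reproducing this, the command is presumably along these lines (the flag is available in Subversion 1.7 and later; the output filename here is just an example):

```shell
# Emit copied/moved files as full additions instead of relying on copy
# history, so the generated patch contains their complete content.
svn diff --show-copies-as-adds > kafka-348.patch
```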

      I also updated and integrated Prashanth's commit from this morning, so the rebase is complete (added that to the notes).

      Still hung up on how to fix this error:

      [error] /Users/josephstein/rebase_kafka/rebase_0.8/core/src/main/scala/kafka/server/KafkaApis.scala:99: type mismatch;
      [error] found : kafka.message.MessageSet
      [error] required: kafka.message.ByteBufferMessageSet
      [error] log.append(partitionData.messages)
      [error] ^
      [error] one error found

      I don't know enough about KAFKA-310, but if there is a way to change that back to use MessageSet again, we wouldn't have to re-wire the new PartitionData structures, unless someone knows of a way, or I can spend time figuring out a way to convert them (it seems clunky converting an abstract base class into the child class...).

      Joe Stein added a comment - edited

      I got past this error by changing

      log.append(partitionData.messages)

      into

      log.append(new ByteBufferMessageSet(partitionData.messages.getSerialized))

      Now the rest of the tests are compiling; 46 more errors, working through them now (likely just a messed-up merge of the conflicted imports).

      Jun Rao added a comment -

      Joe,

      PartitionData has to use MessageSet since FetchRequest creates PartitionData with FileMessageSet, instead of ByteBufferMessageSet. However, we know that in ProduceRequest, PartitionData always uses ByteBufferMessageSet. So, we can just do log.append( (ByteBufferMessageSet) partitionData.messages).
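      In Scala, that Java-style cast would be spelled with asInstanceOf (or, more defensively, a pattern match). A self-contained sketch with stand-in classes, not Kafka's real API:

```scala
// Stand-ins for Kafka's message classes (illustrative only).
abstract class MessageSet
class ByteBufferMessageSet extends MessageSet

val messages: MessageSet = new ByteBufferMessageSet

// Scala spelling of the cast (ByteBufferMessageSet) partitionData.messages:
val bbms = messages.asInstanceOf[ByteBufferMessageSet]

// A safer variant: pattern match, failing loudly if a FetchRequest-style
// FileMessageSet ever shows up where a produce request was expected.
messages match {
  case b: ByteBufferMessageSet => println("ok: " + b.getClass.getSimpleName)
  case other => sys.error("unexpected MessageSet: " + other.getClass)
}
```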

      Joe Stein added a comment -

      Looks like I am down to 1 failing test case.

      Wanted to post where I am; I'm probably not able to look at this again until tomorrow evening or the next day.

      [error] Test Failed: testQueueTimeExpired(kafka.producer.AsyncProducerTest)
      java.lang.AssertionError:
      Expectation failure on verify:
      handle(ListBuffer(ProducerData(topic1,null,List(msg0)), ProducerData(topic1,null,List(msg1)))): expected: 1, actual: 0
      at org.easymock.internal.MocksControl.verify(MocksControl.java:184)
      at org.easymock.EasyMock.verify(EasyMock.java:2038)
      at kafka.producer.AsyncProducerTest.testQueueTimeExpired(AsyncProducerTest.scala:163)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:597)
      at junit.framework.TestCase.runTest(TestCase.java:164)
      at junit.framework.TestCase.runBare(TestCase.java:130)
      at junit.framework.TestResult$1.protect(TestResult.java:110)
      at junit.framework.TestResult.runProtected(TestResult.java:128)
      at junit.framework.TestResult.run(TestResult.java:113)
      at junit.framework.TestCase.run(TestCase.java:120)
      at junit.framework.TestSuite.runTest(TestSuite.java:228)
      at junit.framework.TestSuite.run(TestSuite.java:223)
      at junit.framework.TestSuite.runTest(TestSuite.java:228)
      at junit.framework.TestSuite.run(TestSuite.java:223)
      at org.scalatest.junit.JUnit3Suite.run(JUnit3Suite.scala:309)
      at org.scalatest.tools.ScalaTestFramework$ScalaTestRunner.run(ScalaTestFramework.scala:40)
      at sbt.TestRunner.run(TestFramework.scala:53)
      at sbt.TestRunner.runTest$1(TestFramework.scala:67)
      at sbt.TestRunner.run(TestFramework.scala:76)
      at sbt.TestFramework$$anonfun$10$$anonfun$apply$11.runTest$2(TestFramework.scala:194)
      at sbt.TestFramework$$anonfun$10$$anonfun$apply$11$$anonfun$apply$12.apply(TestFramework.scala:205)
      at sbt.TestFramework$$anonfun$10$$anonfun$apply$11$$anonfun$apply$12.apply(TestFramework.scala:205)
      at sbt.NamedTestTask.run(TestFramework.scala:92)
      at sbt.ScalaProject$$anonfun$sbt$ScalaProject$$toTask$1.apply(ScalaProject.scala:193)
      at sbt.ScalaProject$$anonfun$sbt$ScalaProject$$toTask$1.apply(ScalaProject.scala:193)
      at sbt.TaskManager$Task.invoke(TaskManager.scala:62)
      at sbt.impl.RunTask.doRun$1(RunTask.scala:77)
      at sbt.impl.RunTask.runTask(RunTask.scala:85)
      at sbt.impl.RunTask.sbt$impl$RunTask$$runIfNotRoot(RunTask.scala:60)
      at sbt.impl.RunTask$$anonfun$runTasksExceptRoot$2.apply(RunTask.scala:48)
      at sbt.impl.RunTask$$anonfun$runTasksExceptRoot$2.apply(RunTask.scala:48)
      at sbt.Distributor$Run$Worker$$anonfun$2.apply(ParallelRunner.scala:131)
      at sbt.Distributor$Run$Worker$$anonfun$2.apply(ParallelRunner.scala:131)
      at sbt.Control$.trapUnit(Control.scala:19)
      at sbt.Distributor$Run$Worker.run(ParallelRunner.scala:131)

      Jun Rao added a comment -

      Joe,

      Thanks a lot for your help. Really appreciate it. I fixed the broken unit test and addressed my earlier comment on converting MessageSet to ByteBufferMessageSet. To get the fix, please:
      a. apply your patch
      b. revert ProducerSendThread.scala and KafkaApis.scala
      c. apply the new patch I attached.

      Another comment. Please remove unused imports in the following classes:
      BackwardsCompatibilityTest
      KafkaETLContext
      KafkaLog4jAppender
      KafkaLog4jAppenderTest
      KafkaRequestHandler
      ServerShutdownTest
      SimpleConsumerDemo
      TestUtils

      Other than that, the patch is ready for commit. Since this patch has a lot of changes, we probably should let you commit this before KAFKA-46.

      Joe Stein added a comment -

      committed

      +1 to dual commits

      Jun Rao added a comment -

      Reopening the JIRA. Found a few missed merges and included them in kafka-348-delta.patch.

      Joe Stein added a comment -

      thanks Jun, reviewed and committed


        People

        • Assignee:
          Joe Stein
          Reporter:
          Joe Stein
        • Votes:
          0
          Watchers:
          2
