Kafka / KAFKA-9747

No tasks created for a connector


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.4.0
    • Fix Version/s: 3.1.0, 2.8.1, 3.0.1
    • Component/s: KafkaConnect
    • Labels: None
    • Environment:
      OS: Ubuntu 18.04 LTS
      Platform: Confluent Platform 5.4
      HW: the same behaviour on various AWS instances, from t3.small to c5.xlarge

    Description

      We are running Kafka Connect in distributed mode on 3 nodes, using the Debezium (MongoDB) and Confluent S3 connectors. When a new connector is added via the REST API, it is created in the RUNNING state, but no tasks are created for it.

      Pausing and resuming the connector does not help. When we stop all workers and then start them again, the tasks are created and everything runs as it should.
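      The pause/resume cycle was driven through the standard Kafka Connect REST API. As a sketch, assuming the same `<connect_host>` placeholder and port 10083 used for the create call later in this report (note that the pipe character in the connector name must be URL-encoded as %7C in the request path):

      # Pause the connector (returns 202 Accepted)
      curl -X PUT "http://<connect_host>:10083/connectors/qa-mongodb-comp-converter-task%7C1/pause"

      # Resume it; in this case the connector returns to RUNNING, still with no tasks
      curl -X PUT "http://<connect_host>:10083/connectors/qa-mongodb-comp-converter-task%7C1/resume"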

      The issue does not show up if we run only a single node.

      The issue is not caused by the connector plugins, because we see the same behaviour for both the Debezium and S3 connectors. The debug logs also show that Debezium correctly returns a task configuration from the Connector.taskConfigs() method.

      Connector configuration examples

      Debezium:

      {
        "name": "qa-mongodb-comp-converter-task|1",
        "config": {
          "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
          "mongodb.hosts": "mongodb-qa-001:27017,mongodb-qa-002:27017,mongodb-qa-003:27017",
          "mongodb.name": "qa-debezium-comp",
          "mongodb.ssl.enabled": true,
          "collection.whitelist": "converter[.]task",
          "tombstones.on.delete": true
        }
      }
      

      S3 Connector:

      {
        "name": "qa-s3-sink-task|1",
        "config": {
          "connector.class": "io.confluent.connect.s3.S3SinkConnector",
          "topics": "qa-debezium-comp.converter.task",
          "topics.dir": "data/env/qa",
          "s3.region": "eu-west-1",
          "s3.bucket.name": "<bucket-name>",
          "flush.size": "15000",
          "rotate.interval.ms": "3600000",
          "storage.class": "io.confluent.connect.s3.storage.S3Storage",
          "format.class": "custom.kafka.connect.s3.format.plaintext.PlaintextFormat",
          "schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
          "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
          "schema.compatibility": "NONE",
          "key.converter": "org.apache.kafka.connect.json.JsonConverter",
          "value.converter": "org.apache.kafka.connect.json.JsonConverter",
          "key.converter.schemas.enable": false,
          "value.converter.schemas.enable": false,
          "transforms": "ExtractDocument",
          "transforms.ExtractDocument.type":"custom.kafka.connect.transforms.ExtractDocument$Value"
        }
      }
      

      The connectors are created using curl:

      curl -X POST -H "Content-Type: application/json" --data @<json_file> http://<connect_host>:10083/connectors
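      After creating a connector this way, its state and (missing) tasks can be inspected with the corresponding GET endpoints. A sketch under the same assumptions (`<connect_host>` is a placeholder, and the pipe in the connector name is URL-encoded as %7C):

      # List all connectors known to the cluster
      curl -s "http://<connect_host>:10083/connectors"

      # Connector and task state; in this report the connector is RUNNING
      # but the "tasks" array stays empty
      curl -s "http://<connect_host>:10083/connectors/qa-s3-sink-task%7C1/status"

      # Task configurations (empty while no tasks have been created)
      curl -s "http://<connect_host>:10083/connectors/qa-s3-sink-task%7C1/tasks"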


          People

            Assignee: Andras Katona (akatona)
            Reporter: Vit Koma (vko)
            Votes: 12
            Watchers: 21

