Kafka / KAFKA-7242

Externalized secrets are revealed in task configuration

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.0.1, 2.1.0
    • Component/s: None
    • Labels: None

      Description

      While trying out the new externalized secrets feature, I noticed that the task configuration is saved in the config topic with the secrets disclosed. It seems the main goal of the feature was not achieved - secrets are still persisted in plain text. Perhaps I'm misusing this new config; please correct me if I'm wrong.

      I'm running Connect in distributed mode and creating a connector with the following config:

      {
        "name" : "jdbc-sink-test",
        "config" : {
          "connector.class" : "io.confluent.connect.jdbc.JdbcSinkConnector",
          "tasks.max" : "1",
          "config.providers" : "file",
          "config.providers.file.class" : "org.apache.kafka.common.config.provider.FileConfigProvider",
          "config.providers.file.param.secrets" : "/opt/mysecrets",
          "topics" : "test_topic",
          "connection.url" : "${file:/opt/mysecrets:url}",
          "connection.user" : "${file:/opt/mysecrets:user}",
          "connection.password" : "${file:/opt/mysecrets:password}",
          "insert.mode" : "upsert",
          "pk.mode" : "record_value",
          "pk.field" : "id"
        }
      }
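
      For context, FileConfigProvider reads the referenced secrets file as a Java properties file, so /opt/mysecrets would contain one key=value pair per placeholder key. The values below are made-up stand-ins, not the actual secrets:

```properties
# /opt/mysecrets - read by FileConfigProvider as a Java properties file
# (all values here are hypothetical placeholders)
url=jdbc:postgresql://dbhost:5432/datawarehouse
user=dbuser
password=dbpassword
```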
      

      The connector works fine and the placeholders are substituted with the correct values from the file, but the updated config is then written back into the topic (see the 3 following records in the config topic):

      key: connector-jdbc-sink-test
      value:
      {
        "properties": {
          "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
          "tasks.max": "1",
          "config.providers": "file",
          "config.providers.file.class": "org.apache.kafka.common.config.provider.FileConfigProvider",
          "config.providers.file.param.secrets": "/opt/mysecrets",
          "topics": "test_topic",
          "connection.url": "${file:/opt/mysecrets:url}",
          "connection.user": "${file:/opt/mysecrets:user}",
          "connection.password": "${file:/opt/mysecrets:password}",
          "insert.mode": "upsert",
          "pk.mode": "record_value",
          "pk.field": "id",
          "name": "jdbc-sink-test"
        }
      }
      
      
      key: task-jdbc-sink-test-0
      value:
      {
        "properties": {
          "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
          "config.providers.file.param.secrets": "/opt/mysecrets",
          "connection.password": "actualpassword",
          "tasks.max": "1",
          "topics": "test_topic",
          "config.providers": "file",
          "pk.field": "id",
          "task.class": "io.confluent.connect.jdbc.sink.JdbcSinkTask",
          "connection.user": "datawarehouse",
          "name": "jdbc-sink-test",
          "config.providers.file.class": "org.apache.kafka.common.config.provider.FileConfigProvider",
          "connection.url": "jdbc:postgresql://actualurl:5432/datawarehouse?stringtype=unspecified",
          "insert.mode": "upsert",
          "pk.mode": "record_value"
        }
      }
      
      key: commit-jdbc-sink-test
      value:
      {
        "tasks": 1
      }
      
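
      The resolved values in the task-jdbc-sink-test-0 record come from the worker substituting each ${file:path:key} placeholder with the value read from the secrets file before the task config is written. A minimal Python sketch of that substitution, as an illustration only (the real implementation is Kafka's Java ConfigTransformer, not this code):

```python
import re

# Matches placeholders of the form ${provider:path:key}.
PLACEHOLDER = re.compile(r"\$\{(?P<provider>[^:}]+):(?P<path>[^:}]*):(?P<key>[^}]+)\}")

def load_properties(text):
    """Parse a simple Java-style properties file (one key=value per line)."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def transform(config, files):
    """Resolve file-provider placeholders in a connector config.

    `files` maps a path to its properties-file content, standing in for
    reading a file such as /opt/mysecrets from disk.
    """
    def resolve(match):
        if match.group("provider") != "file":
            return match.group(0)  # unknown provider: leave placeholder as-is
        props = load_properties(files[match.group("path")])
        return props[match.group("key")]
    return {k: PLACEHOLDER.sub(resolve, v) for k, v in config.items()}

# Hypothetical secrets file content, mirroring the config above.
secrets = "url=jdbc:postgresql://dbhost:5432/dw\nuser=dbuser\npassword=s3cret\n"
task_config = transform(
    {"connection.password": "${file:/opt/mysecrets:password}"},
    {"/opt/mysecrets": secrets},
)
print(task_config["connection.password"])  # prints the resolved plain-text value
```

      It is this resolved dictionary, with secrets substituted in, that ends up persisted in the config topic record shown above.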

      Please advise: have I misunderstood the goal of this feature, have I missed something in the configuration, or is it actually a bug? Thank you.

              People

              • Assignee:
                Robert Yokota (rayokota)
              • Reporter:
                Bahdan Siamionau (bsiamionau)
              • Votes:
                0
              • Watchers:
                4
