FLINK-24595

Programmatic configuration of S3 doesn't pass parameters to Hadoop FS


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 1.14.0
    • Fix Version/s: None
    • Component/s: None

    Description

      When running in mini-cluster mode, Flink apparently doesn't pass the S3 configuration to the underlying Hadoop FS. With code like this:

      import org.apache.flink.configuration.Configuration;
      import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

      // S3 settings that should reach the underlying Hadoop S3A filesystem
      Configuration conf = new Configuration();
      conf.setString("s3.endpoint", "http://localhost:4566");
      conf.setString("s3.aws.credentials.provider", "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider");
      conf.setString("s3.access.key", "harvester");
      conf.setString("s3.secret.key", "harvester");
      StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment(conf);
      

      the application fails with an exception, the most relevant part of which is:

      Caused by: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider InstanceProfileCredentialsProvider : com.amazonaws.SdkClientException: Failed to connect to service endpoint:

      So Hadoop falls back to its whole default credential provider chain, when it should use only the provider set in the configuration. A full project that reproduces this behaviour is available at https://github.com/PavelPenkov/flink-s3-conf, and the relevant files are attached to this issue.
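
      A possible workaround (a sketch only, not verified as the intended fix, and assuming flink-s3-fs-hadoop is on the classpath) is to hand the configuration to Flink's filesystem registry explicitly before creating the environment, via the public FileSystem.initialize entry point, so that the s3:// scheme is resolved with these settings:

      import org.apache.flink.core.fs.FileSystem;

      // Register filesystem factories with the S3 settings explicitly.
      // Passing null for the PluginManager makes Flink discover filesystem
      // factories on the classpath instead of from a plugins directory.
      FileSystem.initialize(conf, null);
      StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment(conf);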

      Attachments

        1. FlinkApp.java (2 kB, Pavel Penkov)
        2. TickingSource.java (0.5 kB, Pavel Penkov)
        3. flink_exception.txt (7 kB, Pavel Penkov)


          People

            Assignee: Unassigned
            Reporter: Pavel Penkov (pavel.penkov)
            Votes: 0
            Watchers: 2
