Hive / HIVE-16446

org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified by setting the fs.s3n.awsAccessKeyId and fs.s3n.awsSecretAccessKey properties


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 1.1.0
    • Fix Version/s: None
    • Component/s: Hive
    • Labels: None

    Description

      After upgrading our Cloudera cluster to CDH 5.10.1, we are experiencing the following problem with some Hive DDL statements.

      ....

      SET fs.s3n.awsSecretAccessKey=<our s3 secret access key>;
      SET fs.s3n.awsAccessKeyId=<our s3 access id>;
      ....

      ALTER TABLE hive_1k_partitions ADD IF NOT EXISTS partition (year='2014', month='2014-01', dt='2014-01-01', hours='00', minutes='16', seconds='22') location 's3n://<location to our s3 bucket>'

      ....

      Stack trace I was able to recover:
      [ Message content over the limit has been removed. ]
      at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
      at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:318)
      at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:416)
      at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:432)
      at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:726)
      at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
      at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
      at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
      Job Submission failed with exception ‘java.lang.IllegalArgumentException(AWS Access Key ID and Secret Access Key must be specified by setting the fs.s3n.awsAccessKeyId and fs.s3n.awsSecretAccessKey properties (respectively).)’
      FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

      Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.10.1-1.cdh5.10.1.p0.10/jars/hive-common-1.1.0-cdh5.10.1.jar!/hive-log4j.properties

      In the past we did not have to set the S3 secret key and access key ID in core-site.xml, because we set them dynamically inside our Hive DDL scripts.

      After setting the S3 secret key and access key ID in core-site.xml this problem goes away. However, this is an incompatible change from the previous Hive version shipped in CDH 5.9.
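      The workaround described above amounts to declaring the credentials statically in the cluster's core-site.xml, roughly as follows (property names are taken from the error message; the values shown are placeholders, not real keys):

      ```xml
      <!-- core-site.xml: static S3N credentials (workaround; placeholder values) -->
      <property>
        <name>fs.s3n.awsAccessKeyId</name>
        <value>YOUR_ACCESS_KEY_ID</value>
      </property>
      <property>
        <name>fs.s3n.awsSecretAccessKey</name>
        <value>YOUR_SECRET_ACCESS_KEY</value>
      </property>
      ```

      Note that this makes the credentials visible to everything on the cluster that reads core-site.xml, which is exactly why setting them per-script via SET was preferable before the upgrade.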

      The Cloudera 5.10.x release notes mention HIVE-14269 (Enhanced write performance for Hive tables stored on Amazon S3) as the only Hive-related change:
      https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_new_in_cdh_510.html

      https://issues.apache.org/jira/browse/HIVE-14269

      Attachments

        Activity

          People

            Assignee: Vihang Karajgaonkar (vihangk1)
            Reporter: Kalexin Baoerjiin (kalexin)
            Votes: 0
            Watchers: 4

            Dates

              Created:
              Updated: