Flink / FLINK-27803

Unable to write to s3 with Hudi format via Flink - Scala


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Invalid
    • Affects Version/s: 1.14.4
    • Fix Version/s: None
    • Important

    Description

      Getting the following error when writing to S3 in Hudi format via Flink:

      java.nio.file.AccessDeniedException: data: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider InstanceProfileCredentialsProvider : com.amazonaws.SdkClientException: Failed to connect to service endpoint: 

       

      • Tried updating the core-site.xml on the classpath in the lib folder
      • All the Hadoop and AWS SDK jars are in the lib folder
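
      For context on the setup described above: failures of this shape (a LinkageError from the S3 filesystem factory plus a NoAuthWithAWSException from the credential provider chain) are typically environmental rather than Flink bugs. A minimal sketch of a layout that generally avoids both, assuming a standalone Flink 1.14 distribution — all paths and keys below are illustrative placeholders, not taken from this report:

```shell
# Hypothetical Flink home for illustration; adjust to your installation.
FLINK_HOME="$(mktemp -d)/flink-1.14.4"

# 1) The S3 filesystem jar belongs in its own plugins/ subdirectory, not in
#    lib/. Loading it from lib/ next to the Hadoop/AWS jars makes two class
#    loaders each define org.apache.hadoop.fs.FileSystem, which is what the
#    "loader constraint violation" LinkageError in the stacktrace reports.
mkdir -p "$FLINK_HOME/plugins/s3-fs-hadoop" "$FLINK_HOME/conf"
# cp "$FLINK_HOME/opt/flink-s3-fs-hadoop-1.14.4.jar" "$FLINK_HOME/plugins/s3-fs-hadoop/"

# 2) S3 credentials can be supplied in flink-conf.yaml instead of
#    core-site.xml, so the s3a provider chain never falls through to the
#    EC2 instance-metadata endpoint (the AccessDeniedException above).
cat >> "$FLINK_HOME/conf/flink-conf.yaml" <<'EOF'
s3.access-key: YOUR_ACCESS_KEY
s3.secret-key: YOUR_SECRET_KEY
EOF
```

      With that layout, restarting the cluster lets the plugin class loader own the S3 filesystem classes, and credential lookup no longer depends on the instance-metadata service.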

       

      Stacktrace:

       

      13:45:27.026 [main] ERROR org.apache.flink.core.fs.FileSystem - Failed to load a file system via services

      java.util.ServiceConfigurationError: org.apache.flink.core.fs.FileSystemFactory: Provider org.apache.flink.fs.s3presto.S3FileSystemFactory could not be instantiated

      at java.util.ServiceLoader.fail(ServiceLoader.java:232) ~[?:1.8.0_332]

      at java.util.ServiceLoader.access$100(ServiceLoader.java:185) ~[?:1.8.0_332]

      at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384) ~[?:1.8.0_332]

      at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404) ~[?:1.8.0_332]

      at java.util.ServiceLoader$1.next(ServiceLoader.java:480) ~[?:1.8.0_332]

      at org.apache.flink.core.plugin.PluginLoader$ContextClassLoaderSettingIterator.next(PluginLoader.java:136) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.shaded.guava30.com.google.common.collect.Iterators$ConcatenatedIterator.next(Iterators.java:1364) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.shaded.guava30.com.google.common.collect.TransformedIterator.next(TransformedIterator.java:47) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.core.fs.FileSystem.addAllFactoriesToList(FileSystem.java:1079) [flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.core.fs.FileSystem.loadFileSystemFactories(FileSystem.java:1060) [flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.core.fs.FileSystem.initialize(FileSystem.java:340) [flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.configureFileSystems(ClusterEntrypoint.java:229) [flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:185) [flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:617) [flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:59) [flink-dist_2.12-1.14.4.jar:1.14.4]

      Caused by: java.lang.LinkageError: loader constraint violation: when resolving overridden method "org.apache.flink.fs.s3presto.S3FileSystemFactory.createHadoopFileSystem()Lorg/apache/hadoop/fs/FileSystem;" the class loader (instance of org/apache/flink/core/plugin/PluginLoader$PluginClassLoader) of the current class, org/apache/flink/fs/s3presto/S3FileSystemFactory, and its superclass loader (instance of sun/misc/Launcher$AppClassLoader), have different Class objects for the type org/apache/hadoop/fs/FileSystem used in the signature

      at java.lang.Class.getDeclaredConstructors0(Native Method) ~[?:1.8.0_332]

      at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671) ~[?:1.8.0_332]

      at java.lang.Class.getConstructor0(Class.java:3075) ~[?:1.8.0_332]

      at java.lang.Class.newInstance(Class.java:412) ~[?:1.8.0_332]

      at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380) ~[?:1.8.0_332]

      ... 12 more

      13:45:27.032 [main] ERROR org.apache.flink.core.fs.FileSystem - Failed to load a file system via services

      java.util.ServiceConfigurationError: org.apache.flink.core.fs.FileSystemFactory: Provider org.apache.flink.fs.s3presto.S3PFileSystemFactory could not be instantiated

      at java.util.ServiceLoader.fail(ServiceLoader.java:232) ~[?:1.8.0_332]

      at java.util.ServiceLoader.access$100(ServiceLoader.java:185) ~[?:1.8.0_332]

      at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384) ~[?:1.8.0_332]

      at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404) ~[?:1.8.0_332]

      at java.util.ServiceLoader$1.next(ServiceLoader.java:480) ~[?:1.8.0_332]

      at org.apache.flink.core.plugin.PluginLoader$ContextClassLoaderSettingIterator.next(PluginLoader.java:136) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.shaded.guava30.com.google.common.collect.Iterators$ConcatenatedIterator.next(Iterators.java:1364) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.shaded.guava30.com.google.common.collect.TransformedIterator.next(TransformedIterator.java:47) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.core.fs.FileSystem.addAllFactoriesToList(FileSystem.java:1079) [flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.core.fs.FileSystem.loadFileSystemFactories(FileSystem.java:1060) [flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.core.fs.FileSystem.initialize(FileSystem.java:340) [flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.configureFileSystems(ClusterEntrypoint.java:229) [flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:185) [flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:617) [flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:59) [flink-dist_2.12-1.14.4.jar:1.14.4]

      Caused by: java.lang.LinkageError: loader constraint violation: when resolving overridden method "org.apache.flink.fs.s3presto.S3FileSystemFactory.createHadoopFileSystem()Lorg/apache/hadoop/fs/FileSystem;" the class loader (instance of org/apache/flink/core/plugin/PluginLoader$PluginClassLoader) of the current class, org/apache/flink/fs/s3presto/S3FileSystemFactory, and its superclass loader (instance of sun/misc/Launcher$AppClassLoader), have different Class objects for the type org/apache/hadoop/fs/FileSystem used in the signature

      at java.lang.Class.getDeclaredConstructors0(Native Method) ~[?:1.8.0_332]

      at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671) ~[?:1.8.0_332]

      at java.lang.Class.getConstructor0(Class.java:3075) ~[?:1.8.0_332]

      at java.lang.Class.newInstance(Class.java:412) ~[?:1.8.0_332]

      at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380) ~[?:1.8.0_332]

      ... 12 more

      exception ignored

      13:50:18.015 [flink-akka.actor.default-dispatcher-19] ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Fatal error occurred in the cluster entrypoint.

      org.apache.flink.util.FlinkException: JobMaster for job da27b015abb68c146dd4306c1ed619a7 failed.

      at org.apache.flink.runtime.dispatcher.Dispatcher.jobMasterFailed(Dispatcher.java:913) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.dispatcher.Dispatcher.jobManagerRunnerFailed(Dispatcher.java:473) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$runJob$3(Dispatcher.java:430) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836) ~[?:1.8.0_332]

      at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811) ~[?:1.8.0_332]

      at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456) ~[?:1.8.0_332]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRunAsync$4(AkkaRpcActor.java:455) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:455) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:213) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) [flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) [flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) [flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) [flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) [flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) [flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) [flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at akka.actor.Actor.aroundReceive(Actor.scala:537) [flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at akka.actor.Actor.aroundReceive$(Actor.scala:535) [flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220) [flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580) [flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at akka.actor.ActorCell.invoke(ActorCell.scala:548) [flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270) [flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at akka.dispatch.Mailbox.run(Mailbox.scala:231) [flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at akka.dispatch.Mailbox.exec(Mailbox.scala:243) [flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) [?:1.8.0_332]

      at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) [?:1.8.0_332]

      at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) [?:1.8.0_332]

      at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) [?:1.8.0_332]

      Caused by: org.apache.flink.runtime.jobmaster.JobMasterException: Could not start the JobMaster.

      at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:391) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:624) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:623) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:185) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]

      at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]

      at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) ~[?:?]

      at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      ... 13 more

      Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to start the operator coordinators

      at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:90) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:585) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:965) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:882) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:389) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:624) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:623) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:185) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]

      at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]

      at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) ~[?:?]

      at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      ... 13 more

      Caused by: org.apache.hudi.exception.HoodieIOException: Failed to get instance of org.apache.hadoop.fs.FileSystem

      at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:104) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]

      at org.apache.hudi.util.StreamerUtil.tableExists(StreamerUtil.java:288) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]

      at org.apache.hudi.util.StreamerUtil.initTableIfNotExists(StreamerUtil.java:258) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]

      at org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:164) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]

      at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:194) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:85) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:585) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:965) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:882) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:389) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:624) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:623) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:185) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]

      at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]

      at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) ~[?:?]

      at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      ... 13 more

      Caused by: java.nio.file.AccessDeniedException: data: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider InstanceProfileCredentialsProvider : com.amazonaws.SdkClientException: Failed to connect to service endpoint: 

      at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:187) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:391) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:322) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3375) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:125) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3424) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3392) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:485) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:102) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]

      at org.apache.hudi.util.StreamerUtil.tableExists(StreamerUtil.java:288) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]

      at org.apache.hudi.util.StreamerUtil.initTableIfNotExists(StreamerUtil.java:258) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]

      at org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:164) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]

      at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:194) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:85) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:585) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:965) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:882) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:389) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:624) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:623) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:185) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]

      at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]

      at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) ~[?:?]

      at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      ... 13 more

      Caused by: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider InstanceProfileCredentialsProvider : com.amazonaws.SdkClientException: Failed to connect to service endpoint: 

      at org.apache.hadoop.fs.s3a.AWSCredentialProviderList.getCredentials(AWSCredentialProviderList.java:159) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1257) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:833) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:783) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4920) ~[aws-java-sdk-s3-1.11.563.jar:?]

      at com.amazonaws.services.s3.AmazonS3Client.getBucketRegionViaHeadRequest(AmazonS3Client.java:5700) ~[aws-java-sdk-s3-1.11.563.jar:?]

      at com.amazonaws.services.s3.AmazonS3Client.fetchRegionFromCache(AmazonS3Client.java:5673) ~[aws-java-sdk-s3-1.11.563.jar:?]

      at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4904) ~[aws-java-sdk-s3-1.11.563.jar:?]

      at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4866) ~[aws-java-sdk-s3-1.11.563.jar:?]

      at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1394) ~[aws-java-sdk-s3-1.11.563.jar:?]

      at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1333) ~[aws-java-sdk-s3-1.11.563.jar:?]

      at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$verifyBucketExists$1(S3AFileSystem.java:392) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:391) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:322) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3375) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:125) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3424) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3392) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:485) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]

      at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:102) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]

      at org.apache.hudi.util.StreamerUtil.tableExists(StreamerUtil.java:288) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]

      at org.apache.hudi.util.StreamerUtil.initTableIfNotExists(StreamerUtil.java:258) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]

      at org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:164) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]

      at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:194) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:85) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:585) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:965) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:882) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:389) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:624) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:623) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:185) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]

      at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]

      at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]

      at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) ~[flink-dist_2.12-1.14.4.jar:1.14.4]

      at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) ~[?:?]
      at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      ... 13 more
Caused by: com.amazonaws.SdkClientException: Failed to connect to service endpoint:
      at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:100) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.getToken(InstanceMetadataServiceResourceFetcher.java:91) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.readResource(InstanceMetadataServiceResourceFetcher.java:69) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.internal.EC2ResourceFetcher.readResource(EC2ResourceFetcher.java:66) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsEndpoint(InstanceMetadataServiceCredentialsFetcher.java:58) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsResponse(InstanceMetadataServiceCredentialsFetcher.java:46) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.auth.BaseCredentialsFetcher.fetchCredentials(BaseCredentialsFetcher.java:112) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.auth.BaseCredentialsFetcher.getCredentials(BaseCredentialsFetcher.java:68) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.auth.InstanceProfileCredentialsProvider.getCredentials(InstanceProfileCredentialsProvider.java:165) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.s3a.AWSCredentialProviderList.getCredentials(AWSCredentialProviderList.java:137) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1257) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:833) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:783) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4920) ~[aws-java-sdk-s3-1.11.563.jar:?]
      at com.amazonaws.services.s3.AmazonS3Client.getBucketRegionViaHeadRequest(AmazonS3Client.java:5700) ~[aws-java-sdk-s3-1.11.563.jar:?]
      at com.amazonaws.services.s3.AmazonS3Client.fetchRegionFromCache(AmazonS3Client.java:5673) ~[aws-java-sdk-s3-1.11.563.jar:?]
      at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4904) ~[aws-java-sdk-s3-1.11.563.jar:?]
      at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4866) ~[aws-java-sdk-s3-1.11.563.jar:?]
      at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1394) ~[aws-java-sdk-s3-1.11.563.jar:?]
      at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1333) ~[aws-java-sdk-s3-1.11.563.jar:?]
      at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$verifyBucketExists$1(S3AFileSystem.java:392) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:391) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:322) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3375) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:125) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3424) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3392) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:485) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:102) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]
      at org.apache.hudi.util.StreamerUtil.tableExists(StreamerUtil.java:288) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]
      at org.apache.hudi.util.StreamerUtil.initTableIfNotExists(StreamerUtil.java:258) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]
      at org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:164) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]
      at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:194) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:85) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:585) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:965) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:882) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:389) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:624) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]
      at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]
      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:623) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]
      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:185) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]
      at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]
      at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]
      at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) ~[?:?]
      at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      ... 13 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
      at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:1.8.0_332]
      at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[?:1.8.0_332]
      at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[?:1.8.0_332]
      at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[?:1.8.0_332]
      at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.8.0_332]
      at java.net.Socket.connect(Socket.java:607) ~[?:1.8.0_332]
      at sun.net.NetworkClient.doConnect(NetworkClient.java:175) ~[?:1.8.0_332]
      at sun.net.www.http.HttpClient.openServer(HttpClient.java:463) ~[?:1.8.0_332]
      at sun.net.www.http.HttpClient.openServer(HttpClient.java:558) ~[?:1.8.0_332]
      at sun.net.www.http.HttpClient.<init>(HttpClient.java:242) ~[?:1.8.0_332]
      at sun.net.www.http.HttpClient.New(HttpClient.java:339) ~[?:1.8.0_332]
      at sun.net.www.http.HttpClient.New(HttpClient.java:357) ~[?:1.8.0_332]
      at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1228) ~[?:1.8.0_332]
      at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1207) ~[?:1.8.0_332]
      at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1056) ~[?:1.8.0_332]
      at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:990) ~[?:1.8.0_332]
      at com.amazonaws.internal.ConnectionUtils.connectToEndpoint(ConnectionUtils.java:52) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:80) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.getToken(InstanceMetadataServiceResourceFetcher.java:91) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.readResource(InstanceMetadataServiceResourceFetcher.java:69) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.internal.EC2ResourceFetcher.readResource(EC2ResourceFetcher.java:66) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsEndpoint(InstanceMetadataServiceCredentialsFetcher.java:58) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsResponse(InstanceMetadataServiceCredentialsFetcher.java:46) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.auth.BaseCredentialsFetcher.fetchCredentials(BaseCredentialsFetcher.java:112) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.auth.BaseCredentialsFetcher.getCredentials(BaseCredentialsFetcher.java:68) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.auth.InstanceProfileCredentialsProvider.getCredentials(InstanceProfileCredentialsProvider.java:165) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.s3a.AWSCredentialProviderList.getCredentials(AWSCredentialProviderList.java:137) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1257) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:833) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:783) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4920) ~[aws-java-sdk-s3-1.11.563.jar:?]
      at com.amazonaws.services.s3.AmazonS3Client.getBucketRegionViaHeadRequest(AmazonS3Client.java:5700) ~[aws-java-sdk-s3-1.11.563.jar:?]
      at com.amazonaws.services.s3.AmazonS3Client.fetchRegionFromCache(AmazonS3Client.java:5673) ~[aws-java-sdk-s3-1.11.563.jar:?]
      at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4904) ~[aws-java-sdk-s3-1.11.563.jar:?]
      at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4866) ~[aws-java-sdk-s3-1.11.563.jar:?]
      at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1394) ~[aws-java-sdk-s3-1.11.563.jar:?]
      at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1333) ~[aws-java-sdk-s3-1.11.563.jar:?]
      at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$verifyBucketExists$1(S3AFileSystem.java:392) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:391) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:322) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3375) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:125) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3424) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3392) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:485) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365) ~[flink-s3-fs-hadoop-1.14.4.jar:1.14.4]
      at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:102) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]
      at org.apache.hudi.util.StreamerUtil.tableExists(StreamerUtil.java:288) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]
      at org.apache.hudi.util.StreamerUtil.initTableIfNotExists(StreamerUtil.java:258) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]
      at org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:164) ~[hudi-flink-bundle_2.12-0.10.1.jar:0.10.1]
      at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:194) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:85) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:585) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:965) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:882) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:389) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:624) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]
      at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]
      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:623) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]
      at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:185) ~[flink-rpc-akka_480230f4-b23b-49e6-a9c3-249df9c40dc7.jar:1.14.4]
      at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]
      at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]
      at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) ~[?:?]
      at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) ~[flink-dist_2.12-1.14.4.jar:1.14.4]
      ... 13 more
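Reading the nested causes together: the S3A credential chain named in the error (SimpleAWSCredentialsProvider, EnvironmentVariableCredentialsProvider) finds no keys, so InstanceProfileCredentialsProvider falls back to the EC2 instance metadata service, and that HTTP call is refused when the job is not running on EC2. A minimal sketch of one alternative, supplying static credentials through Flink's own configuration; the keys are Flink's documented `s3.*` options, which the flink-s3-fs plugins forward to the underlying Hadoop `fs.s3a.*` settings, and the values below are placeholders:

```
# flink-conf.yaml (sketch): static credentials for the S3 filesystem plugin.
# Placeholder values; an IAM role or instance profile is preferable in production.
s3.access-key: YOUR_ACCESS_KEY
s3.secret-key: YOUR_SECRET_KEY
```

The equivalent `fs.s3a.access.key` / `fs.s3a.secret.key` entries in core-site.xml only take effect if that file is actually on the classpath seen by the S3A filesystem, which may be why editing it appeared to have no effect here.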

      Attachments:

        1. Dockerfile (3 kB), uploaded by sathyan sethumadhavan
        2. core-site.xml (2 kB), uploaded by sathyan sethumadhavan


          People:

            Assignee: Unassigned
            Reporter: sathyan sethumadhavan (satnair)
            Votes: 1
            Watchers: 3
