Details
- Type: Sub-task
- Status: Closed
- Priority: Blocker
- Resolution: Done
Description
Follow-up testing task for FLINK-32315.
In Flink 1.20 we introduced the possibility to upload local files for Kubernetes deployments. To verify this feature, see the relevant PR, which includes docs and examples, for more information.
Testing this feature requires an available Kubernetes cluster to deploy to and some DFS to which Flink can upload the local JAR. For a sandbox setup, I recommend installing minikube; the flink-k8s-operator quickstart guide explains that pretty well (Helm is not needed here). For the DFS, I have a gist to set up MinIO on a K8s pod here.
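A minimal sketch of such a sandbox setup is below, assuming the official Flink 1.20 distribution and image. The MinIO endpoint, bucket, and credentials are placeholders and depend on how MinIO is exposed; the endpoint needs to be reachable both from the client machine (for the upload) and from inside the cluster (for the artifact fetch), e.g. via a NodePort or port-forward.

$ minikube start
$ kubectl get nodes

# Enable the S3 filesystem plugin in the local Flink distribution so the CLI can upload to MinIO
# (adjust the exact version suffix of the plugin JAR to your distribution)
$ mkdir -p plugins/s3-fs-hadoop
$ cp opt/flink-s3-fs-hadoop-1.20.0.jar plugins/s3-fs-hadoop/

# conf/config.yaml: point the S3 filesystem at MinIO (placeholder endpoint and credentials)
s3.endpoint: http://<minio-endpoint>:9000
s3.path.style.access: true
s3.access-key: <access-key>
s3.secret-key: <secret-key>

With the official flink:1.20 image, the same plugin can be enabled inside the pods via the ENABLE_BUILT_IN_PLUGINS environment variable, e.g. by adding -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-1.20.0.jar to the run-application commands below.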
The following two main use cases should be handled correctly (a verification sketch follows after the second command):
- Deploy a job with a local job JAR but without further dependencies:
$ ./bin/flink run-application \
      --target kubernetes-application \
      -Dkubernetes.cluster-id=my-first-application-cluster \
      -Dkubernetes.container.image=flink:1.20 \
      -Dkubernetes.artifacts.local-upload-enabled=true \
      -Dkubernetes.artifacts.local-upload-target=s3://my-bucket/ \
      local:///path/to/TopSpeedWindowing.jar
- Deploy a job with a local job JAR and further dependencies (e.g. a UDF included in a separate JAR):
$ ./bin/flink run-application \
      --target kubernetes-application \
      -Dkubernetes.cluster-id=my-first-application-cluster \
      -Dkubernetes.container.image=flink:1.20 \
      -Dkubernetes.artifacts.local-upload-enabled=true \
      -Dkubernetes.artifacts.local-upload-target=s3://my-bucket/ \
      -Duser.artifacts.artifact-list=local:///tmp/my-flink-udf1.jar\;s3://my-bucket/my-flink-udf2.jar \
      local:///tmp/my-flink-job.jar
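The \; in the artifact list above is just shell escaping of the ; separator. A rough way to verify both runs is sketched below; the mc alias name (myminio), the default artifact directory, and the pod/service names derived from the cluster id are assumptions:

# The application cluster should come up and the job should reach RUNNING
$ kubectl get pods
$ kubectl port-forward svc/my-first-application-cluster-rest 8081   # in a separate terminal
$ curl http://localhost:8081/jobs

# The local JARs should appear in the bucket; my-flink-udf2.jar was already remote,
# so it should presumably only be fetched rather than re-uploaded
$ mc ls myminio/my-bucket/

# Fetched artifacts should show up in the JobManager pod under user.artifacts.base-dir
# (/opt/flink/artifacts by default, possibly in a per-deployment sub-directory)
$ kubectl exec deploy/my-first-application-cluster -- ls -R /opt/flink/artifacts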
Attachments
Issue Links
- is a clone of FLINK-35690 Release Testing: Verify FLIP-459: Support Flink hybrid shuffle integration with Apache Celeborn (Closed)