Details
Type: Bug
Status: Resolved
Priority: Critical
Resolution: Fixed
Fix Version/s: 0.10.0
Labels: None
Description
There are programs that rely on custom environment variables. In a Hadoop MapReduce job we can pass them with -Dmapreduce.map.env and -Dmapreduce.reduce.env. Similarly, in Spark we can use --conf 'spark.executorEnv.XXX=value for XXX'. There is no such feature in Flink yet.
This puts Flink at a serious disadvantage when customers need it.
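For reference, a minimal sketch of the two invocations mentioned above; the jar names, the main class, and the variable MY_VAR are hypothetical placeholders:

    # Hadoop MapReduce: pass MY_VAR into the map and reduce task environments
    hadoop jar my-job.jar MyJob \
        -Dmapreduce.map.env="MY_VAR=foo" \
        -Dmapreduce.reduce.env="MY_VAR=foo"

    # Spark: pass MY_VAR into each executor's environment
    spark-submit --conf 'spark.executorEnv.MY_VAR=foo' my-app.jar

The ask is for an equivalent client-side knob in Flink, so that user code running in the TaskManagers can read such variables from its environment.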