Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
Description
Hi there,
Correct me if I am wrong: Oozie runs every workflow action inside a MapReduce launcher job, so even a Spark job ends up being submitted through a MapReduce job. In a Hive-on-Spark setup (with hive.execution.engine=spark and spark.master=yarn in job.properties), such a job fails without a clear error. In my view there is no reason to run a Spark job inside a MapReduce job.
What do you think? It may be an issue.
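For reference, the Hive-on-Spark settings mentioned above would look roughly like this in job.properties (only the two properties named in the description are from the report; any other values would be deployment-specific):

```properties
# Hive-on-Spark configuration referenced in the report
hive.execution.engine=spark
spark.master=yarn
```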
Best,
Mobin
Issue Links
- duplicates: OOZIE-1770 Create Oozie Application Master for YARN (Closed)