Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: v2.6.5
- Fix Version/s: None
- Component/s: None
- Environment: Production
- Urgency: Important
- Estimated Complexity: Newcomer (Easy) - Everyone can do this
Description
When the cube is built with the Spark engine, the "Load HFile to HBase" step needs to copy ~9 GB of data. Instead, the job runs in a never-ending loop and the files are copied over several times. Even after the cube is discarded, the job keeps running in the backend.
The HBase master service has to be restarted to stop it.
We were unable to identify the root cause.
The issue does not occur when the cube is built with the MapReduce engine.