Details
- Type: New Feature
- Status: Resolved
- Priority: Major
- Resolution: Invalid
- Affects Version/s: 3.2.1
- Fix Version/s: None
- Component/s: None
Description
When I clean data, I filter an RDD by a threshold predicate (greater than or less than some value) to produce two different sets: one output file holds the error records and the other holds the error-free records. Currently I call `filter` twice, but that launches two Spark jobs and costs too much.
What I want is something like `iterator.span(predicate)` that returns a tuple `(iter1, iter2)`, so that one dataset is split into two datasets within a single data-cleaning rule.
I hope to compute once, not twice.
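The requested operation is essentially a single-pass partition: one traversal of the data that routes each record into one of two result sets based on a predicate. A minimal local sketch of that idea in plain Python (the function name `partition` and the threshold rule are hypothetical; an RDD-level equivalent would need support inside Spark itself):

```python
def partition(predicate, iterable):
    """Single pass over the input: route each element into
    one of two lists depending on the predicate."""
    passed, failed = [], []
    for item in iterable:
        (passed if predicate(item) else failed).append(item)
    return passed, failed

# Hypothetical cleaning rule: values below 10 are clean, the rest are errors.
clean, errors = partition(lambda x: x < 10, [3, 12, 7, 20])
# clean -> [3, 7], errors -> [12, 20]
```

As a workaround with the existing API, calling `persist()` on the RDD before the two `filter` calls at least avoids recomputing the upstream stages, although each filter job still scans the cached data once.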