Type: New Feature
Resolution: Won't Fix
Affects Version/s: None
Fix Version/s: None
Component/s: Spark Core
Users need a very simple way to create counters in their jobs. Accumulators provide a way to do this, but are a little clunky, for two reasons:
1) the setup is a nuisance
2) w/ delayed evaluation, you don't know when the computation will actually run, so it's hard to know when to look at the values
Consider this code:
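(The original snippet is not reproduced here. The following stand-in shows the pattern being described — an accumulator incremented inside a lazily evaluated filter — using a plain Scala `Iterator` in place of an RDD and a toy `Accumulator` class; `isGoodRecord` is a hypothetical predicate, not anything from the Spark API.)

```scala
object CounterDemo {
  // Toy stand-in for sc.accumulator(0); not Spark's actual class.
  class Accumulator(var value: Int) {
    def +=(n: Int): Unit = { value += n }
  }

  // Hypothetical predicate, for illustration only.
  def isGoodRecord(r: Int): Boolean = r % 2 == 0

  def filterBadRecords(records: Iterator[Int]): (Iterator[Int], Accumulator) = {
    val filterCount = new Accumulator(0)
    val filtered = records.filter { r =>
      if (isGoodRecord(r)) true
      else { filterCount += 1; false }
    }
    // Iterators, like RDDs, are lazy: nothing has been evaluated yet,
    // so this always reports 0 no matter what the data contains.
    println("filter removed " + filterCount.value + " records")
    (filtered, filterCount)
  }
}
```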
The println will always say 0 records were filtered, because it's printed before anything has actually run. I could print out the value later on, but note that this would destroy the modularity of the method – kinda ugly to return the accumulator just so that it can get printed later on. (And of course, the caller in turn might not know when the filter is going to get applied, and would have to pass the accumulator up even further ...)
I'd like to have Counters which just automatically get printed out whenever a stage has been run, and also some API to get them back. I realize this is tricky b/c a stage can get re-computed, so maybe the counters should only get incremented once.
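One hedged sketch of the "increment only once" idea: key each update by the partition (or task) that produced it and drop repeats, so a re-computed stage can't double-count. Everything here is illustrative, not an existing Spark class:

```scala
import scala.collection.mutable

// Illustrative only: a counter that ignores repeated updates from the
// same partition, so re-running a stage cannot inflate the total.
class OnceCounter {
  private val seen = mutable.Set[Int]()
  private var total = 0L

  // Records `n` for `partitionId` only the first time that partition
  // reports; later updates from the same partition are dropped.
  def add(partitionId: Int, n: Long): Unit =
    if (seen.add(partitionId)) total += n

  def value: Long = total
}
```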
Maybe a more general way to do this is to provide a callback for whenever an RDD is computed – by default it would just print the counters, but the user could replace it with a custom handler.
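A hedged sketch of what such a hook might look like – a listener invoked after each computation, with a printing default that the user can swap out. Every name below is hypothetical, not a proposed or existing Spark interface:

```scala
import scala.collection.mutable

// Hypothetical callback invoked after an RDD is computed.
trait ComputeListener {
  def onComputed(rddName: String, counters: Map[String, Long]): Unit
}

// Default behavior: just print the counters.
object PrintCounters extends ComputeListener {
  def onComputed(rddName: String, counters: Map[String, Long]): Unit =
    counters.foreach { case (k, v) => println(rddName + "/" + k + ": " + v) }
}

// A user-supplied replacement might aggregate instead of printing.
class CollectCounters extends ComputeListener {
  val seen = mutable.Map[String, Long]()
  def onComputed(rddName: String, counters: Map[String, Long]): Unit =
    counters.foreach { case (k, v) => seen(k) = seen.getOrElse(k, 0L) + v }
}
```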