When writing a custom Evaluator with PySpark, it is often useful to support negative values in the evaluate function, for example when the metric is the relative difference between predicted and actual values. In that case the goal is to select the value closest to 0 rather than the smallest or largest value. We should add a flag that lets users specify this scenario.
For example, CrossValidator may be used with a parameter grid that produces the following per-fold metric values (one row per parameter combination):
- [ 0.5, 0.5, 0.5, 0, 0 ]
- [ 0.5, -0.5, 0.5, 0, 0 ]
- [ -0.5, -0.5, -0.5, 0, 0 ]
This results in the following values for avgMetrics: [ 0.3, 0.1, -0.3 ]. There is currently no way to tell the cross validator to select the second model, whose average metric is closest to zero.
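To make the selection problem concrete, here is a minimal sketch. The argmax/argmin comparison mirrors how CrossValidator chooses the best model from avgMetrics today; the absolute transformation at the end is the proposed behavior, not existing API:

```python
import numpy as np

avg_metrics = np.array([0.3, 0.1, -0.3])

# Today the best model is argmax(avgMetrics) when the evaluator's
# isLargerBetter() is True, else argmin(avgMetrics):
best_if_larger = int(np.argmax(avg_metrics))   # index 0 -> 0.3
best_if_smaller = int(np.argmin(avg_metrics))  # index 2 -> -0.3

# Neither rule picks the model closest to zero. Applying an absolute
# transformation before selection (the proposal) would:
best_closest_to_zero = int(np.argmin(np.abs(avg_metrics)))  # index 1 -> 0.1
```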
Here's an example Evaluator where this functionality is useful:
This is a custom evaluator that compares the difference between the total actual and predicted values in a regression problem. I am proposing a new method on Evaluator that specifies whether an absolute transformation should be applied to the cross-validated metrics before the best model is selected.
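A minimal sketch of such an evaluator follows. The class name, the column names ("prediction" and "total"), and the summed-totals metric are illustrative assumptions, not part of the Spark API:

```python
from pyspark.ml.evaluation import Evaluator
import pyspark.sql.functions as F


class RelativeDifferenceEvaluator(Evaluator):
    """Scores a model by the signed relative difference between the
    summed predicted values and the summed actual values. Negative
    means under-prediction, positive means over-prediction, and the
    best possible score is 0."""

    def __init__(self, predictionCol="prediction", labelCol="total"):
        super().__init__()
        self.predictionCol = predictionCol
        self.labelCol = labelCol

    def _evaluate(self, dataset):
        sums = dataset.agg(
            F.sum(self.predictionCol).alias("predicted"),
            F.sum(self.labelCol).alias("actual"),
        ).first()
        # Signed relative difference; zero is a perfect score.
        return (sums["predicted"] - sums["actual"]) / sums["actual"]

    def isLargerBetter(self):
        # Neither True nor False is right here: we want the metric
        # closest to zero, which the current API cannot express.
        return False
```

With the proposed flag enabled, CrossValidator would compare the absolute values of avgMetrics when choosing the best model, so an evaluator like this one could return signed metrics and still have the closest-to-zero model selected.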