ML Evaluator currently requires that metrics be maximized (larger is better), which is counterintuitive for metrics such as RMSE. As a workaround, RegressionEvaluator currently negates some metrics, which is confusing. Instead, we should:
- Return the metric as expected (e.g., "rmse" should return RMSE, not its negation).
- Provide an indicator of whether the metric should be maximized or minimized.
Model selection algorithms can then consult the indicator as needed.
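A minimal sketch of the proposed design (not the actual Spark API; the class and method names here are hypothetical): each evaluator returns its metric as-is and exposes an indicator flag, and model selection consults the flag to decide whether to maximize or minimize.

```python
class Evaluator:
    """Base evaluator: subclasses return the raw metric value.

    `is_larger_better` is the proposed indicator (hypothetical name):
    model selection reads it instead of assuming every metric is maximized.
    """
    is_larger_better = True

    def evaluate(self, predictions, labels):
        raise NotImplementedError


class RMSEEvaluator(Evaluator):
    is_larger_better = False  # RMSE should be minimized, not negated

    def evaluate(self, predictions, labels):
        # Return the actual RMSE, not its negation.
        n = len(labels)
        mse = sum((p - y) ** 2 for p, y in zip(predictions, labels)) / n
        return mse ** 0.5


def select_best(models, evaluator, predictions_for, labels):
    """Pick the model whose metric is best according to the indicator."""
    scores = [evaluator.evaluate(predictions_for(m), labels) for m in models]
    pick = max if evaluator.is_larger_better else min
    return models[scores.index(pick(scores))]
```

With this design, "rmse" evaluates to the true RMSE, and a cross-validation loop can call `select_best` without any metric-specific sign flipping.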