Details
- Type: New Feature
- Status: Resolved
- Priority: Major
- Resolution: Incomplete
- Affects Version/s: 1.3.0
- Fix Version/s: None
Description
Gradient boosting as currently implemented estimates the loss gradient at each iteration using regression trees. These trees are trained/split to minimize the variance of the predicted gradient, and their terminal node predictions are likewise computed to minimize prediction variance.
However, such predictions are optimal only for mean-squared-error loss. The TreeBoost refinement mitigates this by recomputing the terminal node values so that they directly minimize the actual loss function. Although the tree splits are still chosen by variance reduction, refining the leaf values should improve the gradient estimates and therefore overall performance.
The details can be found in the R vignette; the paper also shows how to refine the terminal node predictions.
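To make the proposed refinement concrete, below is a minimal, self-contained Scala sketch (not MLlib code; the object and method names such as refineLeaves are hypothetical). It contrasts the variance-minimizing leaf value, i.e. the mean of the pseudo-residuals, which is loss-optimal only for squared error, with loss-minimizing alternatives: the median for absolute error, and Friedman's one-step Newton value for two-class log-loss.
{code:scala}
object TreeBoostRefinement {

  // Mean of the pseudo-residuals in a leaf: the variance-minimizing value,
  // which coincides with the loss-optimal value only for squared-error loss.
  def meanValue(residuals: Seq[Double]): Double =
    residuals.sum / residuals.size

  // Median of the pseudo-residuals: the value that directly minimizes
  // absolute error within the leaf.
  def medianValue(residuals: Seq[Double]): Double = {
    val s = residuals.sorted
    val n = s.size
    if (n % 2 == 1) s(n / 2) else (s(n / 2 - 1) + s(n / 2)) / 2.0
  }

  // One-step Newton approximation for two-class log-loss with labels in
  // {-1, +1}, as in Friedman's TreeBoost (assumes the pseudo-residuals were
  // computed the corresponding way): sum(r_i) / sum(|r_i| * (2 - |r_i|)).
  def logLossValue(residuals: Seq[Double]): Double =
    residuals.sum / residuals.map(r => math.abs(r) * (2.0 - math.abs(r))).sum

  // Refine every leaf of one boosting iteration's regression tree: group the
  // training examples' pseudo-residuals by the leaf they fall into and replace
  // the mean with a loss-minimizing value (here: the median, for absolute error).
  def refineLeaves(assignments: Seq[(Int, Double)]): Map[Int, Double] =
    assignments.groupBy(_._1).map { case (leaf, rs) =>
      leaf -> medianValue(rs.map(_._2))
    }

  def main(args: Array[String]): Unit = {
    // Toy (leafId, pseudo-residual) pairs; the outlier in leaf 0 pulls the
    // mean away from the absolute-error optimum, but not the median.
    val toy = Seq((0, 1.0), (0, 1.2), (0, 10.0), (1, -0.5), (1, -0.7))
    println(refineLeaves(toy)) // e.g. Map(0 -> 1.2, 1 -> -0.6)
  }
}
{code}
Note that the tree structure itself is untouched: only the leaf values change, which is exactly the scope of the refinement proposed here, since the splits remain variance-based.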
Issue Links
- is contained by: SPARK-14047 GBT improvement umbrella (Resolved)
- is related to: SPARK-8547 xgboost exploration (Resolved)
- relates to: SPARK-3727 Trees and ensembles: More prediction functionality (Resolved)