Description
The feature importance calculation in org.apache.spark.ml.classification.GBTClassificationModel.featureImportances follows a flawed implementation from scikit-learn, producing incorrect importance values. The error was discovered and fixed in scikit-learn version 0.20.0. Spark inherited the same flawed logic, so it needs to be fixed here as well.
As described in the scikit-learn release notes (https://scikit-learn.org/stable/whats_new.html#version-0-20-0):
Fix: Fixed a bug in ensemble.GradientBoostingRegressor and ensemble.GradientBoostingClassifier to have feature importances summed and then normalized, rather than normalizing on a per-tree basis. The previous behavior over-weighted the Gini importance of features that appear in later stages. This issue only affected feature importances. #11176 by Gil Forsyth.
A full discussion of this error, including the debate that ultimately validated the correctness of the change, can be found in the comment thread of the scikit-learn pull request: https://github.com/scikit-learn/scikit-learn/pull/11176
I believe the main change required would be to the featureImportances function in mllib/src/main/scala/org/apache/spark/ml/tree/treeModels.scala; however, I do not have the experience to make this change myself.
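To make the difference between the two aggregation strategies concrete, the sketch below contrasts the buggy per-tree normalization with the fixed sum-then-normalize approach. This is an illustrative toy in plain Python, not Spark's or scikit-learn's actual code; the function names and the sample importance vectors are hypothetical.

```python
def per_tree_normalized(tree_importances):
    """Buggy approach: normalize each tree's importances to sum to 1,
    then combine. This over-weights features from later boosting stages,
    whose raw importances are typically much smaller in magnitude."""
    n_features = len(tree_importances[0])
    total = [0.0] * n_features
    for imp in tree_importances:
        s = sum(imp)
        for i, v in enumerate(imp):
            total[i] += v / s  # each tree contributes equally, regardless of scale
    norm = sum(total)
    return [v / norm for v in total]

def summed_then_normalized(tree_importances):
    """Fixed approach (scikit-learn 0.20 / proposed Spark fix): sum the raw
    importances across all trees first, then normalize once at the end."""
    n_features = len(tree_importances[0])
    total = [0.0] * n_features
    for imp in tree_importances:
        for i, v in enumerate(imp):
            total[i] += v  # raw magnitudes preserved
    norm = sum(total)
    return [v / norm for v in total]

# Hypothetical raw importance vectors for a two-feature, two-tree ensemble.
# Later boosting stages fit residuals, so their raw importances are smaller.
trees = [[8.0, 2.0],   # early tree: large raw importances
         [0.1, 0.9]]   # late tree: small raw importances

print(per_tree_normalized(trees))      # feature 1's weight is inflated by the late tree
print(summed_then_normalized(trees))   # feature 1's weight reflects raw magnitudes
```

With these numbers, the buggy version assigns feature 1 a weight of 0.55 (the tiny late tree counts as much as the large early one), while the fixed version assigns it only 2.9/11.0 ≈ 0.26.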
Issue Links
- is duplicated by SPARK-28222: Feature importance outputs different values in GBT and Random Forest in 2.3.3 and 2.4 pyspark version (Resolved)