Description
At work we were taking a closer look at ML transformers and estimators, and I spotted this anomaly.
At first glance, the resolution looks simple:
Add the following line to StopWordsRemover.transformSchema (as is done in e.g. PCA.transformSchema, StandardScaler.transformSchema, and OneHotEncoder.transformSchema):
require(!schema.fieldNames.contains($(outputCol)), s"Output column ${$(outputCol)} already exists.")
Am I correct? Is this a bug? If so, I am willing to prepare an appropriate pull request.
Or maybe a better idea is to make use of super.transformSchema in StopWordsRemover (and possibly in all the other places as well)?
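For illustration, here is a rough sketch of what the fixed StopWordsRemover.transformSchema might look like with the duplicate-output-column check added, mirroring the pattern used in PCA and StandardScaler (untested; the surrounding input-type validation is assumed from the current implementation):

```scala
// Sketch only: StopWordsRemover.transformSchema with the proposed check added.
override def transformSchema(schema: StructType): StructType = {
  val inputType = schema($(inputCol)).dataType
  // Existing validation: input must be an array of strings.
  require(inputType.sameType(ArrayType(StringType)),
    s"Input type must be ArrayType(StringType) but got $inputType.")
  // Proposed addition: fail early if the output column already exists,
  // as PCA, StandardScaler, and OneHotEncoder already do.
  require(!schema.fieldNames.contains($(outputCol)),
    s"Output column ${$(outputCol)} already exists.")
  SchemaUtils.appendColumn(schema, $(outputCol), inputType, schema($(inputCol)).nullable)
}
```

Without the check, appending a column whose name already exists produces a schema with duplicate field names, and the failure only surfaces later, which is why the other transformers validate up front.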
Links to the files on GitHub mentioned above:
https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/feature/StopWordsRemover.scala#L147
https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/Transformer.scala#L109-L111
https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/feature/StandardScaler.scala#L101-L102
https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/feature/PCA.scala#L138-L139
https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala#L75-L76