Details
- Type: Sub-task
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Fix Version/s: 3.0.0
- Labels: None
Description
This is a grouping of related but not identical issues in the Scala 2.13 migration, where the compiler is pickier about explicit types and imports.
Some are fairly self-evident, like wanting an explicit generic type. In a few cases it looks like import resolution rules have tightened up a bit and imports now have to be explicit.
A few more cause problems like:
[ERROR] [Error] /Users/seanowen/Documents/spark_2.13/mllib/src/main/scala/org/apache/spark/ml/feature/CountVectorizer.scala:220: missing parameter type for expanded function
The argument types of an anonymous function must be fully known. (SLS 8.5)
Expected type was: ?
In some cases it's just a matter of adding an explicit type, like .map { m: Matrix =>.
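As a hedged illustration of that class of fix (a hypothetical transform method with two overloads, not the actual CountVectorizer code), the sketch below shows how an underdetermined expected type produces the "missing parameter type" error and how an explicit parameter type resolves it:
// Hypothetical overloads (not Spark code): with two alternatives taking
// different function types, the expected type of the literal is not fully known.
object ExplicitParamTypeSketch {
  def transform(f: Int => Int): String = "int"
  def transform(f: String => String): String = "string"

  // Fails to compile with "missing parameter type":
  // val bad = transform { x => x + 1 }

  // Annotating the parameter, as in .map { m: Matrix => ... }, resolves it.
  val ok = transform { x: Int => x + 1 }
}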
Many seem to concern functions of tuples, or tuples of tuples.
.mapGroups { case (g, iter) => needs to be simply .mapGroups { (g, iter) =>
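A minimal, self-contained sketch of that rewrite, using a hypothetical mapGroupsLike helper rather than the real Dataset API:
// Hypothetical stand-in for KeyValueGroupedDataset.mapGroups: a higher-order
// function that takes a two-argument (key, values) function.
def mapGroupsLike[K, V, U](grouped: Map[K, Seq[V]])(f: (K, Iterator[V]) => U): Seq[U] =
  grouped.iterator.map(kv => f(kv._1, kv._2.iterator)).toSeq

val grouped = Map("a" -> Seq(1, 2, 3), "b" -> Seq(4))

// Before the migration, Spark used the pattern-matching form,
// { case (g, iter) => ... }, which 2.13 rejected in those code paths.
// After the migration it is an ordinary two-parameter function literal:
val summaries = mapGroupsLike(grouped) { (g, iter) => s"$g=${iter.sum}" }
// summaries contains "a=6" and "b=4"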
Or more annoyingly:
}.reduceByKey { case ((wc1, df1), (wc2, df2)) =>
(wc1 + wc2, df1 + df2)
}
Apparently the argument types can only be fully known when the tuple pattern isn't nested; even with explicit types, this won't work:
}.reduceByKey { case ((wc1: Long, df1: Int), (wc2: Long, df2: Int)) => (wc1 + wc2, df1 + df2) }
This does:
}.reduceByKey { (wcdf1, wcdf2) => (wcdf1._1 + wcdf2._1, wcdf1._2 + wcdf2._2) }
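For completeness, a runnable sketch of that working form, with a hypothetical reduceByKeyLike helper standing in for the RDD API:
// Hypothetical stand-in for PairRDDFunctions.reduceByKey: merges the values
// of each key with a binary function.
def reduceByKeyLike[K, V](pairs: Seq[(K, V)])(merge: (V, V) => V): Map[K, V] =
  pairs.groupBy(_._1).map(kv => kv._1 -> kv._2.map(_._2).reduce(merge))

// Values mirror the (wc, df) pairs in the snippet above: a Long and an Int.
val counts: Seq[(String, (Long, Int))] =
  Seq("x" -> (3L, 1), "x" -> (2L, 1), "y" -> (1L, 1))

// Plain two-parameter function literal; the nested (Long, Int) values are
// accessed positionally instead of being destructured with a nested pattern.
val merged = reduceByKeyLike(counts) { (wcdf1, wcdf2) =>
  (wcdf1._1 + wcdf2._1, wcdf1._2 + wcdf2._2)
}
// merged == Map("x" -> (5L, 2), "y" -> (1L, 1))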
I'm not super clear why most of the problems seem to affect reduceByKey.