Details
- Type: New Feature
- Status: Resolved
- Priority: Major
- Resolution: Incomplete
Description
For `spark.logit`, the prediction output contains a `probabilityCol` column, which is a Vector on the backend (Scala side). When we call `collect(select(df, "probabilityCol"))`, the backend returns the Java object handle (a memory address) rather than the values. We need to implement a method to convert a Vector/DenseVector column into an R vector that can be read in SparkR. This is a follow-up JIRA to adding `spark.logit`.
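A minimal sketch of the core conversion, in plain Scala with a hypothetical stand-in class (actual code would operate on `org.apache.spark.ml.linalg.DenseVector`, whose real `toArray` method exposes the underlying values): flattening the vector into a plain numeric sequence yields a type that SparkR's serializer already knows how to ship to R, instead of an opaque JVM object handle.

```scala
// Hypothetical stand-in for org.apache.spark.ml.linalg.DenseVector;
// no Spark dependency so the sketch is self-contained.
case class DenseVector(values: Array[Double])

// Convert the vector to a plain Seq[Double] -- a shape the R-side
// deserializer can turn into an R numeric vector.
def vectorToSeq(v: DenseVector): Seq[Double] = v.values.toSeq

val probs = DenseVector(Array(0.2, 0.8))
println(vectorToSeq(probs))  // List(0.2, 0.8)
```

The same idea applies to sparse vectors: Spark ML's `Vector.toArray` densifies them, so a single code path can hand a flat `Array[Double]` back to the SparkR serialization layer.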