Description
{code:scala}
scala> import org.apache.spark.ml.linalg.{Vector, Vectors}
import org.apache.spark.ml.linalg.{Vector, Vectors}

scala> import org.apache.spark.ml.stat.ChiSquareTest
import org.apache.spark.ml.stat.ChiSquareTest

scala> val data = Seq(
     |   (0.0, Vectors.dense(0.5, 10.0)),
     |   (0.0, Vectors.dense(1.5, 20.0)),
     |   (1.0, Vectors.dense(1.5, 30.0)),
     |   (0.0, Vectors.dense(3.5, 30.0)),
     |   (0.0, Vectors.dense(3.5, 40.0)),
     |   (1.0, Vectors.dense(3.5, 40.0))
     | )
data: Seq[(Double, org.apache.spark.ml.linalg.Vector)] = List((0.0,[0.5,10.0]), (0.0,[1.5,20.0]), (1.0,[1.5,30.0]), (0.0,[3.5,30.0]), (0.0,[3.5,40.0]), (1.0,[3.5,40.0]))

scala> val df = data.toDF("label", "features")
df: org.apache.spark.sql.DataFrame = [label: double, features: vector]

scala> val chi = ChiSquareTest.test(df, "features", "label")
chi: org.apache.spark.sql.DataFrame = [pValues: vector, degreesOfFreedom: array<int> ... 1 more field]

scala> chi.show
+--------------------+----------------+----------+
|             pValues|degreesOfFreedom|statistics|
+--------------------+----------------+----------+
|[0.68728927879097...|          [2, 3]|[0.75,1.5]|
+--------------------+----------------+----------+
{code}
The current implementations of {{ChiSquareTest}}, {{ANOVATest}}, {{FValueTest}}, and {{Correlation}} all return a DataFrame containing only a single row.
I think this is quite hard to use. Suppose we have a dataset with dim=1000: the only way to work with the test result is to collect it via {{head()}} or {{first()}} and then process it on the driver.
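For illustration, a sketch of what consuming the current single-row result looks like (continuing the REPL session above; the {{pValue > 0.1}} threshold is only an example):

{code:scala}
// Collect the single-row result to the driver; all per-feature
// logic then happens locally, outside of Spark.
val row = chi.head()
val pValues = row.getAs[org.apache.spark.ml.linalg.Vector]("pValues")

// e.g. keep indices of features with pValue > 0.1 -- computed on the driver:
val kept = pValues.toArray.zipWithIndex.filter(_._1 > 0.1).map(_._2)
{code}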
What I really want to do is filter the DataFrame, e.g. {{pValue > 0.1}} or {{corr < 0.5}}, so I suggest flattening the output DataFrame of those tests (one row per feature).
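Until such a change lands, one workaround is to flatten the single-row result manually. This is only a sketch, assuming Spark 3.0+ where {{vector_to_array}} is available in {{org.apache.spark.ml.functions}}, and assumes {{spark.implicits._}} is in scope (as in the REPL):

{code:scala}
import org.apache.spark.ml.functions.vector_to_array
import org.apache.spark.sql.functions.posexplode

// Explode the pValues vector into one row per feature, so that
// per-feature filters such as pValue > 0.1 become plain DataFrame ops.
val flat = chi.select(
  posexplode(vector_to_array($"pValues")).as(Seq("featureIndex", "pValue")))

flat.filter($"pValue" > 0.1).show()
{code}

A built-in flattened output would make this boilerplate unnecessary and keep feature selection distributed instead of driver-side.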
Note: {{ANOVATest}} and {{FValueTest}} are newly added in 3.1.0, but {{ChiSquareTest}} and {{Correlation}} have been around for a long time.
Attachments
Issue Links
- is related to
  - SPARK-31492 flatten the result dataframe of FValueTest (Resolved)
  - SPARK-31494 flatten the result dataframe of ANOVATest (Resolved)
- links to