Details
- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Not A Problem
- Affects Version/s: 2.0.2, 2.1.2, 2.2.0
- Fix Version/s: None
- Component/s: None
Description
I save a DataFrame containing vectors in ORC format; when I read it back, the schema is changed: the vector column comes back as a plain struct instead of the vector type.
scala> import org.apache.spark.ml.linalg._
import org.apache.spark.ml.linalg._

scala> val data = Seq((1,Vectors.dense(1.0,2.0)), (2,Vectors.sparse(8, Array(4), Array(1.0))))
data: Seq[(Int, org.apache.spark.ml.linalg.Vector)] = List((1,[1.0,2.0]), (2,(8,[4],[1.0])))

scala> val df = data.toDF("i", "vec")
df: org.apache.spark.sql.DataFrame = [i: int, vec: vector]

scala> df.schema
res0: org.apache.spark.sql.types.StructType = StructType(StructField(i,IntegerType,false), StructField(vec,org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7,true))

scala> df.write.orc("/tmp/123")

scala> val df2 = spark.sqlContext.read.orc("/tmp/123")
df2: org.apache.spark.sql.DataFrame = [i: int, vec: struct<type: tinyint, size: int ... 2 more fields>]

scala> df2.schema
res3: org.apache.spark.sql.types.StructType = StructType(StructField(i,IntegerType,true), StructField(vec,StructType(StructField(type,ByteType,true), StructField(size,IntegerType,true), StructField(indices,ArrayType(IntegerType,true),true), StructField(values,ArrayType(DoubleType,true),true)),true))
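For reference, a minimal workaround sketch (not part of the original report): assuming the struct read back from ORC follows the ml VectorUDT serialization layout visible in the schema above (type 1 = dense, type 0 = sparse), the column can be converted back into ml vectors with a UDF. The df2 name refers to the DataFrame read back in the transcript; the toVector helper is hypothetical.

import org.apache.spark.ml.linalg.{Vector, Vectors}
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.{col, udf}

// Rebuild ml vectors from the struct<type, size, indices, values> column.
// Assumes type == 1 marks a dense vector and type == 0 a sparse one,
// matching the layout printed by df2.schema above.
val toVector = udf { row: Row =>
  if (row == null) {
    null.asInstanceOf[Vector]
  } else if (row.getByte(0) == 1) {
    Vectors.dense(row.getSeq[Double](3).toArray)   // dense: values only
  } else {
    Vectors.sparse(
      row.getInt(1),                               // sparse: size,
      row.getSeq[Int](2).toArray,                  // indices,
      row.getSeq[Double](3).toArray)               // values
  }
}

val restored = df2.withColumn("vec", toVector(col("vec")))
// restored.schema should report vec with the vector type again.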
Issue Links
- blocks
  - SPARK-20901 Feature parity for ORC with Parquet (Open)
- relates to
  - SPARK-17765 org.apache.spark.mllib.linalg.VectorUDT cannot be cast to org.apache.spark.sql.types.StructType (Resolved)