Description
The code below does not work properly in Spark Connect:
>>> sdf = spark.range(10)
>>> spark.createDataFrame(sdf.tail(5), sdf.schema)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/.../spark/python/pyspark/sql/connect/dataframe.py", line 94, in __repr__
    return "DataFrame[%s]" % (", ".join("%s: %s" % c for c in self.dtypes))
  File "/.../spark/python/pyspark/sql/connect/dataframe.py", line 162, in dtypes
    return [(str(f.name), f.dataType.simpleString()) for f in self.schema.fields]
  File "/.../spark/python/pyspark/sql/connect/dataframe.py", line 1346, in schema
    self._schema = self._session.client.schema(query)
  File "/.../spark/python/pyspark/sql/connect/client.py", line 614, in schema
    proto_schema = self._analyze(method="schema", plan=plan).schema
  File "/.../spark/python/pyspark/sql/connect/client.py", line 755, in _analyze
    self._handle_error(rpc_error)
  File "/.../spark/python/pyspark/sql/connect/client.py", line 894, in _handle_error
    raise convert_exception(info, status.message) from None
pyspark.errors.exceptions.connect.AnalysisException: [NULLABLE_COLUMN_OR_FIELD] Column or field `id` is nullable while it's required to be non-nullable.
whereas it works properly in regular PySpark:
>>> sdf = spark.range(10)
>>> spark.createDataFrame(sdf.tail(5), sdf.schema).show()
+---+
| id|
+---+
|  5|
|  6|
|  7|
|  8|
|  9|
+---+
Issue Links
- is duplicated by SPARK-42679: createDataFrame doesn't work with non-nullable schema (Resolved)