Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 3.4.0
- Labels: None
Description
In Spark 3.1.0, we added the ability for streaming queries to read from and write to catalog tables (read: SPARK-32885 / write: SPARK-32896).
When a streaming query is executed through these APIs, the table identifier for the source and sink is only "conditionally" exposed in the logical plan, depending on DSv1 vs. DSv2 and on source vs. sink.
Since the logical plan is the only way to get the table information, we would like to change the relevant logical nodes to carry and expose the table information, specifically the table identifier.
Attachments
Issue Links
- breaks
  - SPARK-41040 Self-union streaming query may fail when using readStream.table (Resolved)
- links to