Description
The Ignite Spark SQL interface currently takes just a “table name” parameter,
which it uses to supply a Spark dataset with data from the underlying Ignite
SQL table of that name.
To do this it loops through every cache and uses the first one with the given
table name [1]. This causes issues when multiple tables with the same name are
registered in different schemas, since only one of them can be reached from
Spark. We could either:
1. Pass an extra parameter through the Ignite Spark data source which
optionally specifies the schema name.
2. Support namespacing in the existing table name parameter, i.e.
“schemaName.tableName”.
[1] https://github.com/apache/ignite/blob/ca973ad99c6112160a305df05be9458e29f88307/modules/spark/src/main/scala/org/apache/ignite/spark/impl/package.scala#L119
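Option 2 could be implemented by splitting the table name parameter before the
cache lookup. A minimal sketch of that parsing step is below; the names
(QualifiedName, parse) are illustrative only and are not part of the Ignite
API:

```scala
// Hypothetical helper: split an optionally schema-qualified table name
// of the form "schemaName.tableName" into its two parts.
// A bare "tableName" keeps the current behaviour (no schema constraint).
case class QualifiedName(schema: Option[String], table: String)

object QualifiedName {
  def parse(name: String): QualifiedName = name.split('.') match {
    case Array(schema, table) => QualifiedName(Some(schema), table)
    case _                    => QualifiedName(None, name)
  }
}
```

The cache-scanning code referenced in [1] would then match on both the table
name and, when present, the schema, instead of stopping at the first table
name match.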
Issue Links
- is related to IGNITE-12141 Ignite Spark Integration Support Schema on Table Write (Open)