1) Spark SQL fails to analyze this query: select db1.t1.i1 from db1.t1, db2.t1
Most other database systems support this (e.g. DB2, Oracle, MySQL).
Note: In DB2 and Oracle, the notation is <schema>.<tablename>.<columnname>
2) Another scenario where a fully qualified name is useful is the following:
// current database is db1.
select t1.i1 from t1, db2.t1
If the i1 column exists in both tables, db1.t1 and db2.t1, the analyzer throws an error during column resolution because the reference is ambiguous.
Worse, suppose the user intends to retrieve i1 from db1.t1, but only db2.t1 actually has an i1 column. The query then resolves i1 to db2.t1 and succeeds silently instead of throwing an error.
One way to avoid the confusion is to explicitly specify the fully qualified name db1.t1.i1.
For example: select db1.t1.i1 from t1, db2.t1
There is a workaround for both issues, which is to use a table alias.
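As a sketch of that workaround, using the same tables and column from the examples above, each table can be given an alias and columns qualified by the alias instead of the database name:

```sql
-- Workaround: alias each table, then qualify the column by alias.
-- a.i1 unambiguously refers to i1 in db1.t1, so no db.table.column
-- reference is needed.
SELECT a.i1
FROM db1.t1 a, db2.t1 b
```

Aliases sidestep both problems because the analyzer resolves a.i1 against a single relation, regardless of which database the current session is using.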