Description
Currently, we do not support column aliases for an aliased relation:
```
scala> Seq((1, 2), (2, 0)).toDF("id", "value").createOrReplaceTempView("t1")
scala> Seq((1, 2), (2, 0)).toDF("id", "value").createOrReplaceTempView("t2")
scala> sql("SELECT * FROM (t1 JOIN t2)")
scala> sql("SELECT * FROM (t1 INNER JOIN t2 ON t1.id = t2.id) AS t(a, b, c, d)").show
org.apache.spark.sql.catalyst.parser.ParseException:
mismatched input '(' expecting {<EOF>, ',', 'WHERE', 'GROUP', 'ORDER', 'HAVING', 'LIMIT', 'JOIN', 'CROSS', 'INNER', 'LEFT', 'RIGHT', 'FULL', 'NATURAL', 'LATERAL', 'WINDOW', 'UNION', 'EXCEPT', 'MINUS', 'INTERSECT', 'SORT', 'CLUSTER', 'DISTRIBUTE', 'ANTI'}(line 1, pos 54)

== SQL ==
SELECT * FROM (t1 INNER JOIN t2 ON t1.id = t2.id) AS t(a, b, c, d)
------------------------------------------------------^^^

  at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:217)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:114)
  at org.apache.spark.sql.execution.SparkSqlParser.parse(Spa
```
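Until the parser accepts a column-alias list on an aliased join, one workaround (a sketch; the names `a`, `b`, `c`, `d` are just illustrative) is to rename the join output in the projection instead of in the FROM clause:

```sql
-- Workaround sketch: alias each column explicitly in the SELECT list,
-- rather than via "... AS t(a, b, c, d)" on the joined relation
SELECT t1.id AS a, t1.value AS b, t2.id AS c, t2.value AS d
FROM t1 INNER JOIN t2 ON t1.id = t2.id
```

With the proposed feature, the same renaming could be expressed once, at the relation level, as `... AS t(a, b, c, d)`.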
We could support this by referring to how Amazon Redshift defines its FROM clause:
http://docs.aws.amazon.com/redshift/latest/dg/r_FROM_clause30.html