Description
Currently, we do not support subquery column aliases:
scala> sql("SELECT * FROM (SELECT 1 AS col1, 1 AS col2) t(a, b)").show
org.apache.spark.sql.catalyst.parser.ParseException:
mismatched input '(' expecting {<EOF>, ',', 'WHERE', 'GROUP', 'ORDER', 'HAVING', 'LIMIT', 'JOIN', 'CROSS', 'INNER', 'LEFT', 'RIGHT', 'FULL', 'NATURAL', 'LATERAL', 'WINDOW', 'UNION', 'EXCEPT', 'MINUS', 'INTERSECT', 'SORT', 'CLUSTER', 'DISTRIBUTE', 'ANTI'}(line 1, pos 45)

== SQL ==
SELECT * FROM (SELECT 1 AS col1, 1 AS col2) t(a, b)
---------------------------------------------^^^

  at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:217)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:114)
  at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:48)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:68)
  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
We could support this by referring to Redshift's FROM clause syntax:
http://docs.aws.amazon.com/redshift/latest/dg/r_FROM_clause30.html
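As a sketch of the intended semantics (column names here are illustrative), the `t(a, b)` syntax would rename the subquery's output columns positionally, making the two queries below equivalent; the second form already parses today and serves as a workaround:

```sql
-- Desired: alias the subquery's output columns in the FROM clause
SELECT * FROM (SELECT 1 AS col1, 1 AS col2) t(a, b)

-- Equivalent workaround that Spark already accepts: alias inside the subquery
SELECT * FROM (SELECT 1 AS a, 1 AS b) t
```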