Description
Executed at the clean tip of the master branch, with all default settings:
scala> spark.sql("SELECT * FROM range(1)")
res1: org.apache.spark.sql.DataFrame = [id: bigint]
scala> spark.sql("SELECT * FROM RANGE(1)")
org.apache.spark.sql.AnalysisException: could not resolve `RANGE` to a table-valued function; line 1 pos 14
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
at org.apache.spark.sql.catalyst.analysis.ResolveTableValuedFunctions$$anonfun$apply$1.applyOrElse(ResolveTableValuedFunctions.scala:126)
at org.apache.spark.sql.catalyst.analysis.ResolveTableValuedFunctions$$anonfun$apply$1.applyOrElse(ResolveTableValuedFunctions.scala:106)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:62)
...
I believe table-valued function resolution should be case-insensitive, consistent with how Spark SQL resolves other function and table identifiers.
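A minimal sketch of the kind of fix this suggests: normalize the parsed function name before looking it up in the registry. The `builtins` map and `resolve` method below are hypothetical stand-ins for the internal structures in `ResolveTableValuedFunctions`, not the actual Spark code.

```scala
object TvfResolutionSketch {
  // Hypothetical registry keyed by lower-case name, standing in for the
  // built-in table-valued function table inside ResolveTableValuedFunctions.
  private val builtins: Map[String, Long => Seq[Long]] =
    Map("range" -> (n => 0L until n))

  // Case-insensitive lookup: lower-case the identifier before the map lookup,
  // so RANGE, Range, and range all resolve to the same function.
  def resolve(name: String): Option[Long => Seq[Long]] =
    builtins.get(name.toLowerCase)
}
```

With this normalization, `resolve("RANGE")` and `resolve("range")` both succeed, matching the case-insensitive behavior expected with the default analyzer settings.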