Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version: 2.3.2
Description
UnresolvedAttribute.sql() output is incorrectly escaped for nested columns
import org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute

// The correct output is a.b (without backticks) or `a`.`b`.
$"a.b".expr.asInstanceOf[UnresolvedAttribute].sql
// res1: String = `a.b`

// Parsing is correct; the bug is localized to sql()
$"a.b".expr.asInstanceOf[UnresolvedAttribute].nameParts
// res2: Seq[String] = ArrayBuffer(a, b)
The likely culprit is that the sql() implementation does not account for nameParts containing more than one element: it quotes the joined name as a single identifier instead of quoting each part separately.
override def sql: String = name match {
  case ParserUtils.escapedIdentifier(_) | ParserUtils.qualifiedEscapedIdentifier(_, _) => name
  case _ => quoteIdentifier(name)
}
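A minimal, self-contained sketch of the per-part quoting this suggests. The `quoteIdentifier` stand-in and the `sqlOf` name are illustrative, not Spark's actual API; the point is only that quoting each element of nameParts before joining yields `a`.`b` rather than `a.b`:

```scala
object QuoteSketch {
  // Illustrative stand-in for Spark's quoteIdentifier: wrap the part in
  // backticks and double any embedded backticks.
  def quoteIdentifier(part: String): String =
    "`" + part.replace("`", "``") + "`"

  // Hypothetical fixed sql(): quote each name part individually, then join
  // with dots, so Seq("a", "b") renders as `a`.`b` instead of `a.b`.
  def sqlOf(nameParts: Seq[String]): String =
    nameParts.map(quoteIdentifier).mkString(".")

  def main(args: Array[String]): Unit = {
    println(sqlOf(Seq("a", "b"))) // nested column: two quoted parts
    println(sqlOf(Seq("a.b")))    // single part whose name contains a dot
  }
}
```

Under this sketch, a genuinely nested column and a single column whose name contains a dot produce distinct SQL, which is the distinction the current sql() loses.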