Description
There are use cases where content in CSV format is dumped into external storage as one of the columns. For example, CSV records are stored together with other meta-info in Kafka. The current Spark API doesn't allow parsing such columns directly: the existing csv() method requires a dataset with exactly one string column, which makes it inconvenient to parse a CSV column inside a dataset with many columns. This ticket aims to add a new function, similar to from_json(), with the following signature in Scala:
def from_csv(e: Column, schema: StructType, options: Map[String, String]): Column
and, for use from Python, R and Java:
def from_csv(e: Column, schema: String, options: java.util.Map[String, String]): Column
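Assuming the proposed function lands in org.apache.spark.sql.functions with the Scala signature above, usage might look like the following sketch. It mirrors the Kafka scenario from the description: a dataset where one column carries CSV content alongside other metadata, parsed in place without reducing the dataset to a single string column.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.from_csv
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}

val spark = SparkSession.builder()
  .master("local[1]")
  .appName("from_csv sketch")
  .getOrCreate()
import spark.implicits._

// A dataset where one column carries CSV content next to other meta-info,
// e.g. records consumed from Kafka ("kafka_key" is a hypothetical column name).
val df = Seq(
  ("key1", "1,Alice"),
  ("key2", "2,Bob")
).toDF("kafka_key", "csv_value")

// Schema of the embedded CSV records.
val schema = new StructType()
  .add("id", IntegerType)
  .add("name", StringType)

// Parse the CSV column in place; the other columns remain available.
val parsed = df.select(
  $"kafka_key",
  from_csv($"csv_value", schema, Map.empty[String, String]).as("rec")
)

parsed.select($"kafka_key", $"rec.id", $"rec.name").show()

spark.stop()
```

The options map is empty here; the same parameter would accept the usual CSV reader options (e.g. a custom delimiter) per record.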