Affected Spark versions: 2.4.5, 2.4.6, 2.4.7, 3.0.0, 3.0.1
Q1. What are you trying to do? Articulate your objectives using absolutely no jargon.
I would like Spark SQL to support the extended syntax "SELECT * EXCEPT someColumn FROM ..",
to be able to select all columns except some in a SELECT clause.
It would be similar to the SQL syntax offered by some databases, such as Google BigQuery.
Search the web for "select * EXCEPT one column", and you will see that many developers have the same problem.
There are several typical examples where it is very helpful:
- You add a "count(*) AS countCol" column, then filter on it, for example with "HAVING countCol = 1",
  and then you want to select all columns EXCEPT this dummy column, whose value is always 1.
- The same with an analytical function: "... OVER (PARTITION BY ...) AS rankCol ... WHERE rankCol = 1",
  for example to get the latest row before a given time in a time-series table.
  These are the "time-travel" queries addressed by frameworks like Delta Lake.
- You copy some data from a table "t" to a corresponding table "t_snapshot", and back to "t".
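With the proposed syntax, the first two use cases could be written roughly as follows (table, key, and column names here are illustrative, not part of the proposal):

```sql
-- Use case 1: keep rows whose key occurs exactly once, then drop the helper column.
SELECT * EXCEPT (countCol) FROM (
  SELECT *, count(*) OVER (PARTITION BY key) AS countCol
  FROM t
)
WHERE countCol = 1;

-- Use case 2: keep the latest row per key before a given time, then drop the rank column.
SELECT * EXCEPT (rankCol) FROM (
  SELECT *, row_number() OVER (PARTITION BY key ORDER BY ts DESC) AS rankCol
  FROM t
  WHERE ts <= TIMESTAMP '2020-01-01 00:00:00'
)
WHERE rankCol = 1;
```

In both cases the outer query returns every column of "t" without having to list them.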
Q2. What problem is this proposal NOT designed to solve?
It is only SQL syntactic sugar.
It does not change the SQL execution plan or anything complex.
Q3. How is it done today, and what are the limits of current practice?
Today, you can either use the Dataset API, with .drop(someColumn),
or you need to hard-code all the columns manually in your SQL. Therefore your code is NOT generic (unless you use a SQL meta-code generator).
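For example, the current Dataset API workaround for the first use case looks roughly like this (a sketch only; it assumes a SparkSession and a table "t" with a "key" column):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().getOrCreate()

// Run the query, then drop the helper column with the Dataset API
// instead of hard-coding every remaining column in the SQL text.
val df = spark
  .sql("SELECT *, count(*) OVER (PARTITION BY key) AS countCol FROM t")
  .where(col("countCol") === 1)
  .drop("countCol") // keeps all columns except countCol
```

This works, but it forces you to leave pure SQL and mix in Dataset API calls, which is exactly what the proposed syntax would avoid.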
Q4. What is new in your approach and why do you think it will be successful?
It is NOT new... it is already a proven solution: Dataset.drop() in the Spark Dataset API, and the equivalent SQL syntax in Google BigQuery.
Q5. Who cares? If you are successful, what difference will it make?
It simplifies the life of developers, DBAs, data analysts, and end users.
It simplifies the development of SQL code, making it more generic for many tasks.
Q6. What are the risks?
There is VERY limited risk for Spark SQL, because the equivalent feature already exists in the Dataset API.
It is an extension of the SQL syntax, so the main risk is annoying some SQL IDE editors that do not recognize the new syntax.
Q7. How long will it take?
No idea. I guess someone experienced in the Spark SQL internals could do it relatively "quickly".
It is a kind of syntactic sugar to add as an ANTLR grammar rule, then transform into the corresponding Dataset API call.
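As a rough illustration only (the rule and token names below are hypothetical and do NOT come from Spark's actual grammar file), the change might amount to adding an optional EXCEPT clause to the star expression:

```antlr
// Hypothetical sketch, not Spark's real grammar rules.
starExpression
    : ASTERISK (EXCEPT '(' identifier (',' identifier)* ')')?
    ;
```

The parser would then rewrite the excluded columns into the equivalent of a Dataset .drop() on the resolved column list.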
Q8. What are the mid-term and final “exams” to check for success?
The three standard use cases given in question Q1.