In popular DBMSs like MySQL/PostgreSQL/Oracle, a runtime exception is thrown on an invalid cast, e.g. cast('abc' as boolean).
In Spark, by contrast, the result is silently converted to null. This is by design: we don't want a long-running job aborted by a casting failure. But there are scenarios where users want to make sure all data conversions are correct, the way they are in MySQL/PostgreSQL/Oracle.
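To illustrate the difference between the two semantics, here is a minimal Python sketch (not Spark code; the function name and `strict` flag are invented for illustration) contrasting Spark's current null-on-failure behavior with the fail-fast behavior of the DBMSs above:

```python
# Hypothetical illustration of the two cast semantics described above:
# permissive (return None on failure, as Spark does today) vs
# strict (raise an error, as MySQL/PostgreSQL/Oracle do).

def cast_to_int(value, strict=False):
    """Cast a string to int; return None or raise, depending on mode."""
    try:
        return int(value)
    except ValueError:
        if strict:
            # Strict mode mirrors the DBMS behavior: abort on invalid input.
            raise ValueError(f"invalid input for cast to int: {value!r}")
        # Permissive mode mirrors Spark's current behavior: silent null.
        return None

print(cast_to_int("abc"))               # permissive: None
print(cast_to_int("42", strict=True))   # strict, valid input: 42
```

With `strict=True` and an invalid input such as `"abc"`, the sketch raises instead of returning, which is the behavior users coming from those DBMSs expect.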
This issue has a bigger scope than https://issues.apache.org/jira/browse/SPARK-28741