As discussed in [PR #16938](https://github.com/apache/spark/pull/16938), creating a managed table should throw an exception when its default location already exists.
Currently, some code paths are not consistent with this rule:
1. CREATE TABLE ... (PARTITIONED BY ...) succeeds when the default location already exists, for both Hive and datasource tables (i.e. with both HiveExternalCatalog and InMemoryCatalog).
2. CREATE TABLE ... (PARTITIONED BY ...) AS SELECT ... succeeds for a Hive table when the default location already exists.
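The intended rule can be sketched outside Spark as a plain filesystem check before the table directory is created. This is a minimal illustration, not Spark code: `create_managed_table`, the warehouse path, and the exception type are hypothetical names standing in for the catalog's location check.

```python
import shutil
import tempfile
from pathlib import Path


class TableLocationAlreadyExistsError(Exception):
    """Raised when a managed table's default location already exists."""


def create_managed_table(warehouse: Path, table_name: str) -> Path:
    """Create the default location (warehouse/<table_name>) for a managed table.

    Mirrors the intended rule: if the default location already exists on the
    filesystem, refuse to create the table instead of silently reusing it.
    """
    location = warehouse / table_name
    if location.exists():
        raise TableLocationAlreadyExistsError(
            f"Can not create the managed table '{table_name}': "
            f"the associated location '{location}' already exists."
        )
    location.mkdir(parents=True)
    return location


# Demo: the first create succeeds, the second must fail.
warehouse = Path(tempfile.mkdtemp())
create_managed_table(warehouse, "t1")
try:
    create_managed_table(warehouse, "t1")
    second_create_failed = False
except TableLocationAlreadyExistsError:
    second_create_failed = True
shutil.rmtree(warehouse)
print(second_create_failed)
```

Under this rule, both problem cases above (plain CREATE TABLE and CREATE TABLE ... AS SELECT) would hit the same existence check and raise, regardless of which external catalog backs the table.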