Details
- Improvement
- Status: Resolved
- Major
- Resolution: Fixed
- 5.0-alpha
- None
Description
When obtaining column information for a Delta table, the columns cannot be read directly from the catalog, because DeltaCatalog does not save column information there. We first determine whether the table is a Delta table, using a check function provided by the Delta SDK. If it is, we read the table through spark.table to get the schema; Spark derives the schema by scanning the metadata (transaction log) under the table path.
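A minimal sketch of the flow above, assuming PySpark with the Delta Lake package. `DeltaTable.isDeltaTable` is the Delta SDK check referred to in the text; the helper names (`delta_columns`, `columns_from_schema`) and the fallback behavior are illustrative, not the actual implementation.

```python
def columns_from_schema(schema_fields):
    """Format (name, type_string) pairs into simple column descriptors."""
    return [{"name": n, "type": t} for n, t in schema_fields]

def delta_columns(spark, table_name, table_path):
    # Hypothetical helper: the catalog holds no column info for Delta
    # tables, so check with the Delta SDK first, then read the schema
    # via spark.table, which makes Spark scan the transaction-log
    # metadata under the table path.
    from delta.tables import DeltaTable
    if not DeltaTable.isDeltaTable(spark, table_path):
        return None  # not a Delta table; caller falls back to the catalog
    df = spark.table(table_name)
    return columns_from_schema(
        (f.name, f.dataType.simpleString()) for f in df.schema.fields
    )
```

The Spark-free `columns_from_schema` step is split out so the formatting of column info can be reused and tested without a running session.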
Also because of DeltaCatalog, Delta tables do not support the SHOW CREATE TABLE statement: DeltaCatalog performs some checks and rejects this SQL. Here we again determine in advance whether the table is a Delta table; if it is, we assemble a DDL string directly from the table's location and name and return it.
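The DDL assembly can be sketched as plain string building from the table name, location, and column list; the function name and exact output layout below are assumptions, not the patch's actual code.

```python
def build_show_create(table_name, location, columns):
    """Assemble a CREATE TABLE statement for a Delta table from its
    name, storage location, and (name, type) column pairs, since
    DeltaCatalog rejects SHOW CREATE TABLE itself."""
    cols = ",\n  ".join(f"{n} {t}" for n, t in columns)
    return (
        f"CREATE TABLE {table_name} (\n  {cols}\n)\n"
        f"USING delta\nLOCATION '{location}'"
    )
```

For example, `build_show_create("events", "/data/events", [("id", "int"), ("dt", "string")])` yields a `CREATE TABLE events (...) USING delta LOCATION '/data/events'` statement.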
Limitation: partition columns are not handled here, so they are not recognized in the result. Delta does not manage its partitions through the catalog either; it obtains them in real time by scanning the metadata under the table path, so reading data is unaffected. The only feature impacted is snapshot partition construction.
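To illustrate where the partition information actually lives: Delta's `DESCRIBE DETAIL` command surfaces `partitionColumns` from the transaction-log metadata, so a caller that needs partition-awareness (e.g. for snapshot partition construction) could merge it into the column list. Both helper names below are hypothetical, and this is a possible workaround sketch, not part of the change described.

```python
def partition_columns(spark, table_name):
    # Delta keeps partition info in its own transaction log, not the
    # catalog; DESCRIBE DETAIL exposes it as `partitionColumns`.
    row = spark.sql(f"DESCRIBE DETAIL {table_name}").head()
    return list(row["partitionColumns"])

def merge_partition_info(columns, partition_cols):
    """Mark which column descriptors are partition columns."""
    pset = set(partition_cols)
    return [dict(c, partition=(c["name"] in pset)) for c in columns]
```

`merge_partition_info` is pure, so the catalog-independent part of the logic can be tested without a Spark session.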