Spark SQL does not currently store any partition information in the catalog for data source tables, because it was originally designed to work with arbitrary files. This, however, causes a few issues for catalog tables:
1. Listing partitions for a large table (with millions of partitions) can be very slow on a cold start.
2. Heterogeneous partition naming schemes are not supported.
3. Partition pruning cannot be pushed down into the metastore.
This ticket tracks the work required to push partition tracking into the metastore. The change should be gated behind a feature flag.
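To illustrate issue 3 above, here is a toy sketch (not Spark code; all names are made up for illustration) of the difference between enumerating every partition and then filtering, versus applying the partition predicate inside a catalog so that only matching locations are ever listed:

```python
# Hypothetical catalog mapping partition spec -> storage location.
metastore = {
    "date=2016-10-%02d" % d: "/warehouse/events/date=2016-10-%02d" % d
    for d in range(1, 32)
}

def prune_in_metastore(predicate):
    """With catalog support: the partition predicate is evaluated in the
    catalog, so only matching locations need a filesystem listing."""
    return [loc for part, loc in metastore.items() if predicate(part)]

def list_all_then_filter(predicate):
    """Without catalog support: every partition location is enumerated
    up front (one filesystem listing per partition on a cold start),
    and the predicate is applied only afterwards."""
    all_locations = list(metastore.values())  # stands in for a full FS scan
    return [loc for loc in all_locations
            if predicate(loc.rsplit("/", 1)[-1])]

pred = lambda part: part >= "date=2016-10-30"
pruned = prune_in_metastore(pred)
print(len(pruned))  # only the matching partitions are listed, not all 31
```

With millions of partitions, the difference between the two paths is the difference between a handful of metastore calls and millions of filesystem listings.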