While Spark exposes metrics for an individual state store, it still doesn't expose the overall memory usage of state (loadedMaps) in HDFSBackedStateStoreProvider.
The rationale for the patch is that state backed by HDFSBackedStateStoreProvider consumes more memory than the number we can get from the query status, because multiple versions of state are cached. The memory footprint can be much larger than the query status reports when the state store is getting a lot of updates: shallow-copying the map incurs only a small additional memory cost for the map entries and references, since row objects are still shared across versions. But if there are lots of updates between batches, fewer row objects are shared and more row objects exist in memory, consuming much more memory than we expect.
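The sharing behavior can be illustrated with a minimal sketch. Plain Java maps and byte arrays stand in for the provider's per-version maps and rows; the class and names below are hypothetical, not the provider's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: "rows" are byte arrays standing in for Spark's row objects.
public class StateVersionSketch {
    static Map<String, byte[]> v1 = new HashMap<>();
    static Map<String, byte[]> v2;

    public static void main(String[] args) {
        v1.put("a", new byte[64]);
        v1.put("b", new byte[64]);

        // Shallow copy: new map entries, but the same row objects.
        v2 = new HashMap<>(v1);
        // Updating one key replaces only that row; the other stays shared.
        v2.put("a", new byte[64]);

        System.out.println(v1.get("b") == v2.get("b")); // unchanged row is shared across versions
        System.out.println(v1.get("a") == v2.get("a")); // updated row is a new, unshared object
    }
}
```

With few updates, most rows stay shared and the extra cost per cached version is roughly the map overhead; with many updates, each version holds mostly distinct rows, so total memory grows well beyond what the single-version metric suggests.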
It would be better to expose this as well, so that end users can determine the actual memory usage of state.