Currently, Spark does not release a ShuffleBlockFetcherIterator until the whole task finishes. In some cases, this causes a memory leak.
An example is Shuffle -> map -> Coalesce(shuffle = false). Each ShuffleBlockFetcherIterator holds metadata about MapStatus (blocksByAddress), and each ShuffleMapTask may keep up to n (the number of shuffle partitions) ShuffleBlockFetcherIterators alive, because they are referenced by the onCompleteCallbacks of the TaskContext. In some cases this can consume a large amount of memory, and that memory is not released until the task finishes.
Actually, we can release each ShuffleBlockFetcherIterator as soon as it has been fully consumed.
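The idea can be sketched with a small iterator wrapper, similar in spirit to Spark's CompletionIterator: it runs a cleanup callback and drops its reference to the underlying iterator the moment the iterator is exhausted, rather than waiting for the task-completion callback. This is a hypothetical illustration, not the actual patch; the class name EagerReleaseIterator and the callback wiring are assumptions for the sketch.

```scala
// Hypothetical sketch: wraps an iterator (e.g. a ShuffleBlockFetcherIterator)
// and releases it as soon as it is fully consumed, instead of holding the
// reference until the task's onCompleteCallbacks fire.
class EagerReleaseIterator[A](private var sub: Iterator[A], completion: () => Unit)
    extends Iterator[A] {
  private var completed = false

  override def hasNext: Boolean = {
    val more = sub != null && sub.hasNext
    if (!more && !completed) {
      completed = true
      completion() // e.g. free fetched-block buffers and MapStatus metadata
      sub = null   // drop the reference so it can be GC'd before the task ends
    }
    more
  }

  override def next(): A = sub.next()
}

// Usage sketch: consuming the iterator triggers the release callback.
object Demo {
  def main(args: Array[String]): Unit = {
    var released = false
    val it = new EagerReleaseIterator(Iterator(1, 2, 3), () => released = true)
    val sum = it.sum
    println(s"sum=$sum released=$released")
  }
}
```

With this shape, the expensive state is reachable only through `sub`, so nulling that field after the final `hasNext` makes it collectible even though the TaskContext callback object itself lives until the task ends.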