SPARK-22540: HighlyCompressedMapStatus's avgSize is incorrect

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.3.0
    • Fix Version/s: 2.2.1, 2.3.0
    • Component/s: Spark Core
    • Labels: None

      Description

      The calculation of HighlyCompressedMapStatus's avgSize is incorrect.
      Currently it is computed as "sum of small block sizes / count of all non-empty blocks", but the count of all non-empty blocks includes huge blocks as well as small ones. The divisor should be the count of small blocks only.
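      For illustration only, here is a minimal Scala sketch of the intended calculation (the object, method name, threshold, and sample sizes below are hypothetical and not Spark's actual HighlyCompressedMapStatus code):

      {code:scala}
      // Minimal sketch of the corrected average (illustrative, not Spark source).
      // Huge blocks keep their exact sizes elsewhere, so they must not be
      // counted in the divisor used for averaging the remaining small blocks.
      object AvgSizeSketch {
        def avgSmallBlockSize(blockSizes: Array[Long], hugeThreshold: Long): Long = {
          var smallBlockCount = 0L
          var smallBlockTotal = 0L
          blockSizes.foreach { size =>
            // Only non-empty blocks below the huge-block threshold contribute.
            if (size > 0 && size < hugeThreshold) {
              smallBlockCount += 1
              smallBlockTotal += size
            }
          }
          // The buggy variant divided by the count of ALL non-empty blocks
          // (small + huge), underestimating the average whenever huge blocks exist.
          if (smallBlockCount > 0) smallBlockTotal / smallBlockCount else 0L
        }

        def main(args: Array[String]): Unit = {
          // Two small blocks (100, 200), one empty block, one huge block.
          val sizes = Array(100L, 200L, 0L, 5000000L)
          println(avgSmallBlockSize(sizes, hugeThreshold = 1000000L)) // prints 150
        }
      }
      {code}

      With the buggy divisor, the example above would report 300 / 3 = 100 instead of 300 / 2 = 150 as the average size of the remaining small blocks.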


            People

            • Assignee: yucai yucai
            • Reporter: yucai yucai
            • Votes: 0
            • Watchers: 2

              Dates

              • Created:
              • Updated:
              • Resolved: