Spark / SPARK-22805

Use aliases for StorageLevel in event logs


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Won't Fix
    • Affects Version/s: 2.1.2, 2.2.1
    • Fix Version/s: None
    • Component/s: Spark Core
    • Labels: None

    Description

      Fact 1: StorageLevel has a private constructor, so the list of predefined levels cannot be extended by users.
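
      For illustration only (a sketch, not part of the original report): in Scala, the predefined levels live on the StorageLevel companion object, the class constructor is private, and the only way to obtain another combination is the factory method, which yields a level with no predefined name.

      import org.apache.spark.storage.StorageLevel

      // Predefined, named level exposed by the companion object.
      val diskOnly: StorageLevel = StorageLevel.DISK_ONLY

      // The class constructor is private, so this does not compile in user code:
      // val custom = new StorageLevel(...)

      // Custom combinations come from the factory method and carry no alias.
      // Arguments: useDisk, useMemory, useOffHeap, deserialized, replication.
      val twoReplicas: StorageLevel = StorageLevel(true, false, false, true, 2)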

      Fact 2: The event log format uses a redundant representation for storage levels:

      >>> len('{"Use Disk": true, "Use Memory": false, "Deserialized": true, "Replication": 1}')
      79
      >>> len('DISK_ONLY')
      9
      

      Fact 3: This leads to excessive log sizes for workloads with many partitions, because every partition entry carries a storage level field that is 60-70 bytes larger than it needs to be.

      Suggested quick win: use the names of the predefined levels to identify them in the event log.
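
      A minimal sketch of the suggested quick win (hypothetical helper names, not the actual JsonProtocol code): map each predefined level to its name when writing the event log, and resolve the name back when reading. Levels without an alias would keep the verbose per-field form.

      import org.apache.spark.storage.StorageLevel

      object StorageLevelAliases {
        // Hypothetical lookup table over the levels predefined on the companion object.
        private val levelToName: Map[StorageLevel, String] = Map(
          StorageLevel.NONE -> "NONE",
          StorageLevel.DISK_ONLY -> "DISK_ONLY",
          StorageLevel.DISK_ONLY_2 -> "DISK_ONLY_2",
          StorageLevel.MEMORY_ONLY -> "MEMORY_ONLY",
          StorageLevel.MEMORY_ONLY_2 -> "MEMORY_ONLY_2",
          StorageLevel.MEMORY_ONLY_SER -> "MEMORY_ONLY_SER",
          StorageLevel.MEMORY_ONLY_SER_2 -> "MEMORY_ONLY_SER_2",
          StorageLevel.MEMORY_AND_DISK -> "MEMORY_AND_DISK",
          StorageLevel.MEMORY_AND_DISK_2 -> "MEMORY_AND_DISK_2",
          StorageLevel.MEMORY_AND_DISK_SER -> "MEMORY_AND_DISK_SER",
          StorageLevel.MEMORY_AND_DISK_SER_2 -> "MEMORY_AND_DISK_SER_2",
          StorageLevel.OFF_HEAP -> "OFF_HEAP"
        )

        // Write side: emit "DISK_ONLY" instead of the ~79-byte JSON object;
        // None means a custom level that keeps the verbose representation.
        def alias(level: StorageLevel): Option[String] = levelToName.get(level)

        // Read side: Spark already ships StorageLevel.fromString for this direction.
        def fromAlias(name: String): StorageLevel = StorageLevel.fromString(name)
      }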

      Attachments

        Activity


          People

            Assignee: Unassigned
            Reporter: Sergei Lebedev (lebedev)
            Votes: 1
            Watchers: 5

            Dates

              Created:
              Updated:
              Resolved:
