
Details

    • Type: Sub-task
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Component/s: llap
    • Labels: llap

    Description

      Due to the way the cache is currently structured (metadata cache + disk cache), when data is read from ORC a lot of processing is still done with metadata, columns, streams, contexts, offsets, etc. just to locate the data that is already in cache. Essentially only the disk reads are eliminated; everything else proceeds as if we were reading an unknown file.
      We could have a better metadata representation that is saved during the first read - for example, (file, stripe) -> DiskRange[] (including cache buffers that are not locked), plus a multi-dimensional array per column, per stream, per RG pointing to offsets in the DiskRange array.
      That way, if such a structure is found in the cache, the reader can avoid all of that calculation and just do a straightforward conversion into results to pass to the decoder, plus disk reads for the missing parts.
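      A minimal sketch of what such a cached representation might look like. All names here (StripeKey, CachedStripeMetadata, the field layout) are hypothetical illustrations, not existing reader code; the only assumption about existing classes is Hive's DiskRange from the storage API.

      import org.apache.hadoop.hive.common.io.DiskRange;

      // Hypothetical key identifying one stripe of one file.
      final class StripeKey {
        final Object fileKey;
        final int stripeIx;
        StripeKey(Object fileKey, int stripeIx) {
          this.fileKey = fileKey;
          this.stripeIx = stripeIx;
        }
        @Override public boolean equals(Object o) {
          if (!(o instanceof StripeKey)) return false;
          StripeKey k = (StripeKey) o;
          return stripeIx == k.stripeIx && fileKey.equals(k.fileKey);
        }
        @Override public int hashCode() {
          return 31 * fileKey.hashCode() + stripeIx;
        }
      }

      // Hypothetical value: everything needed to locate the stripe's data on a
      // repeat read without re-deriving it from ORC metadata.
      final class CachedStripeMetadata {
        // Ranges covering the stripe; some may refer to (unlocked) cache buffers,
        // others describe gaps that still have to be read from disk.
        final DiskRange[] ranges;
        // offsets[column][stream][rowGroup] -> index into 'ranges' where that
        // row group's data for that stream starts.
        final int[][][] offsets;
        CachedStripeMetadata(DiskRange[] ranges, int[][][] offsets) {
          this.ranges = ranges;
          this.offsets = offsets;
        }
      }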

      This Java-object cache cannot participate in the main data eviction policy, so it should be small. With Java objects no cache locking is needed: we can evict an entry while someone is still using the structure, and it will be GCed once the last reference is released.
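      A minimal sketch of such a small, separate Java-object cache, assuming a simple count-bounded LRU map (StripeMetadataCache is a hypothetical name, reusing the StripeKey/CachedStripeMetadata types sketched above). Eviction just drops the map entry; readers that already hold a reference keep using the object, and the GC reclaims it later, so no per-entry locking or refcounting is needed.

      import java.util.LinkedHashMap;
      import java.util.Map;

      // Hypothetical small cache for CachedStripeMetadata objects. It sits outside
      // the main data-cache eviction policy, so it is bounded by entry count
      // rather than by memory accounting.
      final class StripeMetadataCache {
        private final int maxEntries;
        private final LinkedHashMap<StripeKey, CachedStripeMetadata> map;

        StripeMetadataCache(int maxEntries) {
          this.maxEntries = maxEntries;
          // Access-ordered map: the eldest entry is dropped once we exceed maxEntries.
          this.map = new LinkedHashMap<StripeKey, CachedStripeMetadata>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(
                Map.Entry<StripeKey, CachedStripeMetadata> eldest) {
              return size() > StripeMetadataCache.this.maxEntries;
            }
          };
        }

        // A reader that gets null falls back to the normal metadata processing path.
        synchronized CachedStripeMetadata get(StripeKey key) {
          return map.get(key);
        }

        // Called after the first read of a stripe to save the derived representation.
        synchronized void put(StripeKey key, CachedStripeMetadata value) {
          map.put(key, value);
        }
      }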

People

    • Assignee: Unassigned
    • Reporter: Sergey Shelukhin (sershe)

