CarbonData / CARBONDATA-4151

When data sampling is done on a large dataset using Spark's df.sample function, the size of the sampled table does not match the expected fraction of the non-sampled (raw) table's size


    Details

    • Type: Bug
    • Status: Open
    • Priority: Blocker
    • Resolution: Unresolved
    • Affects Version/s: 2.0.1
    • Fix Version/s: 2.1.0, 2.0.1
    • Component/s: core
    • Labels:
      None
    • Environment:
      Apache carbondata 2.0.1, spark 2.4.5, hadoop 2.7.2
    • Flags:
      Patch

      Description

      Hi Team,

      When we perform 5% and 10% data sampling on a large dataset using Spark's df.sample, the size of the sampled table does not match the corresponding fraction of the non-sampled (raw) table's size.

      Our raw table is around 11 GB, so with 5% and 10% sampling the sampled tables should come out to roughly 550 MB and 1.1 GB. However, in our case they come out to 1.5 GB and 3 GB, which is about 3 times higher than expected.
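      For reference, here is a minimal sketch of how our sampling run looks (the table names, seed, and write options below are illustrative assumptions, not the exact statements from the failing run):

      {code:scala}
      import org.apache.spark.sql.SparkSession

      object SamplingRepro {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder()
            .appName("CarbonSamplingRepro")
            .enableHiveSupport()
            .getOrCreate()

          // Read the raw carbon table (~11 GB on disk in our setup).
          val raw = spark.sql("SELECT * FROM raw_table")

          // Bernoulli sampling without replacement; fraction = 0.05 should
          // retain roughly 5% of the rows.
          val sampled = raw.sample(withReplacement = false, fraction = 0.05, seed = 42L)

          // Write the sample back as a carbon table; the on-disk size of this
          // table is what comes out ~3x larger than fraction * raw table size.
          sampled.write
            .format("carbondata")
            .option("tableName", "raw_table_sample_5pct")
            .save()
        }
      }
      {code}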

      Could you please check and help us understand where the issue is?


            People

            • Assignee:
              Unassigned
            • Reporter:
              Amaranadh Vayyala (amarvayyala)
            • Votes:
              3
            • Watchers:
              2
