Hadoop Common / HADOOP-5861

s3n files are not getting split by default


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.19.1
    • Fix Version/s: 0.21.0
    • Component/s: fs/s3
    • Labels: None
    • Environment: ec2

    • Hadoop Flags: Incompatible change, Reviewed
    • Release Note: Files stored on the native S3 filesystem (s3n:// URIs) now report a block size determined by the fs.s3n.block.size property (default 64 MB).
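
    For illustration, a minimal sketch of overriding that block size from job code. Only the fs.s3n.block.size property name comes from the release note; the class name and the 128 MB value are made up for the example.

        import org.apache.hadoop.conf.Configuration;

        public class S3nBlockSizeExample {
            public static void main(String[] args) {
                Configuration conf = new Configuration();
                // After this change the default is 64 MB; override it per job if needed.
                // The 128 MB value below is purely illustrative.
                conf.setLong("fs.s3n.block.size", 128L * 1024 * 1024);
                System.out.println(conf.getLong("fs.s3n.block.size", 64L * 1024 * 1024));
            }
        }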

    Description

      Running with the stock EC2 scripts against Hadoop 0.19, I tried to run a job against a directory containing 4 text files, each about 2 GB in size. These were not split (only 4 mappers were run).

      The reason seems to have two parts, primarily that S3N files report a block size of 5 GB. This causes FileInputFormat.getSplits to fall back on the goal size (which is totalSize / conf.get("mapred.map.tasks")). The goal size in this case was 4 GB, hence the files were not split. This is not an issue with other file systems, since the block size they report is much smaller and the splits end up based on block size rather than goal size.

      Can we make S3N files report a more reasonable block size?
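
      A simplified sketch of the split sizing described above (not the exact FileInputFormat source, but essentially the max(minSize, min(goalSize, blockSize)) rule), showing why a 5 GB reported block size leaves each 2 GB file as a single split while a 64 MB block size would split it into ~32 pieces:

          public class SplitSizeSketch {

              // FileInputFormat-style rule: max(minSize, min(goalSize, blockSize))
              static long computeSplitSize(long goalSize, long minSize, long blockSize) {
                  return Math.max(minSize, Math.min(goalSize, blockSize));
              }

              public static void main(String[] args) {
                  long GB = 1024L * 1024 * 1024;
                  long goalSize = 4 * GB;   // totalSize / mapred.map.tasks, as in the report
                  long minSize  = 1;

                  // s3n reports a 5 GB block size, so the split size collapses to the
                  // 4 GB goal size and each 2 GB file stays in one split.
                  System.out.println(computeSplitSize(goalSize, minSize, 5 * GB));

                  // With a 64 MB block size the same 2 GB file would yield ~32 splits.
                  System.out.println(computeSplitSize(goalSize, minSize, 64L * 1024 * 1024));
              }
          }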

      Attachments

        1. hadoop-5861.patch
          4 kB
          Thomas White
        2. hadoop-5861-v2.patch
          4 kB
          Thomas White

        Activity

          People

            Assignee: tomwhite Thomas White
            Reporter: jsensarma Joydeep Sen Sarma
            Votes: 0
            Watchers: 5
