Hadoop Common / HADOOP-5804

neither s3.block.size nor fs.s3.block.size is honoured

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.18.2, 0.19.1, 0.20.0
    • Fix Version/s: 0.21.0
    • Component/s: fs/s3
    • Labels: None
    • Environment: all

    Description

    S3FileSystem does not override FileSystem.getDefaultBlockSize(), so the S3 default block size is actually controlled by fs.local.block.size.

    As far as I can see, the S3-specific block size parameters (with or without the fs. prefix) are read nowhere.
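    As an illustration (not the committed patch), a minimal Java sketch of the kind of override that would make the parameter effective, assuming a 64 MB fallback default; the subclass name BlockSizeAwareS3FileSystem is hypothetical:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.s3.S3FileSystem;

        // Sketch only: an S3FileSystem that actually consults fs.s3.block.size,
        // falling back to an assumed 64 MB default when the key is unset.
        public class BlockSizeAwareS3FileSystem extends S3FileSystem {
          @Override
          public long getDefaultBlockSize() {
            Configuration conf = getConf();
            // Honour fs.s3.block.size instead of deferring to fs.local.block.size.
            return conf.getLong("fs.s3.block.size", 64 * 1024 * 1024);
          }
        }

    With an override along these lines, blocks written through the s3:// scheme would be sized by fs.s3.block.size rather than by fs.local.block.size.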

    Activity

    Tom White made changes:
    • Status: Resolved [ 5 ] → Closed [ 6 ]

    Tom White made changes:
    • Resolution: Fixed [ 1 ]
    • Status: Open [ 1 ] → Resolved [ 5 ]
    • Assignee: Tom White [ tomwhite ]
    • Fix Version/s: 0.21.0 [ 12313563 ]

    Mathieu Poumeyrol created the issue.

    People

    • Assignee: Tom White
    • Reporter: Mathieu Poumeyrol
    • Votes: 0
    • Watchers: 1

    Dates

    • Created:
    • Updated:
    • Resolved:
