Hadoop HDFS / HDFS-2722

HttpFs shouldn't be using an int for block size

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.23.1
    • Component/s: hdfs-client
    • Labels: None
    • Target Version/s:
    • Hadoop Flags: Reviewed

      Description

      ./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java: blockSize = fs.getConf().getInt("dfs.block.size", 67108864);

      It should instead use dfs.blocksize, and the value should be a long.

      I'll post a patch for this after HDFS-1314 is resolved, since that changes the internal behavior a bit (the read should use getLongBytes, not just getLong, to gain formatting advantages).
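
      For illustration, a minimal sketch of the corrected read, assuming the Configuration.getLongBytes(String, long) accessor described above becomes available once HDFS-1314 lands (surrounding variable names are hypothetical):

          // Current (buggy): old key name, and an int cannot hold block sizes of 2 GB or more
          // blockSize = fs.getConf().getInt("dfs.block.size", 67108864);

          // Proposed: read dfs.blocksize as a long (67108864 bytes = 64 MB default)
          long blockSize = fs.getConf().getLongBytes("dfs.blocksize", 67108864L);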


          Activity

          No work has yet been logged on this issue.

            People

            • Assignee: Harsh J
            • Reporter: Harsh J
            • Votes: 0
            • Watchers: 3
