Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

Description

The FileInputStreamCache currently defaults to holding only 10 input stream pairs (corresponding to 10 blocks). In many HBase workloads, the region server issues random reads against a local file that is 2-4 GB in size or even larger (hence 20+ blocks).

Given that the memory overhead of caching these input streams is low, and applications like HBase already raise their ulimit -n substantially (e.g., up to 32,000), I think we should raise the default cache size to 50 or more. In the rare case that an application uses local reads with hundreds of open blocks and cannot feasibly raise its ulimit -n, it can lower the limit appropriately.
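As a sketch of how a deployment could tune this cache once it is configurable, an hdfs-site.xml override on the client side might look like the following. The property name here is an assumption (it is not stated in this issue) and may differ across Hadoop versions; verify it against your version's hdfs-default.xml:

```xml
<!-- Client-side hdfs-site.xml fragment (sketch; property name is an
     assumption, check your Hadoop version's hdfs-default.xml). -->
<property>
  <name>dfs.client.read.shortcircuit.streams.cache.size</name>
  <!-- Cache up to 50 input stream pairs (~50 local blocks), per the
       default proposed in this issue. Applications that cannot raise
       ulimit -n could set a smaller value here instead. -->
  <value>50</value>
</property>
```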

Attachments

    • hdfs-4418.txt (2 kB, Todd Lipcon)

Activity

No work has yet been logged on this issue.

People

    • Assignee: Todd Lipcon
    • Reporter: Todd Lipcon
    • Votes: 0
    • Watchers: 5

Dates

    • Created:
    • Updated:
    • Resolved:

Development