HBASE-16973: Revisiting default value for hbase.client.scanner.caching

    Details

    • Type: Task
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels:
      None

      Description

      We observed the following log for a long-running scan:

      2016-10-30 08:51:41,692 WARN  [B.defaultRpcServer.handler=50,queue=12,port=16020] ipc.RpcServer:
      (responseTooSlow-LongProcessTime): {"processingtimems":24329,
      "call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)",
      "client":"11.251.157.108:50415","scandetails":"table: ae_product_image region: ae_product_image,494:
      ,1476872321454.33171a04a683c4404717c43ea4eb8978.","param":"scanner_id: 5333521 number_of_rows: 2147483647
      close_scanner: false next_call_seq: 8 client_handles_partials: true client_handles_heartbeats: true",
      "starttimems":1477788677363,"queuetimems":0,"class":"HRegionServer","responsesize":818,"method":"Scan"}
      

      From this we found that "number_of_rows" is as large as Integer.MAX_VALUE.

      We also observed a long filter list on the customized scan. After checking the application code, we confirmed there is no Scan.setCaching call or hbase.client.scanner.caching setting on the client side, so it turns out that with the default value the caching for the Scan will be Integer.MAX_VALUE, which is a big surprise.

      After checking the code and commit history, I found it was HBASE-11544 that changed HConstants.DEFAULT_HBASE_CLIENT_SCANNER_CACHING from 100 to Integer.MAX_VALUE, and the release note there includes the following:

      Scan caching default has been changed to Integer.Max_Value 
      This value works together with the new maxResultSize value from HBASE-12976 (defaults to 2MB) 
      Results returned from server on basis of size rather than number of rows 
      Provides better use of network since row size varies amongst tables
      

      I'm afraid this fails to consider the case of a scan with filters, which may scan many rows but return only a small result.

      What's more, we still have the following comment/code in Scan.java:

        /*
         * -1 means no caching
         */
        private int caching = -1;
      

      But the implementation does not actually follow this (instead of no caching, we cache Integer.MAX_VALUE rows...).

      So here I'd like to bring up two points:
      1. Change the default value of HConstants.DEFAULT_HBASE_CLIENT_SCANNER_CACHING back to some small value like 128.
      2. Enforce the documented "no caching" semantics.
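
      In the meantime, an application can sidestep the surprise by bounding the caching explicitly on the client side. Below is a minimal sketch against the public HBase client API, reusing the table name from the log above; the 100-row cap is an arbitrary illustrative value, not a recommendation:

        import java.io.IOException;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hbase.HBaseConfiguration;
        import org.apache.hadoop.hbase.TableName;
        import org.apache.hadoop.hbase.client.Connection;
        import org.apache.hadoop.hbase.client.ConnectionFactory;
        import org.apache.hadoop.hbase.client.Result;
        import org.apache.hadoop.hbase.client.ResultScanner;
        import org.apache.hadoop.hbase.client.Scan;
        import org.apache.hadoop.hbase.client.Table;
        import org.apache.hadoop.hbase.util.Bytes;

        public class BoundedScanExample {
          public static void main(String[] args) throws IOException {
            Configuration conf = HBaseConfiguration.create();
            // Option 1: process-wide default for every scan from this client.
            conf.setInt("hbase.client.scanner.caching", 100);

            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("ae_product_image"))) {
              Scan scan = new Scan();
              // Option 2: per-scan override; any value > 0 takes precedence
              // over the configuration default.
              scan.setCaching(100);
              try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result result : scanner) {
                  System.out.println(Bytes.toString(result.getRow()));
                }
              }
            }
          }
        }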


          Activity

          ndimiduk Nick Dimiduk added a comment -

          Ouch.

          carp84 Yu Li added a comment -

          For 1.1, based on our observations online, it should be the former case...

          ndimiduk Nick Dimiduk added a comment -

          Trying to understand the state of things here for 1.1. Looks like HBASE-11544 made it, meaning DEFAULT_HBASE_CLIENT_SCANNER_CACHING = Integer.MAX_VALUE; thus the default limit based on total number of rows is effectively unbounded. We also have HBASE-12976, so DEFAULT_HBASE_CLIENT_SCANNER_MAX_RESULT_SIZE = 2 * 1024 * 1024. hbase.client.scanner.timeout.period is 1m in hbase-default.xml. Does this mean that for a highly selective filter we'd end up hitting a timeout and throwing away any partial results before the 2MB is filled? Or does it mean we go back to the client after 1m with whatever we've accumulated so far? The former is a pretty bad situation and warrants some comment about the sharp edge. I'm against changing the default this late into the maintenance cycle, but a table in the book that breaks things out by release branch would help users stumbling through the murk.
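
          For context, the three limits Nick enumerates correspond to these client configuration keys. A sketch that simply sets them to the cited defaults, to show where each knob lives (illustrative, not a recommendation):

            import org.apache.hadoop.conf.Configuration;
            import org.apache.hadoop.hbase.HBaseConfiguration;

            public class ScanLimitDefaults {
              public static Configuration withCitedDefaults() {
                Configuration conf = HBaseConfiguration.create();
                // Row-count limit per scan RPC: effectively unbounded since HBASE-11544.
                conf.setInt("hbase.client.scanner.caching", Integer.MAX_VALUE);
                // Size limit per scan RPC: 2MB since HBASE-12976.
                conf.setLong("hbase.client.scanner.max.result.size", 2L * 1024 * 1024);
                // Client scanner timeout: one minute.
                conf.setInt("hbase.client.scanner.timeout.period", 60000);
                return conf;
              }
            }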

          carp84 Yu Li added a comment -

          Changing the issue type from Bug to Task since, after all the discussion, there's actually no bug to fix here, but there is still some javadoc/refguide work to complete.

          carp84 Yu Li added a comment -

          Maybe mark as Resolved after all sub-tasks are closed?

          mantonov Mikhail Antonov added a comment -

          Assuming this jira would be... Invalid or Won't Fix then, or...?

          mantonov Mikhail Antonov added a comment -

          All right - that's what I think is the right thing to do. Thanks for checking!

          carp84 Yu Li added a comment -

          I think this default value change, if we decide on it, should not go to 1.3 (let me know if I'm missing here).

          Thanks for chiming in sir; per discussion there won't be any change to the default value, only updates to the javadoc/refguide, JFYI.

          mantonov Mikhail Antonov added a comment -

          "I think changing default at this stage in 1.1.x and 1.2.x lifecycle, it would surprise more than it would help changing the default but we could add a notice on downloads page and to release notes on this finding of Yu Li's?"

          I think above applies to 1.3 here as well. Current default / javadoc are misleading, but changing the default for that - seems like it could affect in negative ways people doing scans over pretty small rows or something. W/o any changes in their client config would be bad to debug and possibly seen as a regression? +1 for documentation and javadoc updates on 1.3. I think this default value change, if we decide on it, should not go to 1.3 (let me know if I'm missing here).

          Conceptually I think Phil Yang rightly pointed out that using size-time based limits provides for better bandwidth utilization and dynamic adaptations to workload/schema changes. Maybe there's more here to discuss.

          stack stack added a comment -

          Yes Yu Li

          carp84 Yu Li added a comment -

          Thank you sir. I guess we should leave this one open until HBASE-16987 is done, right?

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #1897 (See https://builds.apache.org/job/HBase-Trunk_matrix/1897/)

          HBASE-16973 Revisiting default value for hbase.client.scanner.caching (stack: rev 9cfebf49339f1ce167fbee02b6d6d498eacc0ee5)
          • (edit) src/main/asciidoc/_chapters/upgrading.adoc

          Revert "HBASE-16973 Revisiting default value for (stack: rev b2d1e21e77644c9b0b5e83dcb662e5b2f71df072)
          • (edit) src/main/asciidoc/_chapters/upgrading.adoc
          stack stack added a comment -

          I filed a subtask for the above.

          carp84 Yu Li added a comment -

          Lets make the Phil Yang story the way it is going forward. File an issue to update refguide, javadoc, and unit tests all to enforce "...Setting cache is an old style to limit size and time...".

          It seems this is the only work left here besides HBASE-16970 and HBASE-16986? Or anything else? Thanks.

          carp84 Yu Li added a comment -

          Thanks for chiming in Enis Soztutar

          Yes, we have 3 kinds of limits for scan, and the rows limit is effectively removed by default after HBASE-11544. I'm convinced to keep the default as-is for branch-1.1+, but this is indeed a behavior change from 0.98 to 1.x and requires users to explicitly set hbase.client.scanner.caching in some cases; in our case the scan.next p999 latency increased from seconds to minutes with the default value... It's a bad user experience since the application is unchanged but performance degrades...

          carp84 Yu Li added a comment -

          Thanks for the quick response sir.

          File an issue to update refguide, javadoc, and unit tests all to enforce "...Setting cache is an old style to limit size and time...".

          Sounds great.

          I think changing default at this stage in 1.1.x and 1.2.x lifecycle, it would surprise more than it would help

          Makes sense, especially for those who have noticed and adapted to the behavior change...

          In HBASE-11544 there's already a release note about changing the default value for scan caching, but I think it's worthwhile to emphasize it in the refguide section on migrating from 0.98 to 1.x so people won't overlook this change like we did...

          enis Enis Soztutar added a comment -

          We have three limits: caching, time, and size? So even with caching = MAX and the time limit not implemented, the size limit defaults to 2MB, no?

          With very selective filters, it will still take a long time to accumulate 100 rows' worth of data. But with the current defaults, we will accumulate 2MB of data (without a time limit). I guess we should look at average row sizes: whether 100 rows or 2MB is the smaller limit.
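
          (As a rough crossover estimate: at about 20 KB per row, 100 rows ≈ 2 MB, so the two limits coincide; with smaller rows the old 100-row cap would bind first, while with larger rows the 2 MB cap does.)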

          mantonov Mikhail Antonov added a comment -

          Will have a look later today

          stack stack added a comment -

          Dang. Good finding Yu Li

          Lets make the Phil Yang story the way it is going forward. File an issue to update refguide, javadoc, and unit tests all to enforce "...Setting cache is an old style to limit size and time...".

          For released software, let's add the refguide warning per Yu Li's recommendation on migration from 0.98 to 1.1.x (new issue, or part of this issue?). I think changing the default at this stage in the 1.1.x and 1.2.x lifecycle would surprise more than it would help, but we could add a notice on the downloads page and to the release notes about this finding of Yu Li's?

          yangzhe1991 Phil Yang added a comment -

          there's no released version with client-controllable time limit yet

          Yes, my thought only applies to 1.3+, and we can discuss the suitable default value for 1.3+ further. The released branches are another issue. HBASE-15593 showed the time limit is not useful because it cannot be controlled by the client. So I agree that we should change the default value back in 1.2 and 1.1.

          carp84 Yu Li added a comment -

          Setting cache is an old style to limit size and time, what users really need is limit time and size, right?

          Sounds reasonable, but I think we need more explicit description/instructions in the hbase book. And still, there's no released version with a client-controllable time limit yet, so this default value change (from 100 to Integer.MAX_VALUE) is something users must pay attention to when migrating from branch-0.98 (just like we did from 0.98.12 to 1.1.2), especially those using filters as well as Scan.addColumn (yes, this is another common case where an application reads only a key column out of a row, so the result might be small).
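
          A sketch of that filter-plus-addColumn pattern with an explicit caching bound; the family, qualifiers, and value below are hypothetical:

            import org.apache.hadoop.hbase.client.Scan;
            import org.apache.hadoop.hbase.filter.CompareFilter;
            import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
            import org.apache.hadoop.hbase.util.Bytes;

            public class SelectiveScanExample {
              static Scan selectiveKeyColumnScan() {
                Scan scan = new Scan();
                // Read back only the "key" column, so each Result is tiny.
                scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("key"));
                // Highly selective filter: many rows scanned, few returned.
                scan.setFilter(new SingleColumnValueFilter(
                    Bytes.toBytes("cf"), Bytes.toBytes("status"),
                    CompareFilter.CompareOp.EQUAL, Bytes.toBytes("NEW")));
                // Bound rows returned per RPC so the server answers after 100
                // matches instead of scanning until 2MB of results accumulate.
                scan.setCaching(100);
                return scan;
              }
            }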

          Notice that HBase-11544 has been only applied on branch-1.1.x, So the default value for branch-1.2.x is still 100?

          Thanks for chiming in Heng Chen. I've checked the latest code of branch-1.2 and confirmed the default value has been changed to Integer.MAX_VALUE, so I guess HBASE-11544 has been in since v1.1.0...

          stack/Andrew Purtell/Enis Soztutar/Sean Busbey/Mikhail Antonov/Matteo Bertozzi/Nick Dimiduk could you also share your thoughts here Bosses? Many thanks.

          chenheng Heng Chen added a comment -

          Notice that HBASE-11544 has only been applied on branch-1.1.x, so the default value for branch-1.2.x is still 100? Do our default values follow any compatibility rules (if not, should they)? This confuses our users. And in this case, I think we should keep the default value small, as Yu Li mentioned, and respect all scanner-related configurations.

          yangzhe1991 Phil Yang added a comment -

          Yes, in 1.2.x this feature is useless... But if this feature works, for example since 1.3.0, I think for users the time limit and size limit are more direct than caching, and these two limits are enough. I don't think users need to know how many rows the client will "cache" for one call. Setting cache is an old style to limit size and time, what users really need is limit time and size, right? If we can guarantee we respond in time and don't respond with too much data, we should read as much as possible to speed up the overall scan.

          carp84 Yu Li added a comment -

          Thanks for chiming in Phil Yang and mentioning timeLimit.

          For your case, if you want scanner.next only blocking several seconds, you can set hbase.client.scanner.timeout.period to 10000 at client. The time limit will be 5000

          Yes, but not before HBASE-15593, right (btw, good work on 15593)? Put another way, in currently released versions this time limit is controlled by server-side settings, and in our case hbase.client.scanner.timeout.period is set to 180000, i.e. 3 min...

          Still, I'd like to keep the discussion focused on the default value of hbase.client.scanner.caching. Setting it to Integer.MAX_VALUE requires users to understand quite a few details of our implementation, and I'm wondering whether that is appropriate.

          yangzhe1991 Phil Yang added a comment -

          For your case, if you want scanner.next to block for only several seconds, you can set hbase.client.scanner.timeout.period to 10000 at the client. The time limit will then be 5000. After 5 seconds, provided the scanner has scanned at least one row, the user will get at least one Result from next().
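
          A minimal sketch of the client-side setting Phil describes (the halving of the timeout into a server-side time limit is internal behavior, per HBASE-15593):

            import org.apache.hadoop.conf.Configuration;
            import org.apache.hadoop.hbase.HBaseConfiguration;

            public class ScannerTimeoutExample {
              public static Configuration clientConf() {
                Configuration conf = HBaseConfiguration.create();
                // 10s client scanner timeout; per HBASE-15593 the server-side
                // time limit for accumulating results is half of this, i.e. 5s.
                conf.setInt("hbase.client.scanner.timeout.period", 10000);
                return conf;
              }
            }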

          yangzhe1991 Phil Yang added a comment -

          Besides the size limit, we also have a time limit, which is half of the client scanner timeout. If the scanner's running time reaches the time limit, it returns what it has scanned so far. So in theory, even if users use a sparse filter, they will still get some rows in time?

          carp84 Yu Li added a comment -

          Let me reword my question: currently we set the default scanner caching to Integer.MAX_VALUE so that each response fills the chunk size defined by hbase.client.scanner.max.result.size rather than being limited by a particular number of rows, but what if the result never fills the chunk size because of some customized filter? Are we asking too much of users to know what size the result will be when using filters, and to explicitly set the caching size instead of simply using the default value? Should we use a smaller value, or even no caching, as the default and let advanced users set it for performance optimization?

          Attached is a screenshot of the server-side monitoring metrics for the p999 of scan.next, which drops from 1min+ to a few seconds after the customer changed the caching size from the default to 10, FWIW.

          carp84 Yu Li added a comment -

          You mean when -1 is set explicitly it should be treated as no caching? #1 says make the default 128, so I don't think you mean to make the default no caching.

          From the code comment it should be treated as no caching when set to -1, but actually it isn't. In HTable#getScanner of our current master branch code we have:

              if (scan.getCaching() <= 0) {
                scan.setCaching(scannerCaching);
              }
          

          And this scannerCaching is initialized via connConfiguration.getScannerCaching(), which is:

              this.scannerCaching = conf.getInt(
                HConstants.HBASE_CLIENT_SCANNER_CACHING, HConstants.DEFAULT_HBASE_CLIENT_SCANNER_CACHING);
          

          So with Scan.setCaching(-1) we will have caching as Integer.MAX_VALUE. I checked our hbase book, and it already says the default will be Integer.MAX_VALUE, so for this part I guess we should update the code comment below to describe what actually happens, to avoid confusion:

            /*
             * -1 means no caching
             */
            private int caching = -1;
          

          So still there are no timeouts happening because of the partial-result return stuff and/or heartbeat. Correct?

          Correct, it runs for 24s and the timeout is set to 1min, so no timeouts, and no partial results.
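
          To summarize the resolution order the snippets above imply, an illustrative sketch (not the actual client code):

            import org.apache.hadoop.conf.Configuration;

            public class EffectiveCaching {
              // What the client effectively uses, per the HTable#getScanner and
              // ConnectionConfiguration snippets quoted above.
              static int effectiveCaching(int scanCaching, Configuration conf) {
                if (scanCaching > 0) {
                  return scanCaching; // an explicit Scan.setCaching(n) with n > 0 wins
                }
                // Both -1 ("no caching" per the old comment) and 0 fall through to
                // the config, whose default is Integer.MAX_VALUE after HBASE-11544.
                return conf.getInt("hbase.client.scanner.caching", Integer.MAX_VALUE);
              }
            }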

          anoop.hbase Anoop Sam John added a comment -

          So still there are no timeouts happening because of the partial-result return stuff and/or heartbeat. Correct?

          anoop.hbase Anoop Sam John added a comment -

          Interesting issue...
          You mean when -1 is set explicitly it should be treated as no caching? #1 says make the default 128, so I don't think you mean to make the default no caching.


  People

    • Assignee:
      carp84 Yu Li
    • Reporter:
      carp84 Yu Li
    • Votes:
      0
    • Watchers:
      16
