Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.8.0
    • Component/s: None
    • Labels: None

      Description

       If my understanding is correct, the metrics we provide are reported every 60 seconds, and all counters should be reset every 60 seconds. Currently the MetricsSnapshotReporter seems to be missing this part: it sends out the metrics every 60 seconds but does not reset the counter values.

       registry.getGroup(group).foreach {
         case (name, metric) =>
           metric.visit(new MetricsVisitor {
             // Reports the running total; nothing resets the counter after it is reported.
             def counter(counter: Counter) = groupMsg.put(name, counter.getCount: java.lang.Long)
             def gauge[T](gauge: Gauge[T]) = groupMsg.put(name, gauge.getValue.asInstanceOf[Object])
           })
       }
      
      1. SAMZA-349.1.patch
        2 kB
        Yan Fang
      2. SAMZA-349.2.patch
        25 kB
        Yan Fang
      3. SAMZA-349.3.patch
        31 kB
        Yan Fang
      4. SAMZA-349.patch
        25 kB
        Yan Fang
      5. SAMZA-349.patch
        1.0 kB
        Yan Fang

        Issue Links

          Activity

          closeuris Yan Fang added a comment - - edited

          A quick fix: add counter.clear() in the reporter code; it is only a two-line change. RB: https://reviews.apache.org/r/23716/

          Thank you.
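
          Roughly, the two-line change would look something like this in the reporter's visit callback (a sketch only, assuming Counter's clear() resets the count to zero; the exact patch is on the review board):

              metric.visit(new MetricsVisitor {
                def counter(counter: Counter) = {
                  // Report the count accumulated since the last snapshot...
                  groupMsg.put(name, counter.getCount: java.lang.Long)
                  // ...then reset it so the next snapshot carries only the increment.
                  counter.clear()
                }
                def gauge[T](gauge: Gauge[T]) = groupMsg.put(name, gauge.getValue.asInstanceOf[Object])
              })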

          criccomini Chris Riccomini added a comment -

          What should be done here is dependent on the downstream consumer. If you have a running count, the downstream system can simply keep track of the last report it got, and subtract the last report from the current report to get the difference. If you have just an incremental count, then the downstream system can keep the running total.

          Given that both implementations allow you to derive the same overall information, I agree that it makes more sense to just send incremental updates.

          I see a problem with the patch you've provided, though. We fetch the value in one line and then reset it back to 0 in the next. This leads to a race condition: we might fetch the value, then another thread updates the count (reporting happens on a separate thread from SamzaContainer's main thread), and then we call clear. In this scenario there is data loss, since we reset the counter after it has been updated again but before those new updates have been reported.

          To solve this problem, I think we need to use getAndSet in Counter.set, and return the old value. Then we can call clear() and atomically get the old value while resetting the counter back to 0.
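
          A rough illustration of that idea, assuming the Counter is backed by an AtomicLong (a sketch of the approach, not the committed change; the atomic read-and-reset method is what this thread later calls getAndClear):

              import java.util.concurrent.atomic.AtomicLong

              class Counter(val name: String) {
                private val count = new AtomicLong(0)

                def inc(): Long = count.incrementAndGet()
                def getCount: Long = count.get()

                // Atomically return the current value and reset it to zero, so no
                // increments are lost between the read and the reset.
                def getAndClear(): Long = count.getAndSet(0)
              }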

          martinkl Martin Kleppmann added a comment -

          Some systems expect counter values to be always increasing, e.g. rrdtool's COUNTER type. I just wanted to point out that some people may be depending on the existing behaviour for integration with their metrics systems.

          Personally I think that resetting every 60 seconds makes much more sense, e.g. because it allows you to sum the rate of events in several parallel processes, it's not so dependent on whether you're using 32-bit or 64-bit numbers, etc. So I'm in favour of this patch, I just wanted to call out the potential compatibility issue.

          closeuris Yan Fang added a comment -

          Thank you for pointing out this issue, Martin.
          Also, since we only refresh the metrics in MetricsSnapshotReporter (the Kafka metrics stream) but do not reset them in the JMX reporter, will this be a little confusing for users? Though we can mention that in the docs.

          criccomini Chris Riccomini added a comment -

          Personally I think that resetting every 60 seconds makes much more sense, e.g. because it allows you to sum the rate of events in several parallel processes, it's not so dependent on whether you're using 32-bit or 64-bit numbers, etc. So I'm in favour of this patch, I just wanted to call out the potential compatibility issue.

          Yea, that's why I did it the way it is. I also agree it makes more sense to reset, though. It's possible for us to make the reset configurable, but that might lead to more confusion.

          Also, since we only refresh the metrics in MetricsSnapshotReporter (the Kafka metrics stream) but do not reset them in the JMX reporter, will this be a little confusing for users?

          I think it should be OK. The metrics reporters are all independent implementations, so it should be OK to alter how we handle metrics on a per-reporter basis.

          closeuris Yan Fang added a comment - - edited

          Per Chris's comment, the patch now uses getAndSet in Counter.clear and Counter.set, and uses counter.clear in MetricsSnapshotReporter to get the old value and reset the counter.

          RB: https://reviews.apache.org/r/23716/

          closeuris Yan Fang added a comment -

          Canceling the patch. On second thought, I think it's better to create a new method, called getAndClear, instead of modifying the existing methods.

          martinkl Martin Kleppmann added a comment -

          Also, since we only refresh the metrics in MetricsSnapshotReporter (the Kafka metrics stream) but do not reset them in the JMX reporter, will this be a little confusing for users?

          I'm a little concerned about that. I would say that adding a reporter to your job config should not alter the values of the metrics that are reported. However, adding MetricsSnapshotReporter does just that.

          Rather than resetting, how about having MetricsSnapshotReporter track the previous value for each counter, and report the difference? That would remove the side-effect.
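
          For illustration, that delta tracking could look roughly like this inside the reporter's counter callback (group and name are as in the snippet in the description; lastReported is a hypothetical reporter field, not existing code):

              import scala.collection.mutable

              // Hypothetical reporter state: last reported value per (group, metric name).
              val lastReported = mutable.Map[(String, String), Long]().withDefaultValue(0L)

              def counter(counter: Counter) = {
                val current = counter.getCount
                val delta = current - lastReported((group, name))
                lastReported((group, name)) = current
                // Report only the increase since the previous snapshot; the counter itself is untouched.
                groupMsg.put(name, delta: java.lang.Long)
              }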

          criccomini Chris Riccomini added a comment -

          I would say that adding a reporter to your job config should not alter the values of the metrics that are reported.

          Doh, good point.

          Rather than resetting, how about having MetricsSnapshotReporter track the previous value for each counter, and report the difference? That would remove the side-effect.

          Good idea.

          +1 to Martin's suggestions.

          martinkl Martin Kleppmann added a comment -

          I just realized another issue: the Counter class has a decrement method. That means it can be used for things that are not monotonically increasing, and indeed they are: SAMZA-245 (which I'm reviewing just now) added an unprocessedMessages metric which tracks the length of a queue of unprocessed messages. As the queue grows and shrinks, that counter is incremented and decremented.

          A counter that can be decremented can't meaningfully report a rate (number of events per minute). If we're going to switch the MetricsSnapshotReporter to report the delta since the last minutely report, I'd say we should also remove the dec() methods on Counter.

          closeuris Yan Fang added a comment -

          I think the most useful part of having per-minute metrics is the timing part - SAMZA-251 - measuring the time a block of code spends. Instead of using Counter, maybe it's time to introduce a Timer here?

          For other metrics, there does not seem to be a big difference between reporting the accumulated value and the incremental value.

          criccomini Chris Riccomini added a comment -

          Instead of using Counter, maybe it's time to introduce Timer here?

          Yea, this might be the way to go. I notice Coda's Yammer metrics library also makes the distinction between a counter and a timer. This would allow individual reporters to handle things differently.

          martinkl Martin Kleppmann added a comment -

          Yeah, Coda's Timer is a good kind of metric to have. However, I don't like its implementation so much: it outputs strange results on low-volume events and has unpredictable decay behaviour. Instead of EWMA and forward decay, I'd rather use fixed-length windows for rate and histogram calculations. I also wrote a metrics library in the past (sadly not open source), which is why I have so many opinions on this stuff. Although that's probably getting a bit off-topic for this particular issue.

          closeuris Yan Fang added a comment -

          I think I will implement the fixed-length windows instead of EWMA for now. We can switch to or add EWMA whenever needed.

          closeuris Yan Fang added a comment -

          As discussed, it's better to create a Timer to measure the time spent in a block of code, instead of resetting the metrics every 60 seconds.

          closeuris Yan Fang added a comment -

          RB: https://reviews.apache.org/r/24141/

          1) It uses the same logic as the Timer in Yammer Metrics, but simplifies it a little by getting rid of Meter and Histogram (a rough sketch follows this comment).

          2) Since I am using the same logic, some code is very similar to Yammer Metrics, especially the Reservoir interface, which is too simple to change meaningfully. Should that be fine? Yammer Metrics is under the Apache 2.0 license.

          3) Adding the Timer actually changes the API, so I had to modify the samza-core and samza-yarn reporters (details are in the RB).

          4) TODO: currently it only reports the average duration over 5 minutes. It should report the max/min duration in that window as well. This can be done in a separate ticket, because it is about reporting results, not about adding the Timer.

          Thank you.
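
          For readers without access to the RB, a rough sketch of the shape of such a Timer, assuming a sliding-time-window reservoir that keeps the raw durations for the window (names and details are illustrative, not the exact classes in the patch):

              // Illustrative sliding-time-window reservoir: keeps (timestamp, duration)
              // pairs and drops entries older than the window. Not hardened for heavy
              // concurrent use; it only shows the idea.
              class SlidingTimeWindowReservoir(windowMs: Long = 300000L) {
                private val values = new java.util.concurrent.ConcurrentLinkedQueue[(Long, Long)]()

                def update(durationMs: Long): Unit = {
                  trim()
                  values.add((System.currentTimeMillis(), durationMs))
                }

                def snapshot(): List[Long] = {
                  trim()
                  import scala.collection.JavaConverters._
                  values.asScala.map(_._2).toList
                }

                private def trim(): Unit = {
                  val cutoff = System.currentTimeMillis() - windowMs
                  while (!values.isEmpty && values.peek()._1 < cutoff) values.poll()
                }
              }

              class Timer(val name: String, reservoir: SlidingTimeWindowReservoir = new SlidingTimeWindowReservoir()) {
                def update(durationMs: Long): Unit = reservoir.update(durationMs)

                // Average duration over the window; this is what the reporter snapshots.
                def getAverage: Double = {
                  val s = reservoir.snapshot()
                  if (s.isEmpty) 0.0 else s.sum.toDouble / s.size
                }
              }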

          closeuris Yan Fang added a comment -

          Made changes according to Chris' comments in RB https://reviews.apache.org/r/24141/. Thank you.

          criccomini Chris Riccomini added a comment -

          +1 Looks good to me!

          closeuris Yan Fang added a comment -

          Thank you. Committed.

          martinkl Martin Kleppmann added a comment -

          Late to the party, but still got a few questions:

          1. Did you benchmark/profile this? I'm a little concerned that if we're keeping every single timer event within a 30 sec interval (not downsampling), the memory and CPU overhead could become significant. The reservoir might well end up containing several million values.
          2. Did you consider using System.nanoTime() instead of System.currentTimeMillis()? For many jobs, a call to process() will hopefully take less than a millisecond, so a millisecond-resolution timer metric would be useless.
          3. Were you planning to add percentile metrics? If not, you don't really need a reservoir and snapshots (e.g. the mean can be calculated with just a running sum and count).
          4. Suggestion: it would be useful to add "utilization" (aka "duty cycle") as a metric, which is the sum of all the timings divided by the window length. That can tell you how much idle time there is in the event loop, i.e. how much headroom there is before the job starts falling behind (a small sketch follows this list).
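
          For item 4, a small sketch of the duty-cycle calculation (assuming we already have the list of process() durations for the reporting window; names are illustrative):

              // Fraction of the window spent doing work: sum of timings / window length.
              // E.g. 45 seconds of process() time in a 60-second window gives 0.75,
              // i.e. roughly 25% headroom before the job starts falling behind.
              def utilization(durationsMs: Seq[Long], windowMs: Long): Double =
                durationsMs.sum.toDouble / windowMs
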
          closeuris Yan Fang added a comment -

          Did you benchmark/profile this? I'm a little concerned that if we're keeping every single timer event within a 30 sec interval (not downsampling), the memory and CPU overhead could become significant. The reservoir might well end up containing several million values.

          • I did not do a benchmark. Maybe I should add that.
          • Actually, the default is 300s, not 30s. Currently the metric calculates the average time one code block spends over that 300s window. I have the same concern, but users can set the timer interval themselves. Is there a way to bypass this problem? I feel the reservoir has to keep all the events in the time interval.
          • What does "downsampling" mean here?

          Did you consider using System.nanoTime() instead of System.currentTimeMillis()? For many jobs, a call to process() will hopefully take less than a millisecond, so a millisecond-resolution timer metric would be useless.

          Yes, this is an option. I chose ms because even hello-samza takes more than 1 ms in my setup. We can always move to nanoTime if ms becomes too coarse to be useful.

          Were you planning to add percentile metrics? If not, you don't really need a reservoir and snapshots (eg. the mean can be calculated just with a running sum and count).

          Because I am not quite sure whether we will need percentiles in the future, or other ways of calculating time (besides the sliding window), I am keeping these two classes to make those easier to implement later.

          Suggestion: it would be useful to add "utilization" (aka "duty cycle") as a metric, which is the sum of all the timings divided by the window length. That can tell you how much idle time there is in the event loop (how much headroom before the job will start falling behind).

          Agreed, this seems useful. Opened SAMZA-401 to implement it.

          martinkl Martin Kleppmann added a comment -

          Is there a way to bypass this problem? I feel the reservoir has to keep all the events in the time interval. What does "downsampling" mean here?

          If you're willing for the metric to be approximate (which, in practice, is usually fine), then you don't need to keep every single timing within your aggregation period. It's sufficient to keep a randomly selected sample, for example up to 1,000 values. That will give you a good estimate of the metric while using much less memory. A standard algorithm for this is reservoir sampling.

          In order to expire old values, you don't need to keep a timestamp for every single value. If you want to aggregate over the last 5 minutes, a simple approach is to keep a separate reservoir for every minute. To calculate the metric, you can combine the samples from the last 5 minutely reservoirs. Once a minute, you throw away the oldest reservoir which is no longer needed.
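
          A minimal sketch of the sampling idea (classic "Algorithm R" reservoir sampling with a fixed capacity; illustrative, not the Samza implementation):

              import java.util.concurrent.ThreadLocalRandom

              // Keeps a uniform random sample of at most `size` of the values it has seen.
              class UniformReservoir(size: Int = 1000) {
                private val values = new Array[Long](size)
                private var count = 0L

                def update(value: Long): Unit = synchronized {
                  count += 1
                  if (count <= size) {
                    values((count - 1).toInt) = value
                  } else {
                    // Keep the new value with probability size / count by overwriting a random slot.
                    val idx = ThreadLocalRandom.current().nextLong(count)
                    if (idx < size) values(idx.toInt) = value
                  }
                }

                def snapshot(): Seq[Long] = synchronized {
                  values.take(math.min(count, size.toLong).toInt).toSeq
                }
              }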

          closeuris Yan Fang added a comment -

          If you're willing for the metric to be approximate (which, in practice, is usually fine), then you don't need to keep every single timing within your aggregation period. It's sufficient to keep a randomly selected sample, for example up to 1,000 values.

          This is good enough to give an approximate result. I think we can keep it as the default implementation. I am also thinking that having the option of getting the metric over a fixed time interval, such as 5 minutes or 30 seconds, is helpful as well. I would prefer to leave that API in place in case users want to use it. What do you think?

          In order to expire old values, you don't need to keep a timestamp for every single value. If you want to aggregate over the last 5 minutes, a simple approach is to keep a separate reservoir for every minute. To calculate the metric, you can combine the samples from the last 5 minutely reservoirs. Once a minute, you throw away the oldest reservoir which is no longer needed.

          Yes, this is true, because we report the metrics every 1 minute and I set the timer window to 5 minutes by default. But when the metrics are reported at a smaller interval, such as 30s or 10s, this implementation may have a problem. It seems we would still need to expire the old values based on their timestamps. Correct me if I understand it wrong.

          martinkl Martin Kleppmann added a comment -

          I think we don't need to change anything right now — the timer metric is useful as it is. However, if in future we want to turn it on by default (e.g. include the duration of StreamTask.process() calls in the default metrics), we may need to do some work to reduce memory use.

          But when the metrics are reported at a smaller interval, such as 30s or 10s, this implementation may have a problem.

          Not necessarily — it would just mean that the aggregation window length is approximate. If you're reporting a metric every 30 seconds, and averaging over 5 or 6 one-minute buckets, then the window length may sometimes be 5 mins 30 secs instead of the intended 5 mins. In most circumstances that wouldn't have any noticeable effect on the numbers.

          closeuris Yan Fang added a comment -

          However, if in future we want to turn it on by default (e.g. include the duration of StreamTask.process() calls in the default metrics)

          Oops, I guess we already turn it on by default... it's in the RunLoop class. Maybe I should make that change and implement a sampling reservoir now...

          martinkl Martin Kleppmann added a comment -

          I guess we already turn it on by default...It's in RunLoop class

          Oh, I completely missed that (I only looked at this patch and didn't see SAMZA-251). In that case I'd definitely recommend a performance test as a minimum — pump 100m messages through a no-op Samza job, and compare CPU and memory use with and without the instrumentation in RunLoop. Perhaps it's not a problem, but worth testing.
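
          For reference, the "no-op Samza job" part of such a test could be as simple as the following task, assuming the StreamTask API of this Samza version (a sketch, not part of any patch here):

              import org.apache.samza.system.IncomingMessageEnvelope
              import org.apache.samza.task.{MessageCollector, StreamTask, TaskCoordinator}

              // Does nothing per message, so any CPU/memory difference between runs with and
              // without the RunLoop timer instrumentation is attributable to the metrics code.
              class NoOpStreamTask extends StreamTask {
                def process(envelope: IncomingMessageEnvelope, collector: MessageCollector, coordinator: TaskCoordinator) {
                  // intentionally empty
                }
              }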


            People

            • Assignee:
              closeuris Yan Fang
              Reporter:
              closeuris Yan Fang
            • Votes:
              0 Vote for this issue
              Watchers:
              3 Start watching this issue

              Dates

              • Created:
                Updated:
                Resolved:

                Development