ZOOKEEPER-816

Detecting and diagnosing elusive bugs and faults in Zookeeper

    Details

    • Type: New Feature
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels:
      None

      Description

      Complex distributed systems like ZooKeeper tend to fail in strange ways that are hard to diagnose. The objective is to build a tool that helps determine when and where these problems occurred, based on ZooKeeper's traces (i.e., logs at TRACE level). Minor changes to the server code will be needed.

        Activity

        Miguel Correia added a comment -

        Let me give a longer explanation of the project. Practical experience with Zookeeper has shown that sometimes there are failures whose causes are hard to understand. Some of these failures may be caused by elusive bugs in the code; others may be due to failures rarer than crashes, say corruptions of data somewhere in a server.

        Zookeeper's traces (i.e., logs in TRACE level) provide some information that can be helpful to understand what happened. For instance, they contain information about the clients that are connected, the operations issued, etc. However, in real deployments with many clients (say, hundreds), traces are typically turned off to avoid the high overhead that they cause. Furthermore, the data in the traces is probably not enough for our purposes because it does not include, e.g., the replies to operations or the data values.

        The project involves three subtasks:

        1. Improve the efficiency of logging.
        2. Extend the traces with the additional information needed.
        3. Build the checking tool.

        Patrick Hunt added a comment -

        Typically log4j logs are expected to be consumed by users - i.e. for debugging - whereas here you're really talking about output that will be machine-processed. Is this really logging in the typical log4j sense? Perhaps a separate mechanism should be used. Another problem with log4j logging is that changes to the message format, say to make it more "readable", can break the downstream processor. Not to mention that a separate mechanism could be made much more efficient than the general log4j logging mechanism. Perhaps ZK's own WAL could be used for this - reuse the WAL code or write the information directly to the ZK transaction log (which might not be such a good idea, but should be considered as an option).

        Additionally, you should consider something like Aspects for this project. This is a cross-cutting feature, something that aspects are well suited for. A benefit of this approach is that it would allow those interested in the feature to enable it, while those uninterested would incur zero performance penalty.
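        A minimal sketch of what such a cross-cutting trace aspect could look like, using AspectJ's annotation style; the pointcut target (FinalRequestProcessor.processRequest), the logger name, and what gets logged are illustrative assumptions rather than a proposed design:

        import org.apache.log4j.Logger;
        import org.aspectj.lang.JoinPoint;
        import org.aspectj.lang.annotation.Aspect;
        import org.aspectj.lang.annotation.Before;

        // Hypothetical aspect: trace every request handed to the final request processor.
        // Only builds that weave this aspect pay any cost; an unwoven server is untouched.
        @Aspect
        public class RequestTraceAspect {

            private static final Logger TRACE_LOG =
                    Logger.getLogger("org.apache.zookeeper.diagnostics");

            @Before("execution(* org.apache.zookeeper.server.FinalRequestProcessor.processRequest(..))")
            public void traceRequest(JoinPoint jp) {
                if (TRACE_LOG.isTraceEnabled()) {
                    // jp.getArgs()[0] is the request being processed
                    TRACE_LOG.trace("processRequest: " + jp.getArgs()[0]);
                }
            }
        }

        Weaving could happen at build time (ajc) for a diagnostics build, or via load-time weaving, so the stock jars would stay unchanged.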

        Miguel Correia added a comment -

        The idea behind using ZK's log4j logs was to modify ZK's server-side code as little as possible. ZK is used in critical applications, so a design principle that we are following is to avoid anything that may impact its reliability. In that sense, ZK's WAL is probably not a good option, as it is especially critical for the correct operation of ZK.

        Another option that occurred to me was to create another log4j logger, used specifically for logging the traces we need. Whether we use the original log4j logger or a new one, we can improve the performance using the same mechanisms that are used to make the WAL efficient.
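        A rough sketch of such a dedicated logger, assuming log4j 1.2 with an asynchronous file appender; the logger name, file path, and layout pattern are placeholders:

        import java.io.IOException;

        import org.apache.log4j.AsyncAppender;
        import org.apache.log4j.FileAppender;
        import org.apache.log4j.Level;
        import org.apache.log4j.Logger;
        import org.apache.log4j.PatternLayout;

        public class DiagnosticLog {

            // Builds a trace-only logger that does not propagate to the normal server log.
            public static Logger create() throws IOException {
                Logger log = Logger.getLogger("org.apache.zookeeper.diagnostics");
                log.setLevel(Level.TRACE);
                log.setAdditivity(false); // keep diagnostic traces out of the main log

                FileAppender file = new FileAppender(
                        new PatternLayout("%d{ISO8601} %m%n"),
                        "/var/log/zookeeper/diagnostic.trace");
                AsyncAppender async = new AsyncAppender(); // buffer and flush from a background thread
                async.addAppender(file);
                log.addAppender(async);
                return log;
            }
        }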

        In relation to Aspects, I didn't understand what you mean. Can you give me some pointers?

        Thanks!

        Ivan Kelly added a comment -

        (Assumed JIRA picked up email replies. Seems not >:/)

        As far as I've seen, this overhead comes in two forms: CPU and disk. The CPU overhead is mostly due to formatting; the disk overhead, obviously, because tracing will fill your disk fairly quickly. Perhaps something could be done to combat both of these. To fix the formatting problem we could use a binary log format. I've seen this done in C++ but not in Java. The basic idea is that if you have TRACE("operation %x happened to %s %p", obj1, obj2, obj3); a preprocessor replaces this with TRACE(0x1234, obj1, obj2, obj3), where 0x1234 is an identifier for the trace. Then, when the trace occurs, a binary blob [0x1234, value of obj1, value of obj2, value of obj3] is logged. When the logs are pulled off the machine, you run a post-processor to do all the formatting and you get your full trace.
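        A minimal Java sketch of that record layout; the trace id, the field set, and the output path are made up for the example, and a real implementation would also need the id-to-format-string table used by the post-processor:

        import java.io.DataOutputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;

        // Writes fixed-layout binary trace records: a numeric trace id followed by raw values.
        public class BinaryTraceWriter implements AutoCloseable {

            public static final int TRACE_REQUEST_RECEIVED = 0x1234; // hypothetical trace id

            private final DataOutputStream out;

            public BinaryTraceWriter(String path) throws IOException {
                out = new DataOutputStream(new FileOutputStream(path));
            }

            // Instead of formatting a message, log the id and the raw argument values.
            public synchronized void trace(int traceId, long sessionId, int opCode) throws IOException {
                out.writeInt(traceId);
                out.writeLong(sessionId);
                out.writeInt(opCode);
            }

            @Override
            public void close() throws IOException {
                out.close();
            }
        }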

        Regarding the disk overhead, traces are usually only interesting in the run-up to a failure. We could have a ring buffer in memory that is constantly traced to, with old traces being overwritten when the ring buffer reaches its limit. These traces would only be dumped to the filesystem when an error- or fatal-level event occurs, thereby giving you a trace of what was happening before you fell over.

        -Ivan
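
        A small sketch of that ring-buffer idea; the capacity and the error-time dump hook are placeholders:

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Keeps the last N trace lines in memory; dump snapshot() to disk when an
        // error- or fatal-level event is observed.
        public class TraceRingBuffer {

            private final int capacity;
            private final Deque<String> buffer;

            public TraceRingBuffer(int capacity) {
                this.capacity = capacity;
                this.buffer = new ArrayDeque<String>(capacity);
            }

            public synchronized void add(String traceLine) {
                if (buffer.size() == capacity) {
                    buffer.removeFirst(); // overwrite the oldest entry
                }
                buffer.addLast(traceLine);
            }

            // Returns the recent history, oldest first, for dumping to the filesystem.
            public synchronized String[] snapshot() {
                return buffer.toArray(new String[0]);
            }
        }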

        Ivan Kelly added a comment -

        Re: Aspects - Miguel, take a look at AspectJ/Aspect-Oriented Programming. It basically allows you to hook code into preexisting code, if I understand it correctly. Sort of like auxiliary methods in CLOS.

        Flavio Junqueira added a comment -

        You may consider having a look at ZOOKEEPER-512 for a reference within the context of ZooKeeper.

        Patrick Hunt added a comment -

        http://www.eclipse.org/aspectj/
        http://en.wikipedia.org/wiki/AspectJ

        I use this for network fault injection testing - instrumenting network operations with random failures. Works extremely well and I don't need to modify the original source at all.
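
        For illustration, a fault-injection aspect along those lines might look roughly like this; the pointcut, failure rate, and exception are assumptions for the sketch, not Patrick's actual test code:

        import java.io.IOException;
        import java.util.Random;

        import org.aspectj.lang.ProceedingJoinPoint;
        import org.aspectj.lang.annotation.Around;
        import org.aspectj.lang.annotation.Aspect;

        // Hypothetical aspect: with a small probability, a socket write made from
        // ZooKeeper code fails with an IOException instead of executing.
        @Aspect
        public class NetworkFaultInjector {

            private static final double FAILURE_RATE = 0.01; // fail roughly 1% of calls
            private final Random random = new Random();

            @Around("call(* java.nio.channels.SocketChannel.write(..)) && within(org.apache.zookeeper..*)")
            public Object maybeFail(ProceedingJoinPoint pjp) throws Throwable {
                if (random.nextDouble() < FAILURE_RATE) {
                    throw new IOException("injected network fault at " + pjp.getSignature());
                }
                return pjp.proceed(); // otherwise perform the real write
            }
        }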

        Miguel Correia added a comment -

        Several comments:

        Aspects: ah, those aspects. I always thought of it as a software development process, not as a way to build different versions of the code. I had actually looked for something in Java similar to the C preprocessor but didn't find it (I'm afraid I'm still from the pre-Java era). It seems like a great solution.

        Binary traces: I had thought about that solution but set it aside because of the idea of not touching the server code and using the actual ZK traces instead. However, combined with the need to extend those traces and the idea of using AspectJ, it seems a neat solution.

        Disk overhead: I had been overlooking this problem. Some kind of ring buffer seems to be a solution, but we are specifically interested in understanding failures, so keeping the info only in memory doesn't seem to be a good idea (the failure may be too radical and we would lose the info).

        Patrick Hunt added a comment -

        You should also consider using Avro for the marshalling/unmarshalling of the records.
        http://avro.apache.org/

        Lots of benefits - in particular it's cross-language.

        Re: writing to disk - perhaps just re-use the ZK WAL code and write to a disk that's not storing the transaction log.
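
        An illustrative sketch of an Avro-encoded trace record using the generic API; the schema fields and output file are placeholders, not a proposed format:

        import java.io.File;
        import java.io.IOException;

        import org.apache.avro.Schema;
        import org.apache.avro.file.DataFileWriter;
        import org.apache.avro.generic.GenericData;
        import org.apache.avro.generic.GenericDatumWriter;
        import org.apache.avro.generic.GenericRecord;

        public class AvroTraceLog {

            // Hypothetical trace record schema: id plus a couple of raw values.
            private static final Schema SCHEMA = new Schema.Parser().parse(
                    "{\"type\":\"record\",\"name\":\"Trace\",\"fields\":["
                    + "{\"name\":\"traceId\",\"type\":\"int\"},"
                    + "{\"name\":\"sessionId\",\"type\":\"long\"},"
                    + "{\"name\":\"opCode\",\"type\":\"int\"}]}");

            public static void main(String[] args) throws IOException {
                DataFileWriter<GenericRecord> writer = new DataFileWriter<GenericRecord>(
                        new GenericDatumWriter<GenericRecord>(SCHEMA));
                writer.create(SCHEMA, new File("diagnostic.avro"));

                GenericRecord rec = new GenericData.Record(SCHEMA);
                rec.put("traceId", 0x1234);
                rec.put("sessionId", 42L);
                rec.put("opCode", 1);
                writer.append(rec); // readable later from any language with an Avro binding
                writer.close();
            }
        }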


          People

          • Assignee: Unassigned
          • Reporter: Miguel Correia
          • Votes: 0
          • Watchers: 1
