Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.5.0
    • Fix Version/s: 0.5.0
    • Component/s: None
    • Labels:
      None

      Description

      Suraj and I had a bit of discussion about incoming and outgoing message buffering and scalability.

      Currently everything lives on the heap, causing huge amounts of GC pressure and wasted memory. We can do better.
      Therefore we need to extract an abstract Messenger class which sits directly under the interface but above the compressor class.
      It should abstract the use of the queues behind it (currently a lot of duplicated code), and it should be backed by a SequenceFile on local disk.
      Once sync() starts, it should return a message iterator for combining; the result is then put into a message bundle which is sent over RPC.

      On the other side we receive a bundle and loop over it, putting everything onto the heap and making it much larger than it needs to be. Here we can also flush to disk, because we only expose a queue-like interface to the user side.

      Plus points:
      In case we have enough heap (see our new metric system), we can also implement a buffering strategy that does not flush everything to disk.

      Open questions:
      I don't know how much slower the whole system gets, but it would save a lot of memory. Maybe we should first evaluate whether it is really needed.
      In any case, the refactoring of the duplicated code in the messengers is needed.
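
      For illustration, a minimal sketch of what the shared queue abstraction behind such a Messenger could look like (all names here are hypothetical, not the actual patch):

        import java.util.Iterator;
        import org.apache.hadoop.io.Writable;

        // Hypothetical sketch of the queue abstraction described above; not the actual patch.
        public interface MessageQueue<M extends Writable> extends Iterable<M> {

          void add(M msg);        // buffer a message, either in RAM or spilled to a SequenceFile

          Iterator<M> iterator(); // drained during sync() to build the message bundle sent over RPC

          int size();

          void clear();           // reset the queue for the next superstep
        }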

      1. mytest.patch
        55 kB
        Edward J. Yoon
      2. HAMA-521.patch
        14 kB
        Thomas Jungblut
      3. HAMA-521_final.patch
        27 kB
        Thomas Jungblut
      4. HAMA-521_final_2.patch
        52 kB
        Thomas Jungblut
      5. HAMA-521_3.patch
        53 kB
        Thomas Jungblut
      6. HAMA-521_2.patch
        49 kB
        Thomas Jungblut
      7. HAMA-521_1.patch
        30 kB
        Thomas Jungblut

        Issue Links

          Activity

          Thomas Jungblut added a comment -

          First scratch refactoring.

           I plan to extract an interface for the queues and implement a RAM-based version (the current one) and a disk-based one.
          We can later replace it by a hybrid version.
          But I guess I'm going to do this tomorrow or later this evening.

          Thomas Jungblut added a comment -

          Okay further refactoring.

          I've extracted two types of queue:
          Memory- and DiskQueue.

          I have just implemented the MemoryQueue because I think we need a bit of discussion about what we need to do.

           Where should we put the sequence files? Local FS or HDFS? Which path?

          And when we receive messages, how should we handle synchronization? Just a global mutex?

          And I guess we need to put methods for init/close into the queue interface, otherwise the DiskQueue will be a whole mess.

           Someone please review and give me your opinion.
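
           To make the init/close question concrete, here is a rough sketch of what such lifecycle hooks on the queue interface might look like (method names and parameters are assumptions, not the committed API):

             import java.io.IOException;

             import org.apache.hadoop.conf.Configuration;
             import org.apache.hadoop.fs.Path;

             // Hypothetical lifecycle additions so a disk-backed queue can manage its SequenceFile.
             public interface MessageQueue<M> extends Iterable<M> {

               void init(Configuration conf, Path queueDir) throws IOException; // e.g. open a SequenceFile.Writer

               void close() throws IOException; // close the reader/writer and delete the spill file

               void add(M msg);

               int size();
             }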

          ChiaHung Lin added a comment -

           MemoryQueue is similar to the original approach of storing messages in a LinkedList; that looks ok.

           Regarding the memory footprint, what about storing messages remotely (e.g. HDFS) or spilling messages to the target server? Storing them in, e.g., memcache may also be an option.

           Each has its own pros and cons. Storing remotely keeps messages available, but may be slow; spilled messages may be lost if the target server fails, but speed may improve because of local reads; memcache may add dependencies and setup overhead, etc.

          Suraj Menon added a comment -

           Nice implementation! I have the following design questions and would like to propose a few design changes after these questions:

           1. Should the MessageManager hold socket address information? On failure, the socket addresses of a few peers would change as they get scheduled on different machines. If the MessageManager holds the socket address, then it has to be updated when peers fail.
           2. Should we have an identifier for each message? In my opinion we should. This would help remove duplicate messages during cleanup on recovery. If that is the case, we need to implement the queue as a Set (LinkedHashSet?). This would also help us implement sorting in the message buffer. We could have a TreeSet implementation underneath.
           3. For that matter, should we have a header <id, source peer, destination peer>?
           4. There should be a simple, reliable transactional protocol between two peers. When the transaction is completed, the sender gets an acknowledgement that the receiver has completely received all the messages.

           During transfer, the sender should send a BEGIN-TRANSACTION flag, then all its messages, then COMMIT. The transaction is over only once the sender gets an ACK on COMMIT. With this protocol, it does not matter where we write the messages: in a file, HDFS, or on a remote machine. If the transaction fails, the sender can clean up its own side and re-attempt after getting the new destination peer address. On sender failure, the receiver can clean up and remove the duplicate messages. We have to figure out how to send the transaction commands; probably this is where the headers would be helpful. For synchronous checkpointing, we can make sure that the sender sends COMMIT only after checkpointing all the messages.
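
           Purely as an illustration of that handshake (not an existing Hama API), the protocol flags could be modelled roughly like this:

             // Hypothetical transfer-protocol flags exchanged between sender and receiver.
             enum TransferState {
               BEGIN_TRANSACTION, // sender announces a new bundle transfer
               DATA,              // the message bundle(s) themselves
               COMMIT,            // sender signals the end of the transfer
               ACK                // receiver confirms it has received/persisted everything
             }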

          > init/close into the queue interface, otherwise the DiskQueue will be a whole mess

          We are sure of reading all the messages from the DiskQueue. Can we have an Iterator that would close the file once the last record is read?
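
           One way such an iterator could be sketched, assuming messages are stored as the keys of a Hadoop SequenceFile with NullWritable values; all names here are illustrative only:

             import java.io.IOException;
             import java.util.Iterator;
             import java.util.NoSuchElementException;

             import org.apache.hadoop.conf.Configuration;
             import org.apache.hadoop.io.NullWritable;
             import org.apache.hadoop.io.SequenceFile;
             import org.apache.hadoop.io.Writable;
             import org.apache.hadoop.util.ReflectionUtils;

             // Illustrative sketch: iterate a SequenceFile-backed queue and close the reader
             // as soon as the last record has been handed out.
             final class ClosingDiskQueueIterator<M extends Writable> implements Iterator<M> {
               private final SequenceFile.Reader reader;
               private final Class<M> messageClass;
               private final Configuration conf;
               private M nextMessage;

               ClosingDiskQueueIterator(SequenceFile.Reader reader, Class<M> messageClass,
                   Configuration conf) throws IOException {
                 this.reader = reader;
                 this.messageClass = messageClass;
                 this.conf = conf;
                 advance();
               }

               private void advance() throws IOException {
                 M candidate = ReflectionUtils.newInstance(messageClass, conf);
                 if (reader.next(candidate, NullWritable.get())) {
                   nextMessage = candidate;
                 } else {
                   nextMessage = null;
                   reader.close(); // last record consumed: release the file handle
                 }
               }

               @Override
               public boolean hasNext() {
                 return nextMessage != null;
               }

               @Override
               public M next() {
                 if (nextMessage == null) {
                   throw new NoSuchElementException();
                 }
                 M current = nextMessage;
                 try {
                   advance();
                 } catch (IOException e) {
                   throw new RuntimeException("Failed reading from the disk queue", e);
                 }
                 return current;
               }

               @Override
               public void remove() {
                 throw new UnsupportedOperationException();
               }
             }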

          Thomas Jungblut added a comment -

           Regarding the memory footprint, what about storing messages remotely (e.g. HDFS) or spilling messages to the target server? Storing them in, e.g., memcache may also be an option.

           Yes, you are correct. However, I think writing to HDFS will have too much overhead. We can add some memcache behaviour later; it is quite easy to implement ourselves.

           Okay Suraj, those are really deep design thoughts. I don't really know if they belong here, but let's talk about them.

           1. Should the MessageManager hold socket address information? On failure, the socket addresses of a few peers would change as they get scheduled on different machines. If the MessageManager holds the socket address, then it has to be updated when peers fail.

           Yes, totally. Each triggered send will check if the peer already exists. Within the barrier sync we can check whether we need to evict our cache, since the info is stored in ZK.

           2. Should we have an identifier for each message? In my opinion we should. This would help remove duplicate messages during cleanup on recovery. If that is the case, we need to implement the queue as a Set (LinkedHashSet?). This would also help us implement sorting in the message buffer. We could have a TreeSet implementation underneath.

           Currently I think this is a huge overhead in network communication. You only get duplicate messages when you have speculative task execution, which we don't have yet, so let's discuss this separately.

           I'm totally +1 for the sorting. I personally thought this could be done by just replacing the MemoryQueue with a Comparator-backed version like an insertion-sorted list. This adds almost no overhead and it is still a queue. No need for a tree here. However, this is just memory based, so it may not scale well.

           3. For that matter, should we have a header <id, source peer, destination peer>?

           This totally reminds me of TCP. But especially when we have speculative execution, this is a must-have.

           4. There should be a simple, reliable transactional protocol between two peers. When the transaction is completed, the sender gets an acknowledgement that the receiver has completely received all the messages.

           Transactions are fine; a very simple approach could be to make a SHA-1 hash of the message bundle and check it on the other side. We are batching the transfer into one big one rather than having many small transfers that need to be transacted.
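
           A sketch of that checksum idea, assuming the bundle is a Writable that can be serialized to bytes (the helper class below is hypothetical): the sender computes the digest before the transfer, the receiver recomputes it and compares the two with MessageDigest.isEqual().

             import java.io.ByteArrayOutputStream;
             import java.io.DataOutputStream;
             import java.io.IOException;
             import java.security.MessageDigest;
             import java.security.NoSuchAlgorithmException;

             import org.apache.hadoop.io.Writable;

             // Hypothetical helper: SHA-1 over a serialized message bundle so the receiver
             // can verify it received exactly the bytes the sender transferred.
             final class BundleChecksum {
               static byte[] sha1(Writable bundle) throws IOException {
                 ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                 bundle.write(new DataOutputStream(bytes)); // serialize the bundle
                 try {
                   return MessageDigest.getInstance("SHA-1").digest(bytes.toByteArray());
                 } catch (NoSuchAlgorithmException e) {
                   throw new IllegalStateException("SHA-1 not available", e);
                 }
               }
             }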

          We are sure of reading all the messages from the DiskQueue. Can we have an Iterator that would close the file once the last record is read?

           Well, it is not guaranteed that the user consumes all the messages, in which case an iterator alone would never close the file. So let's just add explicit close functionality as well; it doesn't really hurt anyone.

           I think you should open a "Speculative task execution" issue and put your thoughts into it. I think this transactional behaviour can be improved quite a bit so that it has negligible overhead. Let's discuss it in another context.

           Thanks, you two! I have a bit of time tomorrow and I'll update the patch accordingly.

          Suraj Menon added a comment -

           We can get duplicate messages even with today's design. The receiver should know whether the messages it has received contain any duplicates; for example, if it got messages from both the original task and its recovered task, there should be a way for the receiver to filter out the duplicates. However, this is part of the solution for HAMA-440, so I hope we can take a decision when we commit for that purpose. Let's ignore the fault-tolerance part in this issue for now.

          Thomas Jungblut added a comment -

          Well you are right.

          Thomas Jungblut added a comment -

          I know the solution:
          We have to add the TaskAttemptId to the RPC call. We can then keep the bundle for a specific source host.
           This is really small overhead and would solve the problem.
           But let's solve this in HAMA-440.

          Thomas Jungblut added a comment -

          Do you think we can combine the message buffering and checkpointing?

          Thomas Jungblut added a comment -

           Okay, I have done it, and it works (the build is still ok), so I have added the disk queue as the default and added a new configuration property to default.xml.

           I've been coding after 10h of work, 2h on this patch, and I'm just tired.

          But please review it and tell me your opinion...

          Thomas Jungblut added a comment -

           BTW, when implementing a queue which offers heap capability, we can sort the messages based on what is defined in WritableComparable.
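
           For example, a heap-backed memory queue could look roughly like this (class name and wiring are assumptions, not part of the attached patch):

             import java.util.Iterator;
             import java.util.PriorityQueue;

             import org.apache.hadoop.io.WritableComparable;

             // Hypothetical sorted in-memory queue: messages come out in the order defined
             // by their WritableComparable.compareTo() implementation.
             public class SortedMemoryQueue<M extends WritableComparable<M>> implements Iterable<M> {

               private final PriorityQueue<M> heap = new PriorityQueue<M>();

               public void add(M msg) {
                 heap.add(msg);
               }

               public int size() {
                 return heap.size();
               }

               @Override
               public Iterator<M> iterator() {
                 // PriorityQueue's own iterator is unordered, so poll() to drain in sorted order.
                 return new Iterator<M>() {
                   @Override
                   public boolean hasNext() {
                     return !heap.isEmpty();
                   }

                   @Override
                   public M next() {
                     return heap.poll(); // destructive, like draining the queue during sync()
                   }

                   @Override
                   public void remove() {
                     throw new UnsupportedOperationException();
                   }
                 };
               }
             }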

          Can someone review this patch please?

          Edward J. Yoon added a comment -

          +1

          BTW, can this be added to 0.5-incubating?

          Thomas Jungblut added a comment -

           I would love to add this; however, I don't know how it behaves outside of the testcases.
           If you could put this through a bit of testing, I would be very glad.

          Edward J. Yoon added a comment -

          Let's schedule this to 0.6.

           While running the SSSP job on a 32-node cluster, I received the error message below:

          12/04/16 19:15:20 DEBUG graph.GraphJobRunner: 459379, 2147483647
          12/04/16 19:15:20 DEBUG graph.GraphJobRunner: 274699, 2147483647
          12/04/16 19:15:20 DEBUG graph.GraphJobRunner: 488920, 2147483647
          12/04/16 19:15:20 ERROR bsp.BSPTask: Error closing BSP Peer.
          java.lang.NullPointerException
          	at org.apache.hama.bsp.message.DiskQueue.add(DiskQueue.java:185)
          	at org.apache.hama.bsp.message.DiskQueue.addAll(DiskQueue.java:177)
          	at org.apache.hama.bsp.message.AbstractMessageManager.clearOutgoingQueues(AbstractMessageManager.java:101)
          	at org.apache.hama.bsp.BSPPeerImpl.clear(BSPPeerImpl.java:378)
          	at org.apache.hama.bsp.BSPPeerImpl.close(BSPPeerImpl.java:370)
          	at org.apache.hama.bsp.BSPTask.runBSP(BSPTask.java:181)
          	at org.apache.hama.bsp.BSPTask.run(BSPTask.java:144)
          	at org.apache.hama.bsp.GroomServer$BSPPeerChild.main(GroomServer.java:1097)
          12/04/16 19:15:20 ERROR bsp.BSPTask: Shutting down ping service.
          12/04/16 19:15:20 FATAL bsp.GroomServer: Error running child
          java.lang.NullPointerException
          	at org.apache.hama.bsp.message.DiskQueue.add(DiskQueue.java:185)
          	at org.apache.hama.bsp.message.DiskQueue.addAll(DiskQueue.java:177)
          	at org.apache.hama.bsp.message.AbstractMessageManager.clearOutgoingQueues(AbstractMessageManager.java:101)
          	at org.apache.hama.bsp.BSPPeerImpl.sync(BSPPeerImpl.java:337)
          	at org.apache.hama.graph.GraphJobRunner.bsp(GraphJobRunner.java:65)
          	at org.apache.hama.bsp.BSPTask.runBSP(BSPTask.java:167)
          	at org.apache.hama.bsp.BSPTask.run(BSPTask.java:144)
          	at org.apache.hama.bsp.GroomServer$BSPPeerChild.main(GroomServer.java:1097)
          java.lang.NullPointerException
          	at org.apache.hama.bsp.message.DiskQueue.add(DiskQueue.java:185)
          	at org.apache.hama.bsp.message.DiskQueue.addAll(DiskQueue.java:177)
          	at org.apache.hama.bsp.message.AbstractMessageManager.clearOutgoingQueues(AbstractMessageManager.java:101)
          	at org.apache.hama.bsp.BSPPeerImpl.sync(BSPPeerImpl.java:337)
          	at org.apache.hama.graph.GraphJobRunner.bsp(GraphJobRunner.java:65)
          	at org.apache.hama.bsp.BSPTask.runBSP(BSPTask.java:167)
          	at org.apache.hama.bsp.BSPTask.run(BSPTask.java:144)
          	at org.apache.hama.bsp.GroomServer$BSPPeerChild.main(GroomServer.java:1097)
          
          Thomas Jungblut added a comment -

           Thanks, yes, that was clear.
           I'll take a closer look on Wednesday.

          Thomas Jungblut added a comment -

           Thanks for the stacktrace Edward; I was quite overworked back then and just hacked a few lines together. I have fixed it now and it works.
           I also added a testcase.

          Please review again and maybe we can put this into 0.5.0. Maybe we can add the sorted queue as well. This is a cool feature and just a neat implementation.

           But please check on your cluster; I only tested in pseudo-distributed mode.

          thomasjungblut@ubuntu:~/workspace/hama-trunk$ /usr/local/hama/bin/hama jar /usr/local/hama/hama-examples-0.5.0-incubating.jar sssp 1 /tmp/sssp-in /tmp/sssp-out
          12/04/18 19:55:30 INFO bsp.FileInputFormat: Total input paths to process : 1
          12/04/18 19:55:31 INFO bsp.FileInputFormat: Total # of splits: 8
          12/04/18 19:55:33 INFO bsp.FileInputFormat: Total input paths to process : 8
          12/04/18 19:55:34 INFO bsp.BSPJobClient: Running job: job_201204181918_0003
          12/04/18 19:55:37 INFO bsp.BSPJobClient: Current supersteps number: 0
          12/04/18 19:55:40 INFO bsp.BSPJobClient: Current supersteps number: 6
          12/04/18 19:55:43 INFO bsp.BSPJobClient: Current supersteps number: 96
          12/04/18 19:55:46 INFO bsp.BSPJobClient: Current supersteps number: 243
          12/04/18 19:55:46 INFO bsp.BSPJobClient: The total number of supersteps: 243
          12/04/18 19:55:46 INFO bsp.BSPJobClient: Counters: 10
          12/04/18 19:55:46 INFO bsp.BSPJobClient:   org.apache.hama.bsp.JobInProgress$JobCounter
          12/04/18 19:55:46 INFO bsp.BSPJobClient:     LAUNCHED_TASKS=8
          12/04/18 19:55:46 INFO bsp.BSPJobClient:   org.apache.hama.bsp.BSPPeerImpl$PeerCounter
          12/04/18 19:55:46 INFO bsp.BSPJobClient:     SUPERSTEPS=243
          12/04/18 19:55:46 INFO bsp.BSPJobClient:     SUPERSTEP_SUM=1944
          12/04/18 19:55:46 INFO bsp.BSPJobClient:     MESSAGE_BYTES_TRANSFERED=158336
          12/04/18 19:55:46 INFO bsp.BSPJobClient:     TIME_IN_SYNC_MS=37868
          12/04/18 19:55:46 INFO bsp.BSPJobClient:     IO_BYTES_READ=3411067
          12/04/18 19:55:46 INFO bsp.BSPJobClient:     TOTAL_MESSAGES_SENT=2184
          12/04/18 19:55:46 INFO bsp.BSPJobClient:     TASK_INPUT_RECORDS=100000
          12/04/18 19:55:46 INFO bsp.BSPJobClient:     TOTAL_MESSAGES_RECEIVED=2184
          12/04/18 19:55:46 INFO bsp.BSPJobClient:     MESSAGE_BYTES_RECEIVED=158336
          Job Finished in 15.236 seconds
          
          Thomas Jungblut added a comment -

          Can we commit this? This is blocking many things (maybe not just for me)

          Edward J. Yoon added a comment -

          I'll test this on my machines tonight.

          Let's add this to TRUNK.

          Edward J. Yoon added a comment -

           It always hangs at step 1.

          edward@slave:~/workspace/hama-trunk$ bin/hama jar examples/target/hama-examples-0.5.0-incubating-SNAPSHOT.jar bench 10 10 10
          12/04/25 19:40:40 DEBUG bsp.BSPJobClient: BSPJobClient.submitJobDir: hdfs://slave.udanax.org:9001/tmp/hadoop-edward/bsp/system/submit_8shd2d
          12/04/25 19:40:41 INFO bsp.BSPJobClient: Running job: job_201204251940_0001
          12/04/25 19:40:41 DEBUG bsp.Counters: Creating group org.apache.hama.bsp.JobInProgress$JobCounter with nothing
          12/04/25 19:40:44 DEBUG bsp.Counters: Creating group org.apache.hama.bsp.JobInProgress$JobCounter with nothing
          12/04/25 19:40:44 INFO bsp.BSPJobClient: Current supersteps number: 0
          12/04/25 19:40:44 DEBUG bsp.Counters: Creating group org.apache.hama.bsp.JobInProgress$JobCounter with nothing
          12/04/25 19:40:47 DEBUG bsp.Counters: Creating group org.apache.hama.bsp.JobInProgress$JobCounter with nothing
          12/04/25 19:40:47 INFO bsp.BSPJobClient: Current supersteps number: 1
          12/04/25 19:40:47 DEBUG bsp.Counters: Creating group org.apache.hama.bsp.JobInProgress$JobCounter with nothing
          12/04/25 19:40:50 DEBUG bsp.Counters: Creating group org.apache.hama.bsp.JobInProgress$JobCounter with nothing
          12/04/25 19:40:50 DEBUG bsp.Counters: Creating group org.apache.hama.bsp.JobInProgress$JobCounter with nothing
          12/04/25 19:40:53 DEBUG bsp.Counters: Creating group org.apache.hama.bsp.JobInProgress$JobCounter with nothing
          12/04/25 19:40:53 DEBUG bsp.Counters: Creating group org.apache.hama.bsp.JobInProgress$JobCounter with nothing
          
          Thomas Jungblut added a comment -

           I checked this and it worked in pseudo-distributed mode. Can you give me a bit more of the log?

          Edward J. Yoon added a comment -
          12/04/25 20:00:44 DEBUG sync.ZooKeeperSyncClientImpl: TASK mapping from zookeeper: 0 : slave.udanax.org:61003 at index 0
          12/04/25 20:00:44 DEBUG sync.ZooKeeperSyncClientImpl: TASK mapping from zookeeper: 1 : slave.udanax.org:61002 at index 1
          12/04/25 20:00:44 DEBUG sync.ZooKeeperSyncClientImpl: TASK mapping from zookeeper: 2 : slave.udanax.org:61001 at index 2
          12/04/25 20:00:44 DEBUG sync.ZooKeeperSyncClientImpl: TASK mapping from zookeeper: 3 : slave2.udanax.org:61003 at index 3
          12/04/25 20:00:44 DEBUG sync.ZooKeeperSyncClientImpl: TASK mapping from zookeeper: 4 : slave2.udanax.org:61001 at index 4
          12/04/25 20:00:44 DEBUG sync.ZooKeeperSyncClientImpl: TASK mapping from zookeeper: 5 : slave2.udanax.org:61002 at index 5
          12/04/25 20:00:44 DEBUG bsp.Counters: Creating group org.apache.hama.bsp.BSPPeerImpl$PeerCounter with nothing
          12/04/25 20:00:44 DEBUG bsp.Counters: Adding TOTAL_MESSAGES_SENT
          12/04/25 20:00:44 DEBUG message.AbstractMessageManager: Send message (3.1588) to slave2.udanax.org:61003
          12/04/25 20:00:44 DEBUG bsp.Counters: Adding SUPERSTEP_SUM
          12/04/25 20:00:44 DEBUG sync.ZooKeeperSyncClientImpl: [slave2.udanax.org:61002] enter the enterbarrier: 0
          12/04/25 20:00:44 DEBUG sync.ZooKeeperSyncClientImpl: ===> at superstep :0 current znode size: 6 current znodes:[attempt_201204252000_0001_000002_0, attempt_201204252000_0001_000000_0, attempt_201204252000_0001_000004_0, attempt_201204252000_0001_000001_0, attempt_201204252000_0001_000005_0, attempt_201204252000_0001_000003_0]
          12/04/25 20:00:44 DEBUG sync.ZooKeeperSyncClientImpl: enterBarrier() znode size within /bsp/job_201204252000_0001/0 is 6. Znodes include [attempt_201204252000_0001_000002_0, attempt_201204252000_0001_000000_0, attempt_201204252000_0001_000004_0, attempt_201204252000_0001_000001_0, attempt_201204252000_0001_000005_0, attempt_201204252000_0001_000003_0]
          12/04/25 20:00:44 DEBUG sync.ZooKeeperSyncClientImpl: ---> at superstep: 0 task that is creating /ready znode:attempt_201204252000_0001_000005_0
          12/04/25 20:00:45 DEBUG bsp.BSPPeerImpl: Enabled = false checkPointInterval = 1 lastCheckPointStep = 0 getSuperstepCount() = 0
          12/04/25 20:00:45 DEBUG bsp.Counters: Adding COMPRESSED_BYTES_SENT
          12/04/25 20:00:45 INFO ipc.NettyTransceiver: Connecting to slave2.udanax.org/192.168.123.138:61003
          12/04/25 20:00:45 INFO ipc.NettyTransceiver: [id: 0x25786286] OPEN
          12/04/25 20:00:45 INFO ipc.NettyTransceiver: [id: 0x25786286, /192.168.123.138:33600 => slave2.udanax.org/192.168.123.138:61003] BOUND: /192.168.123.138:33600
          12/04/25 20:00:45 INFO ipc.NettyTransceiver: [id: 0x25786286, /192.168.123.138:33600 => slave2.udanax.org/192.168.123.138:61003] CONNECTED: slave2.udanax.org/192.168.123.138:61003
          12/04/25 20:00:45 DEBUG sync.ZooKeeperSyncClientImpl: leaveBarrier() !!! checking znodes contnains /ready node or not: at superstep:0 znode:[attempt_201204252000_0001_000004_0, attempt_201204252000_0001_000003_0, attempt_201204252000_0001_000000_0, attempt_201204252000_0001_000005_0, ready]
          12/04/25 20:00:45 DEBUG sync.ZooKeeperSyncClientImpl: leaveBarrier() at superstep:0 znode size: (4) znodes:[attempt_201204252000_0001_000004_0, attempt_201204252000_0001_000003_0, attempt_201204252000_0001_000000_0, attempt_201204252000_0001_000005_0]
          12/04/25 20:00:45 DEBUG sync.ZooKeeperSyncClientImpl: leaveBarrier(): superstep:0 taskid:attempt_201204252000_0001_000005_0 wait for lowest notify.
          12/04/25 20:00:45 DEBUG sync.ZooKeeperSyncClientImpl: leaveBarrier() at superstep: 0 taskid:attempt_201204252000_0001_000005_0 lowest notify other nodes.
          12/04/25 20:00:45 DEBUG sync.ZooKeeperSyncClientImpl: leaveBarrier() !!! checking znodes contnains /ready node or not: at superstep:0 znode:[ready]
          12/04/25 20:00:45 DEBUG sync.ZooKeeperSyncClientImpl: leaveBarrier() at superstep:0 znode size: (0) znodes:[]
          12/04/25 20:00:45 DEBUG bsp.Counters: Adding TIME_IN_SYNC_MS
          12/04/25 20:00:47 DEBUG bsp.BSPTask: Pinging at time java.util.GregorianCalendar[time=1335351647316,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="Asia/Seoul",offset=32400000,dstSavings=0,useDaylight=false,transitions=14,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2012,MONTH=3,WEEK_OF_YEAR=17,WEEK_OF_MONTH=4,DAY_OF_MONTH=25,DAY_OF_YEAR=116,DAY_OF_WEEK=4,DAY_OF_WEEK_IN_MONTH=4,AM_PM=1,HOUR=8,HOUR_OF_DAY=20,MINUTE=0,SECOND=47,MILLISECOND=316,ZONE_OFFSET=32400000,DST_OFFSET=0]
          12/04/25 20:00:49 DEBUG bsp.BSPTask: Pinging at time java.util.GregorianCalendar[time=1335351649817,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="Asia/Seoul",offset=32400000,dstSavings=0,useDaylight=false,transitions=14,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2012,MONTH=3,WEEK_OF_YEAR=17,WEEK_OF_MONTH=4,DAY_OF_MONTH=25,DAY_OF_YEAR=116,DAY_OF_WEEK=4,DAY_OF_WEEK_IN_MONTH=4,AM_PM=1,HOUR=8,HOUR_OF_DAY=20,MINUTE=0,SECOND=49,MILLISECOND=817,ZONE_OFFSET=32400000,DST_OFFSET=0]
          12/04/25 20:00:52 DEBUG bsp.BSPTask: Pinging at time java.util.GregorianCalendar[time=1335351652318,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="Asia/Seoul",offset=32400000,dstSavings=0,useDaylight=false,transitions=14,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2012,MONTH=3,WEEK_OF_YEAR=17,WEEK_OF_MONTH=4,DAY_OF_MONTH=25,DAY_OF_YEAR=116,DAY_OF_WEEK=4,DAY_OF_WEEK_IN_MONTH=4,AM_PM=1,HOUR=8,HOUR_OF_DAY=20,MINUTE=0,SECOND=52,MILLISECOND=318,ZONE_OFFSET=32400000,DST_OFFSET=0]
          12/04/25 20:00:54 DEBUG bsp.BSPTask: Pinging at time java.util.GregorianCalendar[time=1335351654819,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="Asia/Seoul",offset=32400000,dstSavings=0,useDaylight=false,transitions=14,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2012,MONTH=3,WEEK_OF_YEAR=17,WEEK_OF_MONTH=4,DAY_OF_MONTH=25,DAY_OF_YEAR=116,DAY_OF_WEEK=4,DAY_OF_WEEK_IN_MONTH=4,AM_PM=1,HOUR=8,HOUR_OF_DAY=20,MINUTE=0,SECOND=54,MILLISECOND=819,ZONE_OFFSET=32400000,DST_OFFSET=0]
          12/04/25 20:00:57 DEBUG bsp.BSPTask: Pinging at time java.util.GregorianCalendar[time=1335351657320,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="Asia/Seoul",offset=32400000,dstSavings=0,useDaylight=false,transitions=14,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2012,MONTH=3,WEEK_OF_YEAR=17,WEEK_OF_MONTH=4,DAY_OF_MONTH=25,DAY_OF_YEAR=116,DAY_OF_WEEK=4,DAY_OF_WEEK_IN_MONTH=4,AM_PM=1,HOUR=8,HOUR_OF_DAY=20,MINUTE=0,SECOND=57,MILLISECOND=320,ZONE_OFFSET=32400000,DST_OFFSET=0]
          12/04/25 20:00:59 DEBUG bsp.BSPTask: Pinging at time java.util.GregorianCalendar[time=1335351659821,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="Asia/Seoul",offset=32400000,dstSavings=0,useDaylight=false,transitions=14,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2012,MONTH=3,WEEK_OF_YEAR=17,WEEK_OF_MONTH=4,DAY_OF_MONTH=25,DAY_OF_YEAR=116,DAY_OF_WEEK=4,DAY_OF_WEEK_IN_MONTH=4,AM_PM=1,HOUR=8,HOUR_OF_DAY=20,MINUTE=0,SECOND=59,MILLISECOND=821,ZONE_OFFSET=32400000,DST_OFFSET=0]
          12/04/25 20:01:02 DEBUG bsp.BSPTask: Pinging at time java.util.GregorianCalendar[time=1335351662322,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="Asia/Seoul",offset=32400000,dstSavings=0,useDaylight=false,transitions=14,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2012,MONTH=3,WEEK_OF_YEAR=17,WEEK_OF_MONTH=4,DAY_OF_MONTH=25,DAY_OF_YEAR=116,DAY_OF_WEEK=4,DAY_OF_WEEK_IN_MONTH=4,AM_PM=1,HOUR=8,HOUR_OF_DAY=20,MINUTE=1,SECOND=2,MILLISECOND=322,ZONE_OFFSET=32400000,DST_OFFSET=0]
          12/04/25 20:01:04 DEBUG bsp.BSPTask: Pinging at time java.util.GregorianCalendar[time=1335351664823,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="Asia/Seoul",offset=32400000,dstSavings=0,useDaylight=false,transitions=14,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2012,MONTH=3,WEEK_OF_YEAR=17,WEEK_OF_MONTH=4,DAY_OF_MONTH=25,DAY_OF_YEAR=116,DAY_OF_WEEK=4,DAY_OF_WEEK_IN_MONTH=4,AM_PM=1,HOUR=8,HOUR_OF_DAY=20,MINUTE=1,SECOND=4,MILLISECOND=823,ZONE_OFFSET=32400000,DST_OFFSET=0]
          12/04/25 20:01:07 DEBUG bsp.BSPTask: Pinging at time java.util.GregorianCalendar[time=1335351667324,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="Asia/Seoul",offset=32400000,dstSavings=0,useDaylight=false,transitions=14,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2012,MONTH=3,WEEK_OF_YEAR=17,WEEK_OF_MONTH=4,DAY_OF_MONTH=25,DAY_OF_YEAR=116,DAY_OF_WEEK=4,DAY_OF_WEEK_IN_MONTH=4,AM_PM=1,HOUR=8,HOUR_OF_DAY=20,MINUTE=1,SECOND=7,MILLISECOND=324,ZONE_OFFSET=32400000,DST_OFFSET=0]
          12/04/25 20:01:09 DEBUG bsp.BSPTask: Pinging at time java.util.GregorianCalendar[time=1335351669825,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="Asia/Seoul",offset=32400000,dstSavings=0,useDaylight=false,transitions=14,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2012,MONTH=3,WEEK_OF_YEAR=17,WEEK_OF_MONTH=4,DAY_OF_MONTH=25,DAY_OF_YEAR=116,DAY_OF_WEEK=4,DAY_OF_WEEK_IN_MONTH=4,AM_PM=1,HOUR=8,HOUR_OF_DAY=20,MINUTE=1,SECOND=9,MILLISECOND=825,ZONE_OFFSET=32400000,DST_OFFSET=0]
          12/04/25 20:01:12 DEBUG bsp.BSPTask: Pinging at time java.util.GregorianCalendar[time=1335351672326,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="Asia/Seoul",offset=32400000,dstSavings=0,useDaylight=false,transitions=14,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2012,MONTH=3,WEEK_OF_YEAR=17,WEEK_OF_MONTH=4,DAY_OF_MONTH=25,DAY_OF_YEAR=116,DAY_OF_WEEK=4,DAY_OF_WEEK_IN_MONTH=4,AM_PM=1,HOUR=8,HOUR_OF_DAY=20,MINUTE=1,SECOND=12,MILLISECOND=326,ZONE_OFFSET=32400000,DST_OFFSET=0]
          12/04/25 20:01:14 DEBUG bsp.BSPTask: Pinging at time java.util.GregorianCalendar[time=1335351674827,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="Asia/Seoul",offset=32400000,dstSavings=0,useDaylight=false,transitions=14,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2012,MONTH=3,WEEK_OF_YEAR=17,WEEK_OF_MONTH=4,DAY_OF_MONTH=25,DAY_OF_YEAR=116,DAY_OF_WEEK=4,DAY_OF_WEEK_IN_MONTH=4,AM_PM=1,HOUR=8,HOUR_OF_DAY=20,MINUTE=1,SECOND=14,MILLISECOND=827,ZONE_OFFSET=32400000,DST_OFFSET=0]
          
          Thomas Jungblut added a comment -

           It just hangs? That doesn't seem very reasonable...

          Edward J. Yoon added a comment -

           This patch contains my local changes.

          Thomas Jungblut added a comment -

           Where is ${hama.tmp.dir} defined?

          Edward J. Yoon added a comment -
            <property>
              <name>hama.tmp.dir</name>
              <value>/tmp/hama-${user.name}</value>
              <description>Temporary directory on the local filesystem.</description>
            </property>
            <property>
              <name>bsp.disk.queue.dir</name>
              <value>${hama.tmp.dir}/messages/</value>
              <description>Temporary directory on the local message buffer on disk.</description>
            </property>
          

           But I can't find the messages directory.

          edward@slave:~/workspace/hama-trunk$ ls -al /tmp/hama-edward/
          drwxr-xr-x  3 edward edward  4096 2012-04-25 19:36 .
          drwxrwxrwt 25 root   root   20480 2012-04-25 20:14 ..
          drwxr-xr-x  3 edward edward  4096 2012-04-25 19:36 zookeeper
          edward@slave:~/workspace/hama-trunk$ ls -al /tmp/message
          messageQueue/   messageStorage/ 
          edward@slave:~/workspace/hama-trunk$ ls -al /tmp/messageQueue/diskqueue/job_1_0001/task_1_0001_000001/
          drwxr-xr-x 2 edward edward    4096 2012-04-25 19:17 .
          drwxr-xr-x 3 edward edward    4096 2012-04-25 19:17 ..
          -rw-r--r-- 1 edward edward       0 2012-04-25 19:17 .0_messages.seq.crc
          -rw-r--r-- 1 edward edward    8752 2012-04-25 19:17 .1_messages.seq.crc
          -rw-r--r-- 1 edward edward       0 2012-04-25 19:17 .2_messages.seq.crc
          -rw-r--r-- 1 edward edward       0 2012-04-25 19:17 .4_messages.seq.crc
          -rwxrwxrwx 1 edward edward       0 2012-04-25 19:17 0_messages.seq
          -rwxrwxrwx 1 edward edward 1118856 2012-04-25 19:17 1_messages.seq
          -rwxrwxrwx 1 edward edward       0 2012-04-25 19:17 2_messages.seq
          -rwxrwxrwx 1 edward edward       0 2012-04-25 19:17 4_messages.seq
          
          Thomas Jungblut added a comment -

          + fs = FileSystem.get(conf);

          Maybe the files ended up in HDFS? FileSystem.get(conf) returns the configured default filesystem, not necessarily the local one.
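
          If the queue is really meant to live on local disk, the path resolution might need to look something like this (just a sketch under that assumption, not code from the patch; the helper class name is made up, the property names are the ones from the config quoted above):

            import java.io.IOException;

            import org.apache.hadoop.conf.Configuration;
            import org.apache.hadoop.fs.FileSystem;
            import org.apache.hadoop.fs.LocalFileSystem;
            import org.apache.hadoop.fs.Path;

            // Hypothetical helper: resolve the disk queue directory on the *local*
            // filesystem, no matter what fs.default.name points to.
            public class LocalQueueDirSketch {

              public static Path resolve(Configuration conf) throws IOException {
                // getLocal() always returns the local filesystem implementation
                LocalFileSystem localFs = FileSystem.getLocal(conf);
                // fall back to hama.tmp.dir if bsp.disk.queue.dir is not set
                String dir = conf.get("bsp.disk.queue.dir",
                    conf.get("hama.tmp.dir", "/tmp") + "/messages");
                Path queueDir = new Path(dir);
                localFs.mkdirs(queueDir);
                return localFs.makeQualified(queueDir);
              }
            }

          The queue would then open its sequence files under the returned path instead of whatever FileSystem.get(conf) resolves to.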

          Edward J. Yoon added a comment -

          Oh, OK.

          But it doesn't work. :/

          Thomas Jungblut added a comment -

          Kk, thanks for your time. I'll have to set up distributed mode myself then.

          Thomas Jungblut added a comment -

          Very strange, though, that the test cases work correctly. Seems like an FS problem to me.

          Edward J. Yoon added a comment -

          I came back from dinner

          I think you should initialize it only once, like this:

            @Override
            public void prepareWrite() {
              try {
                if (writer == null) {
                  writer = new SequenceFile.Writer(fs, conf, queuePath,
                      ObjectWritable.class, NullWritable.class);
                }
          
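          A self-contained version of that guarded, lazy initialization could look like this (again only a sketch; the class and field names are assumptions, not the actual patch code):

            import java.io.IOException;

            import org.apache.hadoop.conf.Configuration;
            import org.apache.hadoop.fs.FileSystem;
            import org.apache.hadoop.fs.Path;
            import org.apache.hadoop.io.NullWritable;
            import org.apache.hadoop.io.ObjectWritable;
            import org.apache.hadoop.io.SequenceFile;

            // Hypothetical stand-in for the disk queue's writer handling.
            public class DiskQueueWriterSketch {

              private final FileSystem fs;
              private final Configuration conf;
              private final Path queuePath;
              private SequenceFile.Writer writer;

              public DiskQueueWriterSketch(FileSystem fs, Configuration conf, Path queuePath) {
                this.fs = fs;
                this.conf = conf;
                this.queuePath = queuePath;
              }

              // Create the writer exactly once; later calls reuse it instead of
              // re-creating (and overwriting) the sequence file.
              public void prepareWrite() {
                if (writer == null) {
                  try {
                    writer = new SequenceFile.Writer(fs, conf, queuePath,
                        ObjectWritable.class, NullWritable.class);
                  } catch (IOException e) {
                    throw new RuntimeException("could not open message sequence file", e);
                  }
                }
              }
            }

          When and where the writer gets closed again is a separate question and depends on the queue lifecycle.
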
          Thomas Jungblut added a comment -

          That part was coded rather hastily; I think I will have to make a diagram to clear up these issues.

          It is a bit confusing which method is called when.

          Edward J. Yoon added a comment -

          Anyway, the culprit is in clearOutgoingQueues().

          Thomas Jungblut added a comment -

          Maybe we can commit without the disk queue and make a follow-up. The refactoring is needed more for the other issues than the disk implementation.

          What do you think?

          Edward J. Yoon added a comment -

          I'm good.

          Please upload the new patch here.

          Thomas Jungblut added a comment -

          Great, maybe someone else wants to have fun with a disk queue; I think my implementation sucks anyway.
          I'll provide the new patch tonight.

          Thomas Jungblut added a comment -

          Updated version, the disk queue is deactivated.
          I'm going to file a follow-up tomorrow.

          Thomas Jungblut added a comment -

          Hey guys, can someone review and/or commit this?

          Edward J. Yoon added a comment -

          I can't build with this patch.

          Thomas Jungblut added a comment -

          OMG, I forgot to add the new files to the patch.

          The build now runs fine:

          [INFO] ------------------------------------------------------------------------
          [INFO] Reactor Summary:
          [INFO] 
          [INFO] Apache Hama parent POM ............................ SUCCESS [30.486s]
          [INFO] core .............................................. SUCCESS [7:57.881s]
          [INFO] graph ............................................. SUCCESS [1.187s]
          [INFO] examples .......................................... SUCCESS [57.905s]
          [INFO] yarn .............................................. SUCCESS [4.007s]
          [INFO] hama-dist ......................................... SUCCESS [23.046s]
          [INFO] ------------------------------------------------------------------------
          [INFO] BUILD SUCCESS
          [INFO] ------------------------------------------------------------------------
          [INFO] Total time: 9:55.108s
          [INFO] Finished at: Sat Apr 28 10:17:38 CEST 2012
          
          
          Edward J. Yoon added a comment -

          Thomas,

          Do you want to add this to 0.5 TRUNK?

          Thomas Jungblut added a comment -

          Don't know, what's your opinion?

          Edward J. Yoon added a comment -

          Builds OK. Examples and my BSP programs all run fine on a 4-node cluster.

          Let's commit this.

          Thomas Jungblut added a comment -

          Fine +1. The disk queue will be another issue.

          Edward J. Yoon added a comment -

          Thanks Thomas, I've committed this to TRUNK.


            People

            • Assignee:
              Thomas Jungblut
              Reporter:
              Thomas Jungblut
            • Votes:
              0
              Watchers:
              0
