Solr / SOLR-3685

Solr Cloud sometimes skipped peersync attempt and replicated instead due to tlog flags not being cleared when no updates were buffered during a previous replication.

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 4.0-ALPHA
    • Fix Version/s: 4.0, 6.0
    • Labels:
      None
    • Environment:

      Debian GNU/Linux Squeeze 64bit
      Solr 5.0-SNAPSHOT 1365667M - markus - 2012-07-25 19:09:43

    Description

      There's a serious problem with restarting nodes: old or unused index directories are not cleaned up, sudden replication kicks in, and Java gets killed by the OS due to excessive memory allocation. Since SOLR-1781 was fixed, index directories get cleaned up when a node is restarted cleanly; however, old or unused index directories still pile up if Solr crashes or is killed by the OS, which is what happens here.

      We have a six-node 64-bit Linux test cluster with each node having two shards. There's 512MB of RAM available and no swap. Each index is roughly 27MB, so about 50MB per node; this fits easily and works fine. However, if a node is restarted, Solr will consistently crash because it immediately eats up all RAM. If swap is enabled, Solr will eat an additional few hundred MB right after start-up.

      This cannot be solved by restarting Solr; it will just crash again and leave index directories in place until the disk is full. The only way I can restart a node safely is to delete the index directories and have it replicate from another node. If I then restart the node, it will crash almost consistently.

      I'll attach a log of one of the nodes.

    Attachments

      1. info.log (438 kB, Markus Jelsma)
      2. oom-killer.log (14 kB, Markus Jelsma)
      3. pmap.log (49 kB, Markus Jelsma)

    Activity

        Markus Jelsma added a comment -

        Here's a log for a node where the Java process is being killed by the OS. I can reproduce this consistently.

        Markus Jelsma added a comment -

        I forgot to add that it doesn't matter if updates are sent to the cluster. A node will start to replicate on startup even when it's up to date, and crash subsequently.

        Uwe Schindler added a comment -

        How much heap do you assign to Solr's Java process (-Xmx)? 512 MB of physical RAM is very little. The Jetty default is, as far as I remember, larger. If the OS kills processes in its OOM process killer, we cannot do much, as those processes are killed with a hard SIGKILL (-9), not SIGTERM.
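
        For a quick sanity check (an illustrative addition, not part of the original exchange), a tiny Java program like the following reports the heap cap a JVM actually received; Runtime.getRuntime().maxMemory() roughly corresponds to the effective -Xmx:

        public class HeapCheck {
            public static void main(String[] args) {
                // maxMemory() is the heap ceiling the JVM is running with (~ the effective -Xmx).
                long maxHeapMB = Runtime.getRuntime().maxMemory() / (1024 * 1024);
                System.out.println("Effective max heap: ~" + maxHeapMB + " MB");
            }
        }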

        Markus Jelsma added a comment -

        I should have added this. I allocate just 98MB to the heap and 32MB to the permgen, so there's just 130MB allocated.

        Uwe Schindler added a comment -

        Is it a 32-bit or 64-bit JVM?

        Markus Jelsma added a comment -

        Java 1.6.0-26 64-bit, just like Linux.

        I should also note that I made an error in the configuration. I thought I had reduced the DocumentCache size to 64, but the node I was testing on had a size of 1024 configured, and I redistributed that config over the cluster via the config bootstrap.

        This still leaves the problem that Solr itself should run out of memory, not the OS, since the cache is part of the heap. It should also clean up old index directories. So this issue may consist of multiple problems.

        Uwe Schindler added a comment -

        OK, I wanted to come back:
        From what I see, 96MB of heap is very little for Solr. Tests are running with -Xmx512. But regarding memory consumption (Java heap OOMs), Mark Miller might know better.

        But Solr will not use all available RAM. As you are on 64-bit Java, Solr defaults to MMapDirectory - I recommend reading: http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html

        It will allocate from the system only the heap plus what Java itself needs. Everything else is only allocated as address space to directly access the file system cache. So the real memory usage of Solr is not what "top" reports in the "VIRT" column but in the "RES" column (resident memory). VIRT can be much higher (multiples of system RAM) with MMapDirectory, as it only shows virtual address space allocated. This cannot cause the kernel OOM killer to become active and kill processes; if that happens you have too little RAM for the kernel, Solr + tools, sorry.
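
        To make the VIRT-versus-RES distinction concrete, here is a minimal illustrative Java sketch (not from this issue; the file name and the 512MB size are arbitrary) showing that an mmapped file enlarges virtual address space without growing the Java heap:

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        public class MMapDemo {
            public static void main(String[] args) throws Exception {
                Runtime rt = Runtime.getRuntime();
                long heapBefore = rt.totalMemory() - rt.freeMemory();

                RandomAccessFile raf = new RandomAccessFile("mmap-demo.bin", "rw");
                raf.setLength(512L * 1024 * 1024); // 512MB of address space, not heap
                FileChannel ch = raf.getChannel();
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, raf.length());
                buf.get(0); // touched pages go to the OS page cache, still not to the heap

                long heapAfter = rt.totalMemory() - rt.freeMemory();
                // Heap usage barely moves; "top" would show the 512MB only under VIRT.
                System.out.printf("heap used before=%dKB after=%dKB%n",
                        heapBefore / 1024, heapAfter / 1024);
                ch.close();
                raf.close();
            }
        }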

        Markus Jelsma added a comment -

        Hi - I don't look at virtual memory but at resident memory. My Solr install here will eat up to 512MB of RESIDENT memory and is killed by the OS. The virtual memory will then be almost 800MB, while both indexes are just 27MB in size. This sounds like a lot of VIRT and RES for a tiny index and a tiny heap.

        Also, Solr will run fine and fast with just 100MB of memory; the index is still very small.

        Thanks

        Uwe Schindler added a comment -

        I have no idea what libraries bundled by Solr do off-heap, but as your problem seems to be related to cloud, it might be another thing in JVMs: direct memory (allocated by ByteBuffer.allocateDirect()). By default the JVM allows up to the heap size to be allocated in this space external to the heap, so your -Xmx is only half of the truth. Solr by itself does not use direct memory (only mmapped memory, but that is not resident), but I am not sure about ZooKeeper and all that cloud stuff (and maybe plugins like Tika extraction).

        You can limit direct memory with: -XX:MaxDirectMemorySize=<size>

        The VIRT column can additionally contain 2-3 times your index size, depending on pending commits, merges, ...

        Please report back what this changes!
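
        As an illustration of the direct-memory point (a sketch, not Solr code; the 1MB chunk size and the suggested flags are arbitrary), running something like the following with -Xmx64m -XX:MaxDirectMemorySize=16m ends in an OutOfMemoryError once the off-heap cap, rather than the heap, is exhausted:

        import java.nio.ByteBuffer;
        import java.util.ArrayList;
        import java.util.List;

        public class DirectMemoryDemo {
            public static void main(String[] args) {
                List<ByteBuffer> buffers = new ArrayList<ByteBuffer>();
                try {
                    while (true) {
                        // Each direct buffer lives outside the Java heap, so it raises RES
                        // without ever counting against -Xmx.
                        buffers.add(ByteBuffer.allocateDirect(1024 * 1024));
                    }
                } catch (OutOfMemoryError e) {
                    System.out.println("Direct memory exhausted after ~" + buffers.size() + " MB: " + e);
                }
            }
        }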

        Markus Jelsma added a comment -

        OK, I have increased my DocumentCache again to reproduce the problem and changed -XX:MaxDirectMemorySize from 100m to 10m, but RES is still climbing at the same rate as before, so no change. We don't use Tika, only ZooKeeper.

        About virtual memory: that also climbs to ~800MB, which is many times more than the index size. There are no pending commits or merges right after start-up.

        There may be some cloud-replication-related process that eats the RAM.

        Thanks

        Mark Miller added a comment -

        Seems there are perhaps two or three issues here.

        1. The resource usage. This may just be because replication causes two searchers to be open at the same time briefly? I really don't have any guesses at the moment.

        2. On a non-graceful shutdown, old index dirs may end up left behind. We could look at cleaning them up on startup, but that should be its own issue.

        3. You claim you are replicating on startup even though the shards should be in sync. You should not be replicating in that case.

        Markus Jelsma added a comment -

        Hi,

        1. Yes, but we allow only one searcher to be warmed at the same time. This resource usage also belongs to the Java heap; it cannot cause 5x as much heap to be allocated.

        2. Yes, I'll open a new issue and refer to this one.

        3. Well, in some logs I clearly see a core is attempting to download, and judging from the multiple index directories it's true. I am very sure no updates have been added to the cluster for a long time, yet it still attempts to recover. Below is a core recovering.

        2012-07-30 09:48:36,970 INFO [solr.cloud.ZkController] - [main] - : We are http://nl2.index.openindex.io:8080/solr/openindex_a/ and leader is http://nl1.index.openindex.io:8080/solr/openindex_a/
        2012-07-30 09:48:36,970 INFO [solr.cloud.ZkController] - [main] - : No LogReplay needed for core=openindex_a baseURL=http://nl2.index.openindex.io:8080/solr
        2012-07-30 09:48:36,970 INFO [solr.cloud.ZkController] - [main] - : Core needs to recover:openindex_a
        

        Something noteworthy may be that for some reason the index versions of all cores and their replicas don't match. After a restart the generation of a core is also different, while it shouldn't have changed. The size in bytes is also slightly different (~20 bytes).

        The main concern is that Solr consumes 5x the allocated heap space in resident memory. Caches and such are in the heap, and the mmapped index dir should be in virtual memory and not cause the kernel to kill the process. I'm not yet sure what's going on here. Also, according to Uwe, virtual memory should not be more than 2-3 times the index size. In our case we see ~800MB of virtual memory for two 26MB cores right after start-up.

        We have only allocated 98MB to the heap for now, and this is enough for such a small index.

        Mark Miller added a comment -

        Is it 2 or 3 cores you have? One thing is that it won't be just one extra searcher and index - it will be that times the number of cores. All of them will attempt to recover at the same time. So you will see a bump in RAM requirements. You are talking about off-heap RAM though - I don't think SolrCloud will have much to do with that.

        Looking at your logs, it appears that you are replicating because the transaction logs look suspect - probably because of a hard power down. If you shut down gracefully, you would get a peer sync instead, which should determine you are up to date.

        The comment for the path you are taking says:

        // last operation at the time of startup had the GAP flag set...
        // this means we were previously doing a full index replication
        // that probably didn't complete and buffering updates in the meantime.
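
        Purely as an illustration of the decision that comment describes (the names and flag value below are hypothetical, not Solr's actual recovery code): if the last tlog record still carries a GAP flag left over from an earlier replication, the node forces a full replication instead of attempting a peersync. The eventual fix for this issue clears that flag when no updates were buffered during the previous replication.

        public class RecoveryDecisionSketch {
            // Hypothetical flag value, for illustration only.
            static final int FLAG_GAP = 0x02;

            static String recoveryStrategy(int lastTlogEntryFlags) {
                if ((lastTlogEntryFlags & FLAG_GAP) != 0) {
                    // Assume a previous replication never completed: copy the full index.
                    return "full replication";
                }
                // Cheap path: sync recent updates from a peer.
                return "peersync";
            }

            public static void main(String[] args) {
                // Symptom reported here: the GAP flag was never cleared when no updates were
                // buffered during a previous replication, so even an up-to-date node took
                // the "full replication" branch on startup.
                System.out.println(recoveryStrategy(FLAG_GAP));
            }
        }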

        Mark Miller added a comment -

        Looking at your logs, it appears that you are replicating because the transaction logs look suspect - probably because of a hard power down. If you shut down gracefully, you would get a peer sync instead, which should determine you are up to date.

        Alright, I just saw a similar thing happen even shutting everyone down gracefully.

        I think it's likely our kind of un-orderly cluster shutdown. If you shut down all the nodes at once, then depending on some timing differences, some recoveries may trigger as the leader goes down. Then the replica would go down.

        In my case though, I was working with a 3 shard 2 replica cluster - so I don't think that was likely the issue. If one node goes down, there is no one to recover from.

        We need to investigate a bit more.

        Markus Jelsma added a comment -

        Each node has two cores and allows only one warming searcher at any time. The problem is triggered on start-up after a graceful shutdown as well as a hard power-off. I've seen it happen not only when the whole cluster is restarted (I don't think I've ever done that) but also when just one node of the 6-shard 2-replica test cluster is restarted.

        The attached log is of one node being restarted out of the whole cluster.

        Could the off-heap RAM be part of data being sent over the wire?

        We've worked around the problem for now by getting more RAM.

        Mark Miller added a comment -

        I was off a bit - even a non-graceful shutdown should not cause this - if you are not indexing when you shut down, at worst nodes should sync, not replicate.

        In my testing, I could easily reproduce this though - replication recoveries happening when it should be a sync.

        Yonik recently committed a fix to this on trunk.

        Markus Jelsma added a comment -

        When exactly? Do you have an issue number?

        Mark Miller added a comment -

        It was tagged to this issue number:

        +* SOLR-3685: Solr Cloud sometimes skipped peersync attempt and replicated instead due
        + to tlog flags not being cleared when no updates were buffered during a previous
        + replication. (Markus Jelsma, Mark Miller, yonik)

        Mark Miller added a comment -

        I think we still need to make an issue for cleaning up replication directories on non-graceful shutdown.

        I'll rename this issue to match the recovery issue.

        And we can create a new issue for the memory thing (I tried to spot that locally, but have not yet).

        Markus Jelsma added a comment -

        Finally! Two nodes failed again and got killed by the OS. All nodes have a lot of off-heap RES memory, sometimes 3x higher than the heap, which is a meager 256MB.

        Got a name suggestion for the memory issue? I'll open one tomorrow and link to this one.

        Mark Miller added a comment -

        Are there any crash dump files? I don't think I've seen a java process crash without seeing one of these.

        Markus Jelsma added a comment -

        One node also got rsyslogd killed, but the other survived. I assume the OOM-killer output of Linux is what you refer to?

        Markus Jelsma added a comment -

        Here's the relevant part of syslog for a node where Tomcat is killed by the OS. There is 1GB of available RAM, no configured swap, and the heap size is 256MB. The node has two running cores.

        The off-heap RES memory for the Java process sometimes gets so large that Linux decides to kill it.

        Yonik Seeley added a comment -

        May also want to try specifying NIOFSDirectoryFactory in solrconfig.xml to see if it's related to mmap?
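
        For reference, switching the directory implementation is a small override in solrconfig.xml along these lines (illustrative snippet):

        <!-- Read the index via NIO instead of mmap. -->
        <directoryFactory name="DirectoryFactory" class="solr.NIOFSDirectoryFactory"/>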

        Markus Jelsma added a comment - edited

        We didn't think mmap could be the cause, but nevertheless we tried that once on a smaller cluster and got a lot of memory consumption again, after which it got killed.
        I can see if I can run one or two of the nodes with NIOFS but let the others run with mmap. We don't automatically restart cores, so it should run fine if we temporarily change the config in ZooKeeper and restart two nodes.

        edit: each core has a ~2.5GB index.

        Uwe Schindler added a comment -

        Hi,
        I also don't think MMap is the reason for this, but it's good that you test it. You are saying that this happened with NIOFS, too, so my only guess is:

        As noted before (in my last comment), there seems to be something using off-heap memory (RES does not contain mmap, so if RES rises, it's definitely not mmap), but other "direct memory". I am not sure about other components in Solr that might use direct memory. Maybe ZooKeeper? It's hard to find those things in external libraries. Can you try to limit -XX:MaxDirectMemorySize to zero and see if exceptions occur? Also it would be good to have the output of "pmap <pid>"; this shows allocated and mapped memory, and we should look at anonymous mappings and how many there are. pmap is in the procutils package.

        Markus Jelsma added a comment -

        Here's the pmap for one node. Heap Xmx is still 256M. I also just noticed this node still has OpenJDK 6 running instead of Sun Java 6 like the other nodes. Despite that difference, the memory consumption is equal.
        I'll also restart a node with NIOFS, but I still expect memory to increase as with mmap.

        Markus Jelsma added a comment -

        To my surprise, the RES for all nodes except the NIOFS node increased slowly over the past three days and was still increasing today. The mmapped nodes sometimes used up to three times the Xmx and, for some reason, about half the Xmx in shared memory. We just restarted all nodes, with 9 using mmap and one using NIO; after the restart the mmapped nodes immediately start to use a lot more RES than the NIO node. The NIO node also uses much less shared memory.

        Perhaps what I've seen before with NIO also crashing was due to some other issue.

        So what we're seeing here is that the mmapped nodes use more RES and SHR than the NIO node. VIRT is as expected. I'll change another node to NIO and keep them running again for the next few days and keep sending documents and firing queries.

        All nodes are using the August 20th trunk from now on.

        Uwe Schindler added a comment -

        Hi,
        another thing to take into account: on Linux, Java allocates a thread-local amount of memory for every thread (you can see this in the pmap output as additional "[ anon ]" mappings with a fixed size - it is only strange that your pmap does not populate the first human-readable column; I think you used -x instead of no option). This sums up to a lot of memory!

        See http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2012-August/008270.html (I don't think it is 64 MB, but for sure 1 MB)

        On my machine, the threads each take 1 MB. Depending on your Tomcat config and its thread pools, this can take a lot of memory, too.
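
        A small illustrative Java sketch of that per-thread cost (the thread count and sleep are arbitrary): start a few hundred idle threads and RES, along with the "[ anon ]" mappings in pmap, grows by roughly one stack size (tunable with -Xss) per thread:

        import java.util.concurrent.CountDownLatch;

        public class ThreadStackDemo {
            public static void main(String[] args) throws InterruptedException {
                final CountDownLatch done = new CountDownLatch(1);
                int threads = 200;
                for (int i = 0; i < threads; i++) {
                    Thread t = new Thread(new Runnable() {
                        public void run() {
                            // Each idle thread still reserves its own stack outside the heap.
                            try { done.await(); } catch (InterruptedException ignored) { }
                        }
                    });
                    t.setDaemon(true);
                    t.start();
                }
                System.out.println(threads + " idle threads started; inspect RES / pmap now.");
                Thread.sleep(60000); // keep the process alive for a minute so it can be inspected
                done.countDown();
            }
        }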

        Robert Muir added a comment -

        What's happening with this issue: is it still one? Should it be critical / block 4.0?

        Markus Jelsma added a comment -

        So what we're seeing here is that the mmapped nodes use more RES and SHR than the NIO node. VIRT is as expected. I'll change another node to NIO and keep them running again for the next few days and keep sending documents and firing queries.

        There is still an issue with mmap and high RES compared to NIOFS, but the actual issue here is already resolved. I'll open a new issue.

        Uwe Schindler added a comment -

        Closed after release.


    People

    • Assignee: Yonik Seeley
    • Reporter: Markus Jelsma
    • Votes: 0
    • Watchers: 5
