Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Invalid
    • Affects Version/s: 2.0.0
    • Fix Version/s: None
    • Component/s: None
    • Labels:
      None
    • Environment:

      2.6.18-53.el5 x86_64 GNU/Linux
      Java(TM) SE Runtime Environment (build 1.6.0_04-b12)

    • Lucene Fields:
      New

      Description

      Whilst running Lucene in our QA environment we received the following exception. This problem was also reported here: http://confluence.atlassian.com/display/KB/JSP-20240+-+POSSIBLE+64+bit+JDK+1.6+update+4+may+have+HotSpot+problems.

      Is this a JVM problem or a problem in Lucene?

      # An unexpected error has been detected by Java Runtime Environment:
      #
      #  SIGSEGV (0xb) at pc=0x00002aaaaadb9e3f, pid=2275, tid=1085356352
      #
      # Java VM: Java HotSpot(TM) 64-Bit Server VM (10.0-b19 mixed mode linux-amd64)
      # Problematic frame:
      # V  [libjvm.so+0x1fce3f]
      #
      # If you would like to submit a bug report, please visit:
      #   http://java.sun.com/webapps/bugreport/crash.jsp
      #

      --------------- T H R E A D ---------------

      Current thread (0x00002aab0007f000): JavaThread "CompilerThread0" daemon [_thread_in_vm, id=2301, stack(0x0000000040a13000,0x0000000040b14000)]

      siginfo:si_signo=SIGSEGV: si_errno=0, si_code=1 (SEGV_MAPERR), si_addr=0x0000000000000000

      Registers:
      RAX=0x0000000000000000, RBX=0x00002aab0007f000, RCX=0x0000000000000000, RDX=0x00002aab00309aa0
      RSP=0x0000000040b10f60, RBP=0x0000000040b10fb0, RSI=0x00002aaab37d1ce8, RDI=0x00002aaaaaaad000
      R8 =0x00002aaaab40cd88, R9 =0x0000000000000ffc, R10=0x00002aaaab40cd90, R11=0x00002aaaab410810
      R12=0x00002aab00ae60b0, R13=0x00002aab0a19cc30, R14=0x0000000040b112f0, R15=0x00002aab00ae60b0
      RIP=0x00002aaaaadb9e3f, EFL=0x0000000000010246, CSGSFS=0x0000000000000033, ERR=0x0000000000000004
      TRAPNO=0x000000000000000e

      Top of Stack: (sp=0x0000000040b10f60)
      0x0000000040b10f60: 00002aab0007f000 0000000000000000
      0x0000000040b10f70: 00002aab0a19cc30 0000000000000001
      0x0000000040b10f80: 00002aab0007f000 0000000000000000
      0x0000000040b10f90: 0000000040b10fe0 00002aab0a19cc30
      0x0000000040b10fa0: 00002aab0a19cc30 00002aab00ae60b0
      0x0000000040b10fb0: 0000000040b10fe0 00002aaaaae9c2e4
      0x0000000040b10fc0: 00002aaaab413210 00002aaaab413350
      0x0000000040b10fd0: 0000000040b112f0 00002aab09796260
      0x0000000040b10fe0: 0000000040b110e0 00002aaaaae9d7d8
      0x0000000040b10ff0: 00002aaaab40f3d0 00002aab08c2a4c8
      0x0000000040b11000: 0000000040b11940 00002aab09796260
      0x0000000040b11010: 00002aab09795b28 0000000000000000
      0x0000000040b11020: 00002aab08c2a4c8 00002aab009b9750
      0x0000000040b11030: 00002aab09796260 0000000040b11940
      0x0000000040b11040: 00002aaaab40f3d0 0000000000002023
      0x0000000040b11050: 0000000040b11940 00002aab09796260
      0x0000000040b11060: 0000000040b11090 00002aaaab0f199e
      0x0000000040b11070: 0000000040b11978 00002aab08c2a458
      0x0000000040b11080: 00002aaaab413210 0000000000002023
      0x0000000040b11090: 0000000040b110e0 00002aaaab0f1fcf
      0x0000000040b110a0: 0000000000002023 00002aab09796260
      0x0000000040b110b0: 00002aab08c2a3c8 0000000040b123b0
      0x0000000040b110c0: 00002aab08c2a458 0000000040b112f0
      0x0000000040b110d0: 00002aaaab40f3d0 00002aab00043670
      0x0000000040b110e0: 0000000040b11160 00002aaaab0e808d
      0x0000000040b110f0: 00002aab000417c0 00002aab009b66a8
      0x0000000040b11100: 0000000000000000 00002aab009b9750
      0x0000000040b11110: 0000000040b112f0 00002aab009bb360
      0x0000000040b11120: 0000000000000003 0000000040b113d0
      0x0000000040b11130: 01002aab0052d0c0 0000000040b113d0
      0x0000000040b11140: 00000000000000b3 0000000040b112f0
      0x0000000040b11150: 0000000040b113d0 00002aab08c2a108

      Instructions: (pc=0x00002aaaaadb9e3f)
      0x00002aaaaadb9e2f: 48 89 5d b0 49 8b 55 08 49 8b 4c 24 08 48 8b 32
      0x00002aaaaadb9e3f: 4c 8b 21 8b 4e 1c 49 8d 7c 24 10 89 cb 4a 39 34

      Stack: [0x0000000040a13000,0x0000000040b14000], sp=0x0000000040b10f60, free space=1015k
      Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
      V [libjvm.so+0x1fce3f]
      V [libjvm.so+0x2df2e4]
      V [libjvm.so+0x2e07d8]
      V [libjvm.so+0x52b08d]
      V [libjvm.so+0x524914]
      V [libjvm.so+0x51c0ea]
      V [libjvm.so+0x519f77]
      V [libjvm.so+0x519e7c]
      V [libjvm.so+0x519ad5]
      V [libjvm.so+0x1e0cf4]
      V [libjvm.so+0x2a0bc0]
      V [libjvm.so+0x528e03]
      V [libjvm.so+0x51c0ea]
      V [libjvm.so+0x519f77]
      V [libjvm.so+0x519e7c]
      V [libjvm.so+0x519ad5]
      V [libjvm.so+0x1e0cf4]
      V [libjvm.so+0x240eba]
      V [libjvm.so+0x1e05c7]
      V [libjvm.so+0x248ec8]
      V [libjvm.so+0x248866]
      V [libjvm.so+0x62a3f9]
      V [libjvm.so+0x6246a1]
      V [libjvm.so+0x505eea]

      Current CompileTask:
      C2:2408 ! org.apache.lucene.index.DocumentWriter.invertDocument(Lorg/apache/lucene/document/Document;)V (482 bytes)

        Attachments

      1. jvmerror.log
        40 kB
        Jason Rutherglen
      2. hs_err_pid27882.log
        183 kB
        Paul Smith
      3. hs_err_pid21301.log
        182 kB
        Paul Smith
      4. hs_err_pid13693.log
        38 kB
        Amit Nithian
      5. hs_err_pid10565.log
        18 kB
        Manish Dubey

        Activity

        Kevin Richards created issue -
        Yonik Seeley added a comment -

        Have you tried the latest Java6 JVM to see if this issue has been fixed?

        Kevin Richards added a comment -

        I've looked in the release notes for recent patches but there's nothing obviously in that direction. We can certainly try updating the JVM - but I was hoping for some feedback from a Lucene expert who might be able to go - 'Ah yeah that's because of xyz'.

        The problem is not reliably reproducible - we've had it fail once with this error since we changed our scheduled re-indexing to run hourly.

        Michael McCandless added a comment -

        It looks like this is <= version 2.2 of Lucene?

        There's at least one other known JRE bug affecting Lucene (LUCENE-1282), also JRE versions 1.6.0_04 and _05, but only on version 2.3+ of Lucene.

        Michael McCandless added a comment -

        This is a JRE bug.

        Michael McCandless made changes -
        Field Original Value New Value
        Resolution Invalid [ 6 ]
        Status Open [ 1 ] Resolved [ 5 ]
        Rick Masters added a comment -

        Do you have a number or link for the JRE bug? Do you know what versions it applies to?

        Thanks!

        Michael McCandless added a comment -

        Sorry – I don't know which specific JRE bug this is. It'd be good to test whether it's the same JRE bug behind LUCENE-1282 and, if not, try to further isolate/characterize it (after upgrading to the latest JRE 1.6 release).

        Alison Winters added a comment -

        We have hit this issue in our QA environment as well, running Lucene 2.2 and Java SE 1.6.0_4 in -server mode. We will be upgrading to update 10 to see if it is solved (as with LUCENE-1282).

        Michael McCandless added a comment -

        Please post back on whether Java 6 update 10 resolves this for you. If not, we need to try to boil your case down to a compact test case to open an issue with Sun. That's how the issue behind LUCENE-1282 was found & fixed.

        Alison Winters added a comment -

        We have not been able to trigger this on 1.6.0_10. That said, we have run a few more tests on 1.6.0_4 and couldn't trigger it again there either - it must be a pretty obscure case. We are going to go with _10 in production because at least that hasn't failed yet.

        Paul Cowan added a comment -

        Just for everyone's information, we have since managed to reproduce this twice on 1.6.0_10, so this is still an active bug. I have raised the issue with Sun, and will link to the Sun Bug DB when the bug is accepted. I have raised it without a test case; obviously if someone has the expertise to distil it into something easily reproducible, that would be a huge help. I'll have a go and report back. It's hard because it occurs so rarely for us (but common enough that we're scared about going live).

        Michael McCandless added a comment -

        Just to confirm, it was at least b28 of 1.6.0_10 (or, now, the released version of 1.6.0_10) that you see the bug happen?

        Paul, can you describe how your app uses Lucene? Are you changing any of the default settings (RAM buffer size, mergeFactor etc)?

        A few things to try might be using a single thread for indexing (if you're using multiple threads now), switching back to SerialMergeScheduler, switching between autoCommit false vs true, very small or very large ram buffer sizes, etc. If one of these changes stops the bug or makes it more frequent then it's at least some progress towards narrowing it down.

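        For anyone wanting to run these isolation experiments, here is roughly what toggling those knobs looks like against the Lucene 2.3 API (where SerialMergeScheduler and the RAM buffer setting exist, and autoCommit is a constructor argument). This is a sketch only; the index path and analyzer are placeholders.

```java
// Sketch: Lucene 2.3-era IndexWriter set up for crash isolation,
// per the suggestions above. Path and analyzer are placeholders.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.SerialMergeScheduler;
import org.apache.lucene.store.FSDirectory;

public class CrashIsolation {
    public static IndexWriter openWriter(String path, boolean autoCommit) throws Exception {
        // In 2.3, autoCommit is passed to the constructor; flip it between runs.
        IndexWriter writer = new IndexWriter(
            FSDirectory.getDirectory(path), autoCommit, new StandardAnalyzer());
        writer.setMergeScheduler(new SerialMergeScheduler()); // back to serial merges
        writer.setRAMBufferSizeMB(2.0); // try very small, then very large, buffers
        return writer;
    }
}
```

        Indexing from a single thread (rather than multiple threads sharing the writer) is the remaining variable and is a matter of application structure, not writer configuration.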
        Paul Smith added a comment -

        java version "1.6.0_10"
        Java(TM) SE Runtime Environment (build 1.6.0_10-b33)
        Java HotSpot(TM) 64-Bit Server VM (build 11.0-b15, mixed mode)

        For clarity, there are two Pauls, myself included, and Alison here on the discussion thread, all from Aconex (we're all talking about the same problem at the same company, but are sharing in the discussion based on the different analysis we're each doing).

        We've recently upgraded to using Lucene 2.2 from 2.0 (yes, way behind, but we're cautious here..), and about 4 days from going into production with it.

        First off, an observation. The original bug report here was reported against Lucene 2.0, which we've been using in production for nearly 2 years against a few different JVMs (Java 1.5, plus a few builds of Java 1.6 up to and including 1.6.0_04). We've never encountered this in production or in our load test area using Lucene 2.0. However, as soon as we switched to Lucene 2.2, using the same JRE as production (1.6.0_04), we started seeing these problems. After reviewing another HotSpot crash bug (LUCENE-1282) we decided to see if JRE 1.6.0_10 made a difference. Initially it did; we didn't find a problem with several load testing runs and we thought we were fine. Then a few weeks later, we started to see it occurring more frequently, yet none of the code changes in our application since the initial 1.6.0_10 switch could logically be connected to the indexing system at all (our application is split between an App and an Index/Search server, and the SVN diff between the load testing tags didn't have any code change that was Indexer/Search related).

        At the same time we had a strange network problem going on in the load testing area, caused by a local DNS issue, that was interfering with the App talking to the Indexer. Inexplicably, since that was resolved the JRE crash hasn't happened that I'm aware of; how that could be related to the JRE HotSpot compilation of Lucene byte-code, I have no idea. BUT, since we had several weeks of stability and then several crashes, this is purely anecdotal/coincidental. I'm still rubbing my rabbit's foot here. I need to chat with Alison & Paul Cowan to get more specific details about if/when the crash has occurred since the DNS problem was resolved, because it could purely be a statistical anomaly (we simply may not have done enough runs to flush it out), and frankly I could be mistaken about the number of crashes in the load testing env.

        For incremental indexing (which is what is happening during the load test that crashes) we are using the compound file format, mergeFactor=default (10), minMergeDocs=200, maxMergeDocs=default (MAX_INT). It's pretty vanilla really (the reason for a low mergeFactor is that we have several hundred indexes open at the same time for different projects, so open file handles become a problem).

        I'll let Alison/Paul Cowan comment further, this is just my 5 Aussie cents worth.
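        As a sketch only, the configuration Paul describes would look something like this against the Lucene 2.2 IndexWriter API (the index path and analyzer are placeholders; in the 2.x API, the old minMergeDocs knob is setMaxBufferedDocs):

```java
// Sketch of the incremental-indexing configuration described above,
// against the Lucene 2.2 API. Path and analyzer are placeholders.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

public class IndexerConfig {
    public static IndexWriter openWriter(String indexPath) throws Exception {
        IndexWriter writer = new IndexWriter(indexPath, new StandardAnalyzer(), false);
        writer.setUseCompoundFile(true);           // compound files: fewer open file handles
        writer.setMergeFactor(10);                 // the default, kept low deliberately
        writer.setMaxBufferedDocs(200);            // "minMergeDocs" in pre-2.x terms
        writer.setMaxMergeDocs(Integer.MAX_VALUE); // the default
        return writer;
    }
}
```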

        Manish Dubey added a comment -

        We are seeing the same issue on a 32-bit OS and JVM. Our Lucene version is 2.0 and the JVM info is:

        java version "1.6.0_10"
        Java(TM) SE Runtime Environment (build 1.6.0_10-b33)
        Java HotSpot(TM) Server VM (build 11.0-b15, mixed mode)

        We have run on this version with Java 1.5 for about 2 years now without any issues.

        The problem seems to happen only on the server that updates the index incrementally. We have not had any DNS issues either.

        Michael McCandless added a comment -

        Can you attach the JRE's crash log?

        Manish Dubey added a comment -

        Log of jvm segv.

        Manish Dubey made changes -
        Attachment hs_err_pid10565.log [ 12394461 ]
        Paul Smith added a comment -

        2 crash dumps attached.

        Paul Smith made changes -
        Attachment hs_err_pid27882.log [ 12394518 ]
        Attachment hs_err_pid21301.log [ 12394517 ]
        Michael McCandless added a comment -

        These 3 crashes happen while compiling DocumentWriter.invertDocument. It must be a bug in Sun's hotspot compiler. Has anyone opened an issue at http://bugs.sun.com yet?

        That method, which inverts a single document, was replaced with new code starting with 2.3, so it's possible you can workaround the bug by upgrading to 2.3 or 2.4. But it'd still be nice to get the actual hotspot bug fixed.

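        For crashes in a specific HotSpot-compiled method like this, a stopgap sometimes used while waiting on a JRE fix is to exclude that method from compilation entirely, so it only ever runs interpreted (at a performance cost). A sketch, assuming the method named in the crash logs above; the jar name is a placeholder:

```shell
# Run the JVM with the crashing method excluded from HotSpot compilation.
# invertDocument stays interpreted, so expect slower indexing.
java -XX:CompileCommand=exclude,org/apache/lucene/index/DocumentWriter,invertDocument \
     -jar yourapp.jar
```

        A `.hotspot_compiler` file in the JVM's working directory containing `exclude org/apache/lucene/index/DocumentWriter invertDocument` should achieve the same thing on JDKs of this vintage. Whether this actually dodges this particular crash is untested here.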
        Paul Smith added a comment -

        yeah, it's definitely a Sun bug, not a Lucene one, but like the other recent JVM crash issue it sort of 'affects' Lucene specifically. Must be something about that byte code. No idea why it does/does not trigger it.

        We've raised a Sun bug, but it hasn't 'appeared' online yet (Paul Cowan raised it). Will post the cross link to it once we have confirmation that Sun has deemed it 'worthy' to accept it.

        Michael Böckling added a comment -

        We've just run into this bug with Lucene 2.1.0 and jdk 1.6.0_07-b06.

        Are there any news on this issue? Sun can't ignore a HotSpot compiler bug, can they? I can contribute a crash log if desired.

        Earwin Burrfoot added a comment -

        Sun can't ignore a HotSpot compiler bug, can they?

        They are safely ignoring CMS collector bugs on 64-bit archs.

        Jason Rutherglen added a comment -

        Here's the JVM error I'm seeing on Amazon EC2:

        java version "1.6.0_07"
        Java(TM) SE Runtime Environment (build 1.6.0_07-b06)
        Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode)

        # An unexpected error has been detected by Java Runtime Environment:
        #
        #  SIGSEGV (0xb) at pc=0x00002aaaab852a01, pid=2747, tid=1077070160
        #
        # Java VM: Java HotSpot(TM) 64-Bit Server VM (10.0-b23 mixed mode linux-amd64)
        # Problematic frame:
        # V  [libjvm.so+0x2faa01]
        #
        # An error report file with more information is saved as:
        #   /mnt/solr/conf/hs_err_pid2747.log
        #
        # If you would like to submit a bug report, please visit:
        #   http://java.sun.com/webapps/bugreport/crash.jsp
        #
        Jason Rutherglen made changes -
        Attachment jvmerror.log [ 12418030 ]
        Amit Nithian added a comment -

        I just encountered this error in our own QA environment. The last 3 days our JVM has been dying around 3AM with this bug and I am running 1.6.0_12. What OS/hardware environments are causing problems? I am running CentOS 5.2 and I'll attach my crash dump too.

        Has anyone seen any info on the Sun lists about this? I perused the change logs from 13-16 and didn't see anything specific to this unless it was listed as something else.

        Amit Nithian made changes -
        Attachment hs_err_pid13693.log [ 12422005 ]
        Mark Thomas made changes -
        Workflow jira [ 12435778 ] Default workflow, editable Closed status [ 12562412 ]
        Mark Thomas made changes -
        Workflow Default workflow, editable Closed status [ 12562412 ] jira [ 12584749 ]

          People

          • Assignee:
            Unassigned
          • Reporter:
            Kevin Richards
          • Votes:
            0
          • Watchers:
            6

            Dates

            • Created:
              Updated:
              Resolved:

              Development