SOLR-8575

Fix HDFSLogReader replay status numbers, a performance bug where we can reopen FSDataInputStream much too often, and an hdfs tlog data integrity bug.

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 6.0
    • Component/s: None
    • Labels: None

      Description

      Patrick Dvorak noticed some funny transaction log replay status logging a while back:

      active=true starting pos=444978 current pos=2855956 current size=16262 % read=17562
      active=true starting pos=444978 current pos=5748869 current size=16262 % read=35352

      17562% read? And the current size does not change as expected in this case.
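
      (Those numbers are consistent with a stale denominator: 2855956 / 16262 × 100 ≈ 17562 and 5748869 / 16262 × 100 ≈ 35352, so the logged percentage is the current position divided by a "current size" that is never refreshed.)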

      Attachments

      1. SOLR-8575.patch (20 kB) Mark Miller
      2. SOLR-8575.patch (10 kB) Mark Miller

        Activity

        markrmiller@gmail.com Mark Miller added a comment -

        Patch that fixes a couple issues and adds an isolated test that tries to target tlog replay while buffering during recovery.

        When we recalculate the size of the tlog, it keeps coming back the same as the first size call, even if the tlog has grown. I think this has something to do with the file still being open in hdfs.

        We were somewhat incorrectly using that size for tlog progress logging. We should have been using our internally tracked size.

        "the same as the first size call, even if the tlog has grown."

        And that left a large performance issue in place. When we first opened the tlog we could replay fairly fast, but if we buffered updates during replay, we were reopening the reader on every update due to the stale size issue.
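
        A minimal sketch of the failure mode described above (illustrative names, not the actual HdfsTransactionLog code): because HDFS keeps reporting the original length for a file that is still open for writing, the "caught up" check trips on every call and the reader gets torn down and reopened per update.

          // Illustrative only: the reported length never grows while the writer
          // still holds the file open, so this branch is taken on every update.
          long reportedSize = fs.getFileStatus(tlogPath).getLen(); // stale on HDFS
          if (position >= reportedSize) {
            fis.close();
            fis = fs.open(tlogPath); // expensive reopen, once per replayed update
            fis.seek(position);
          }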

        mdrob Mike Drob added a comment -

        I had talked to Andrew Wang about this maybe a month ago, and he suggested that if you want to get the updated size of the file, you have to use hsync with the length-update flag [1] on an HdfsDataOutputStream (not FSDataOutputStream like we use).

        Using an internally stored length is probably better anyway, though.

        [1]: https://github.com/apache/hadoop/blob/2ec438e8f7cd77cb48fd1264781e60a48e331908/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java#L105
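
        For reference, the idiom being described looks roughly like this (a minimal sketch, assuming the FileSystem is backed by HDFS; the path and payload are illustrative):

          import java.util.EnumSet;

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FSDataOutputStream;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;
          import org.apache.hadoop.hdfs.client.HdfsDataOutputStream.SyncFlag;

          public class UpdateLengthSketch {
            public static void main(String[] args) throws Exception {
              FileSystem fs = FileSystem.get(new Configuration());
              Path tlog = new Path("/solr/tlog/tlog.demo"); // illustrative path
              try (FSDataOutputStream out = fs.create(tlog)) {
                out.write(new byte[1024]);
                // hflush() alone makes bytes visible to new readers but does not
                // update the NameNode-reported length; hsync(UPDATE_LENGTH) does.
                if (out instanceof HdfsDataOutputStream) {
                  ((HdfsDataOutputStream) out).hsync(EnumSet.of(SyncFlag.UPDATE_LENGTH));
                }
                System.out.println("reported length: " + fs.getFileStatus(tlog).getLen());
              }
            }
          }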

        markrmiller@gmail.com Mark Miller added a comment - - edited

        "Using an internally stored length is probably better anyway, though."

        The problem is that our internal size does not correlate with what we can actually read, even after an hflush (unless we reopen input streams).

        "...updated size from the file then you have to use hsync with the length-update flag [1] using an HdfsDataOutputStream"

        Ah, interesting, I'll poke around that a bit to see if we want to do anything different.

        mdrob Mike Drob added a comment -

        Mark Miller - do you think we will be going with your currently proposed approach, or do you expect to redo this to use HdfsDataOutputStream? I'm not sure how much research you've already done and I don't want to duplicate effort, but I'd be interested in making sure this issue gets resolved.

        markrmiller@gmail.com Mark Miller added a comment -

        Yeah, I was about to commit what I have. Adding an hsync would be much slower than this and is not necessary with this approach in my testing.

        jira-bot ASF subversion and git services added a comment -

        Commit ec4c72310f3548b93139b25a12d6e9a16ac9e322 in lucene-solr's branch refs/heads/master from Mark Miller
        [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ec4c723 ]

        SOLR-8575: Fix HDFSLogReader replay status numbers and a performance bug where we can reopen FSDataInputStream too often.

        jira-bot ASF subversion and git services added a comment -

        Commit 482b40f841660820f633267a21e6df44aff55346 in lucene-solr's branch refs/heads/branch_5x from Mark Miller
        [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=482b40f ]

        SOLR-8575: Fix HDFSLogReader replay status numbers and a performance bug where we can reopen FSDataInputStream too often.

        anshumg Anshum Gupta added a comment -

        Hey Mark Miller, is the commit supposed to be from the mark dot miller at oblivion dot ch id?

        markrmiller@gmail.com Mark Miller added a comment -

        Strange... that's not what is in my gitconfig. Must be something to do with the INFRA tagging script?

        markrmiller@gmail.com Mark Miller added a comment -

        Probably because my name is not my username; only my email identifies me. I'll switch the username.

        thetaphi Uwe Schindler added a comment -

        Where do you see this in the commit? It shows the same @apache.org ID for both author and committer.

        thetaphi Uwe Schindler added a comment -

        Ah, the comment refers to the wrong JIRA user name! I think that's a bug and INFRA should take care of it.

        anshumg Anshum Gupta added a comment -

        Thanks Uwe!

        Do you mean to say there's already an open issue, or do we need to open another one?

        thetaphi Uwe Schindler added a comment -

        I think we should contact them or open an issue. Maybe they have a "mapping" table (ASF-ID -> JIRA-ID) somewhere.

        jira-bot ASF subversion and git services added a comment -

        Commit ec4c72310f3548b93139b25a12d6e9a16ac9e322 in lucene-solr's branch refs/heads/lucene-6835 from Mark Miller
        [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ec4c723 ]

        SOLR-8575: Fix HDFSLogReader replay status numbers and a performance bug where we can reopen FSDataInputStream too often.

        yseeley@gmail.com Yonik Seeley added a comment -

        I was going to reopen this issue, but it's still open anyway.
        I've changed it to a blocker for 5.5 based on what I'm seeing here:
        https://issues.apache.org/jira/browse/SOLR-8586?focusedCommentId=15142215&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15142215

        mikemccand Michael McCandless added a comment -

        Hmm, so it sounds like the changes committed for this issue caused the test failures you're seeing on SOLR-8586, Yonik Seeley? Should we revert the change here until we can explain it?

        markrmiller@gmail.com Mark Miller added a comment -

        Yeah, I would just pull it out of 5.5 rather than try and address it.

        jira-bot ASF subversion and git services added a comment -

        Commit f6098148aed067c06e2459a3ab55abe2e66300b0 in lucene-solr's branch refs/heads/master from markrmiller
        [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f609814 ]

        SOLR-8575: Revert while investigated. (reverted from commit ec4c72310f3548b93139b25a12d6e9a16ac9e322)

        jira-bot ASF subversion and git services added a comment -

        Commit 68ba7a5e5275d4ad10e4e8f70e223f9b61d70b54 in lucene-solr's branch refs/heads/branch_5x from markrmiller
        [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=68ba7a5 ]

        SOLR-8575: Revert while investigated. (reverted from commit 482b40f841660820f633267a21e6df44aff55346)

        jira-bot ASF subversion and git services added a comment -

        Commit 51257d2ebe099a6c7029e7fd47ce25f4393cfb49 in lucene-solr's branch refs/heads/branch_5_5 from markrmiller
        [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=51257d2 ]

        SOLR-8575: Revert while investigated. (reverted from commit 482b40f841660820f633267a21e6df44aff55346)

        mikemccand Michael McCandless added a comment -

        Thanks Mark Miller.

        yseeley@gmail.com Yonik Seeley added a comment -

        Here's an interesting exception I found logged:

          2> 84141 ERROR (recoveryExecutor-132-thread-2-processing-n:127.0.0.1:44435_ x:collection1 s:shard5 c:collection1 r:core_node4) [n:127.0.0.1:44435_ c:collection1 s:shard5 r:core_node4 x:collection1] o.a.s.u.UpdateLog java.io.EOFException
          2> 	at org.apache.solr.common.util.FastInputStream.readByte(FastInputStream.java:207)
          2> 	at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:207)
          2> 	at org.apache.solr.update.HdfsTransactionLog$HDFSLogReader.next(HdfsTransactionLog.java:419)
          2> 	at org.apache.solr.update.UpdateLog$LogReplayer.doReplay(UpdateLog.java:1333)
          2> 	at org.apache.solr.update.UpdateLog$LogReplayer.run(UpdateLog.java:1255)
        

        An important part of this patch is recording the amount of data we've written so far (as "sz") before a new input stream is opened.

        Does HDFS guarantee that all data written will be readable if we open the file again (even if we haven't closed the file)?
        And does read() make the same guarantees about reading at least a single byte (or blocking) unless we've reached EOF?
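
        (For context on those questions: HDFS's hflush() contract is that data written before the flush becomes visible to readers that open the file afterwards, while bytes still sitting in the client-side buffer carry no visibility guarantee at all; and hflush() does not update the NameNode-reported length, which is what the hsync UPDATE_LENGTH flag mentioned earlier is for.)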

        markrmiller@gmail.com Mark Miller added a comment - - edited

        Here is the patch I'm currently playing with.

        When opening the log reader, we now hflush before first opening the input stream. In all the failures I was seeing, the starting position was 0 and we were hitting EOF pretty much right away.

        Still testing out, but I think I'm on the right path.
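
        Sketched out, the change being described is roughly the following (simplified; fos, fsDataOut, fs, and tlogPath stand in for the corresponding fields in HdfsTransactionLog):

          // In the log reader constructor: flush the writer side first, so that a
          // brand-new FSDataInputStream is guaranteed to see everything written so far.
          synchronized (HdfsTransactionLog.this) {
            fos.flushBuffer();  // push Solr's buffered bytes into the HDFS stream
            fsDataOut.hflush(); // make them visible to new readers
            sz = fos.size();    // record-aligned size, safe to read up to
          }
          FSDataInputStream fis = fs.open(tlogPath);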

        markrmiller@gmail.com Mark Miller added a comment -

        I'm no longer seeing inconsistency fails with this patch.

        markrmiller@gmail.com Mark Miller added a comment -

        I should also note that the latest patch also includes an additional fast-stream flushBuffer, so that the sz we get is accurate and works with the hflush. I had originally thought that was the whole issue, but I still got these EOF failures in the same read method until I also changed the constructor.

        yseeley@gmail.com Yonik Seeley added a comment -

        I've started testing this morning with this patch... it will be a few hours at least before I know if it's fixed for me as well.

        One of the errors caused by a premature EOF that I was seeing happened after the re-open, so the constructor changes should not matter for that specific failure.
        But an important addition was made in the current patch, which calls fos.flushBuffer() in the reopen... that was missing from the previous patch.

        Actually, it looks like this patch fixed more than just performance... that missing fos.flushBuffer() wasn't just missing from the previous patch, it was never there in the code to begin with! This appears to mean that prior to this JIRA, buffering while replaying could sometimes prematurely abort (by getting an EOF) because a partial record was written. Simply adding a flushBuffer would not have been sufficient though... by using the actual size of the file (unsynchronized) as the point to read up to, we can get premature EOFs as well. Given we're using 64K write buffers, the odds of seeing issues due to this are related to the size of the documents being indexed as well as the throughput.
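
        A compact illustration of the partial-record hazard described above (hypothetical reader loop; JavaBinCodec and FastInputStream are the real classes from the stack trace earlier, the rest is illustrative):

          // If the read bound comes from the raw file length, a 64K buffer flush can
          // leave a half-written record below it, and readVal() then hits EOFException.
          long bound = fs.getFileStatus(tlogPath).getLen(); // may land mid-record
          while (fis.position() < bound) {
            Object record = new JavaBinCodec().readVal(fis); // EOF on a partial record
          }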

        yseeley@gmail.com Yonik Seeley added a comment -

        With the latest patch, the flushBuffer in this part of the code is redundant:

            public Object next() throws IOException, InterruptedException {
              long pos = fis.position();
        
              synchronized (HdfsTransactionLog.this) {
                if (trace) {
                  log.trace("Reading log record.  pos="+pos+" currentSize="+fos.size());
                }
        
                if (pos >= fos.size()) {
                  return null;
                }
               
                fos.flushBuffer();
              }
        
        markrmiller@gmail.com Mark Miller added a comment -

        "Actually, it looks like this patch fixed more than just performance"

        Right, it's not just a performance fix or a 'status numbers' fix. The issue was that the size hdfs was returning to us was wrong and we were going off the wrong size info. That made it so that when we had to open a new reader, we then did so on every update. That seems to have hidden some of the issues here. There was no way to know whether there was a bug users were hitting here, though, beyond super, super slow replay-while-buffering performance. For example, you were not seeing inconsistency fails with that code. It was obviously a bug no matter what flushing happened, because we were basing our logic on file sizes that did not relate to reality (and generally did not change at all between calls).

        yseeley@gmail.com Yonik Seeley added a comment -

        Yeah, if HDFS had reported the correct length, the old code (prior to this JIRA) would have attempted to read partial records and get EOFs where it shouldn't.

        For others following along... the key thing in the current patch is this:

                synchronized (HdfsTransactionLog.this) {
                  fos.flushBuffer();
                  sz = fos.size();
                }
        

        The synchronization (which is the same monitor used to write records) means that our recorded "sz" represents a whole record and is hence safe to read up to.
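
        The writer side of that invariant looks roughly like this (simplified, not the exact Solr code): records are appended under the same monitor, so a size sampled while holding it always falls on a record boundary.

          // Simplified writer: a record only becomes part of fos.size() atomically,
          // under the same lock the reader uses to sample the size.
          public long write(Object record) throws IOException {
            synchronized (HdfsTransactionLog.this) {
              long pos = fos.size();                   // record-aligned start position
              new JavaBinCodec().marshal(record, fos); // append one complete record
              return pos;
            }
          }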

        jira-bot ASF subversion and git services added a comment -

        Commit 4cc844897e094ffc07f1825d88730ea975de3fde in lucene-solr's branch refs/heads/master from markrmiller
        [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4cc8448 ]

        SOLR-8575: Fix HDFSLogReader replay status numbers, a performance bug where we can reopen FSDataInputStream much too often, and an hdfs tlog data integrity bug.

        yseeley@gmail.com Yonik Seeley added a comment -

        Yeah, everything looks good - no consistency fails after running all day!

        markrmiller@gmail.com Mark Miller added a comment -

        I'll spend a little time trying to get rid of some false chaos monkey test failures so we can keep better track of when things go bad.

        mdrob Mike Drob added a comment -

        https://github.com/apache/lucene-solr/blob/0bba332549a11d5c381efc93a66087999b6de210/solr/core/src/java/org/apache/solr/update/UpdateLog.java#L1443

        Is that line supposed to have an assert on it?
        markrmiller@gmail.com Mark Miller added a comment -

        Yeah, good catch.

        jira-bot ASF subversion and git services added a comment -

        Commit 2fd90cd4893952f5150e34ed70e86d3e85f61458 in lucene-solr's branch refs/heads/master from markrmiller
        [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2fd90cd ]

        SOLR-8575: Add missing assert.


          People

          • Assignee: markrmiller@gmail.com Mark Miller
          • Reporter: markrmiller@gmail.com Mark Miller
          • Votes: 1
          • Watchers: 8
