Solr / SOLR-5658

commitWithin does not reflect the new documents added

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 4.6, 6.0
    • Fix Version/s: 4.6.1, 4.7, 6.0
    • Component/s: None
    • Labels: None

      Description

      I start 4 nodes using the setup mentioned on - https://cwiki.apache.org/confluence/display/solr/Getting+Started+with+SolrCloud

      I added a document using -
      curl http://localhost:8983/solr/update?commitWithin=10000 -H "Content-Type: text/xml" --data-binary '<add><doc><field name="id">testdoc</field></doc></add>'
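
      For reference, here is a rough SolrJ equivalent of the same request (a minimal sketch against the SolrJ 4.x API; the base URL and field values simply mirror the curl command above):

      import org.apache.solr.client.solrj.impl.HttpSolrServer;
      import org.apache.solr.common.SolrInputDocument;

      public class CommitWithinAdd {
          public static void main(String[] args) throws Exception {
              HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
              SolrInputDocument doc = new SolrInputDocument();
              doc.addField("id", "testdoc");
              // Ask Solr to make the document searchable within 10 seconds.
              server.add(doc, 10000);
              server.shutdown();
          }
      }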

      In Solr 4.5.1 there is 1 soft commit with openSearcher=true and 1 hard commit with openSearcher=false.
      In Solr 4.6.x there is only 1 hard commit with openSearcher=false.

      So even after 10 seconds, queries on none of the shards reflect the added document.

      This was also reported on the solr-user list ( http://lucene.472066.n3.nabble.com/Possible-regression-for-Solr-4-6-0-commitWithin-does-not-work-with-replicas-td4106102.html )

      Here are the relevant logs

      Logs from Solr 4.5.1
      Node 1:

      420021 [qtp619011445-12] INFO  org.apache.solr.update.processor.LogUpdateProcessor  – [collection1] webapp=/solr path=/update params={commitWithin=10000} {add=[testdoc]} 0 45
      

      Node 2:

      119896 [qtp1608701025-10] INFO  org.apache.solr.update.processor.LogUpdateProcessor  – [collection1] webapp=/solr path=/update params={distrib.from=http://192.168.1.103:8983/solr/collection1/&update.distrib=TOLEADER&wt=javabin&version=2} {add=[testdoc (1458003295513608192)]} 0 348
      129648 [commitScheduler-8-thread-1] INFO  org.apache.solr.update.UpdateHandler  – start commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=true,prepareCommit=false}
      129679 [commitScheduler-8-thread-1] INFO  org.apache.solr.search.SolrIndexSearcher  – Opening Searcher@e174f70 main
      129680 [commitScheduler-8-thread-1] INFO  org.apache.solr.update.UpdateHandler  – end_commit_flush
      129681 [searcherExecutor-5-thread-1] INFO  org.apache.solr.core.SolrCore  – QuerySenderListener sending requests to Searcher@e174f70 main{StandardDirectoryReader(segments_3:11:nrt _2(4.5.1):C1)}
      129681 [searcherExecutor-5-thread-1] INFO  org.apache.solr.core.SolrCore  – QuerySenderListener done.
      129681 [searcherExecutor-5-thread-1] INFO  org.apache.solr.core.SolrCore  – [collection1] Registered new searcher Searcher@e174f70 main{StandardDirectoryReader(segments_3:11:nrt _2(4.5.1):C1)}
      134648 [commitScheduler-7-thread-1] INFO  org.apache.solr.update.UpdateHandler  – start commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
      134658 [commitScheduler-7-thread-1] INFO  org.apache.solr.core.SolrCore  – SolrDeletionPolicy.onCommit: commits: num=2
      	commit{dir=NRTCachingDirectory(org.apache.lucene.store.NIOFSDirectory@/Users/varun/solr-4.5.1/node2/solr/collection1/data/index lockFactory=org.apache.lucene.store.NativeFSLockFactory@66a394a3; maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_3,generation=3}
      	commit{dir=NRTCachingDirectory(org.apache.lucene.store.NIOFSDirectory@/Users/varun/solr-4.5.1/node2/solr/collection1/data/index lockFactory=org.apache.lucene.store.NativeFSLockFactory@66a394a3; maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_4,generation=4}
      134658 [commitScheduler-7-thread-1] INFO  org.apache.solr.core.SolrCore  – newest commit generation = 4
      134660 [commitScheduler-7-thread-1] INFO  org.apache.solr.update.UpdateHandler  – end_commit_flush
       

      Node 3:

      Node 4:

      374545 [qtp1608701025-16] INFO  org.apache.solr.update.processor.LogUpdateProcessor  – [collection1] webapp=/solr path=/update params={distrib.from=http://192.168.1.103:7574/solr/collection1/&update.distrib=FROMLEADER&wt=javabin&version=2} {add=[testdoc (1458002133233172480)]} 0 20
      384545 [commitScheduler-8-thread-1] INFO  org.apache.solr.update.UpdateHandler  – start commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=true,prepareCommit=false}
      384552 [commitScheduler-8-thread-1] INFO  org.apache.solr.search.SolrIndexSearcher  – Opening Searcher@36137e08 main
      384553 [commitScheduler-8-thread-1] INFO  org.apache.solr.update.UpdateHandler  – end_commit_flush
      384553 [searcherExecutor-5-thread-1] INFO  org.apache.solr.core.SolrCore  – QuerySenderListener sending requests to Searcher@36137e08 main{StandardDirectoryReader(segments_2:7:nrt _1(4.5.1):C1)}
      384553 [searcherExecutor-5-thread-1] INFO  org.apache.solr.core.SolrCore  – QuerySenderListener done.
      384554 [searcherExecutor-5-thread-1] INFO  org.apache.solr.core.SolrCore  – [collection1] Registered new searcher Searcher@36137e08 main{StandardDirectoryReader(segments_2:7:nrt _1(4.5.1):C1)}
      389545 [commitScheduler-7-thread-1] INFO  org.apache.solr.update.UpdateHandler  – start commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
      389549 [commitScheduler-7-thread-1] INFO  org.apache.solr.core.SolrCore  – SolrDeletionPolicy.onCommit: commits: num=2
      	commit{dir=NRTCachingDirectory(org.apache.lucene.store.NIOFSDirectory@/Users/varun/solr-4.5.1/node4/solr/collection1/data/index lockFactory=org.apache.lucene.store.NativeFSLockFactory@6e4d4c84; maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_2,generation=2}
      	commit{dir=NRTCachingDirectory(org.apache.lucene.store.NIOFSDirectory@/Users/varun/solr-4.5.1/node4/solr/collection1/data/index lockFactory=org.apache.lucene.store.NativeFSLockFactory@6e4d4c84; maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_3,generation=3}
      389550 [commitScheduler-7-thread-1] INFO  org.apache.solr.core.SolrCore  – newest commit generation = 3
      389551 [commitScheduler-7-thread-1] INFO  org.apache.solr.update.UpdateHandler  – end_commit_flush
      

      Logs from Solr 4.6

      Node 1:

      124513 [qtp1314570047-13] INFO  org.apache.solr.update.processor.LogUpdateProcessor  – [collection1] webapp=/solr path=/update params={commitWithin=10000} {add=[testdoc]} 0 348
      

      Node 2:

      101586 [qtp1608701025-13] INFO  org.apache.solr.update.processor.LogUpdateProcessor  – [collection1] webapp=/solr path=/update params={distrib.from=http://192.168.1.103:8983/solr/collection1/&update.distrib=TOLEADER&wt=javabin&version=2} {add=[testdoc (1458003613357965312)]} 0 217
      116407 [commitScheduler-7-thread-1] INFO  org.apache.solr.update.UpdateHandler  – start commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
      116429 [commitScheduler-7-thread-1] INFO  org.apache.solr.core.SolrCore  – SolrDeletionPolicy.onCommit: commits: num=2
      	commit{dir=NRTCachingDirectory(org.apache.lucene.store.NIOFSDirectory@/Users/varun/solr-4.6.0/node2/solr/collection1/data/index lockFactory=org.apache.lucene.store.NativeFSLockFactory@245e7588; maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_1,generation=1}
      	commit{dir=NRTCachingDirectory(org.apache.lucene.store.NIOFSDirectory@/Users/varun/solr-4.6.0/node2/solr/collection1/data/index lockFactory=org.apache.lucene.store.NativeFSLockFactory@245e7588; maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_2,generation=2}
      116430 [commitScheduler-7-thread-1] INFO  org.apache.solr.core.SolrCore  – newest commit generation = 2
      116444 [commitScheduler-7-thread-1] INFO  org.apache.solr.search.SolrIndexSearcher  – Opening Searcher@75e32318 realtime
      116445 [commitScheduler-7-thread-1] INFO  org.apache.solr.update.UpdateHandler  – end_commit_flush
       

      Node 3:

      Node 4:

      68183 [qtp1338008566-14] INFO  org.apache.solr.update.processor.LogUpdateProcessor  – [collection1] webapp=/solr path=/update params={distrib.from=http://192.168.1.103:7574/solr/collection1/&update.distrib=FROMLEADER&wt=javabin&version=2} {add=[testdoc (1458003613357965312)]} 0 43
      83183 [commitScheduler-7-thread-1] INFO  org.apache.solr.update.UpdateHandler  – start commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
      83207 [commitScheduler-7-thread-1] INFO  org.apache.solr.core.SolrCore  – SolrDeletionPolicy.onCommit: commits: num=2
      	commit{dir=NRTCachingDirectory(org.apache.lucene.store.NIOFSDirectory@/Users/varun/solr-4.6.0/node4/solr/collection1/data/index lockFactory=org.apache.lucene.store.NativeFSLockFactory@69c9fc69; maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_1,generation=1}
      	commit{dir=NRTCachingDirectory(org.apache.lucene.store.NIOFSDirectory@/Users/varun/solr-4.6.0/node4/solr/collection1/data/index lockFactory=org.apache.lucene.store.NativeFSLockFactory@69c9fc69; maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_2,generation=2}
      83208 [commitScheduler-7-thread-1] INFO  org.apache.solr.core.SolrCore  – newest commit generation = 2
      83220 [commitScheduler-7-thread-1] INFO  org.apache.solr.search.SolrIndexSearcher  – Opening Searcher@326f944c realtime
      83220 [commitScheduler-7-thread-1] INFO  org.apache.solr.update.UpdateHandler  – end_commit_flush
       

      Logs from Solr 4.6.1

      Node 1:

      301363 [qtp619011445-15] INFO  org.apache.solr.update.processor.LogUpdateProcessor  – [collection1] webapp=/solr path=/update params={commitWithin=10000} {add=[testdoc]} 0 32
      

      Node 2:

      207000 [qtp619011445-17] INFO  org.apache.solr.update.processor.LogUpdateProcessor  – [collection1] webapp=/solr path=/update params={distrib.from=http://192.168.1.103:8983/solr/collection1/&update.distrib=TOLEADER&wt=javabin&version=2} {add=[testdoc (1458004563169640448)]} 0 28
      221974 [commitScheduler-7-thread-1] INFO  org.apache.solr.update.UpdateHandler  – start commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
      221987 [commitScheduler-7-thread-1] INFO  org.apache.solr.core.SolrCore  – SolrDeletionPolicy.onCommit: commits: num=2
      	commit{dir=NRTCachingDirectory(org.apache.lucene.store.NIOFSDirectory@/Users/varun/Downloads/search-downloads/solr-4.6.1/solr/node2/solr/collection1/data/index lockFactory=org.apache.lucene.store.NativeFSLockFactory@352b9aeb; maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_2,generation=2}
      	commit{dir=NRTCachingDirectory(org.apache.lucene.store.NIOFSDirectory@/Users/varun/Downloads/search-downloads/solr-4.6.1/solr/node2/solr/collection1/data/index lockFactory=org.apache.lucene.store.NativeFSLockFactory@352b9aeb; maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_3,generation=3}
      221987 [commitScheduler-7-thread-1] INFO  org.apache.solr.core.SolrCore  – newest commit generation = 3
      221989 [commitScheduler-7-thread-1] INFO  org.apache.solr.search.SolrIndexSearcher  – Opening Searcher@132713fa realtime
      221990 [commitScheduler-7-thread-1] INFO  org.apache.solr.update.UpdateHandler  – end_commit_flush
      

      Node 3:

      Node 4:

      193133 [qtp1608701025-16] INFO  org.apache.solr.update.processor.LogUpdateProcessor  – [collection1] webapp=/solr path=/update params={distrib.from=http://192.168.1.103:7574/solr/collection1/&update.distrib=FROMLEADER&wt=javabin&version=2} {add=[testdoc (1458004563169640448)]} 0 23
      208133 [commitScheduler-7-thread-1] INFO  org.apache.solr.update.UpdateHandler  – start commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
      208141 [commitScheduler-7-thread-1] INFO  org.apache.solr.core.SolrCore  – SolrDeletionPolicy.onCommit: commits: num=2
      	commit{dir=NRTCachingDirectory(org.apache.lucene.store.NIOFSDirectory@/Users/varun/Downloads/search-downloads/solr-4.6.1/solr/node4/solr/collection1/data/index lockFactory=org.apache.lucene.store.NativeFSLockFactory@3f83dcf3; maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_2,generation=2}
      	commit{dir=NRTCachingDirectory(org.apache.lucene.store.NIOFSDirectory@/Users/varun/Downloads/search-downloads/solr-4.6.1/solr/node4/solr/collection1/data/index lockFactory=org.apache.lucene.store.NativeFSLockFactory@3f83dcf3; maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_3,generation=3}
      208141 [commitScheduler-7-thread-1] INFO  org.apache.solr.core.SolrCore  – newest commit generation = 3
      208144 [commitScheduler-7-thread-1] INFO  org.apache.solr.search.SolrIndexSearcher  – Opening Searcher@3171c7df realtime
      208146 [commitScheduler-7-thread-1] INFO  org.apache.solr.update.UpdateHandler  – end_commit_flush
      
      Attachments

      1. SOLR-5658.patch (14 kB) - Mark Miller
      2. SOLR-5658.patch (3 kB) - Mark Miller


          Activity

          Shalin Shekhar Mangar added a comment -

          Thanks for reporting this, Varun.

          The thing to note between the 4.5 logs and the 4.6.x logs is that in 4.5, there are two commit statements per node (1 soft commit with openSearcher=true and another hard commit with openSearcher=false) whereas in 4.6.x there is only one commit (hard commit with openSearcher=false). We need to find out why.

          Shalin Shekhar Mangar added a comment -

          Also reported by a user (thanks Varun for pointing it out to me privately): http://lucene.472066.n3.nabble.com/Possible-regression-for-Solr-4-6-0-commitWithin-does-not-work-with-replicas-td4106102.html

          Mark Miller added a comment -

          This is probably related to what params we pass on and which we filter out in DistributedUpdateProcessor.
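
          To illustrate the suspicion (a purely hypothetical sketch, not Solr's actual DistributedUpdateProcessor code): if the code that forwards an update to replicas copies only a fixed whitelist of parameters, anything not on that whitelist, such as commitWithin, silently disappears along the way.

          import java.util.Arrays;
          import java.util.HashMap;
          import java.util.HashSet;
          import java.util.Map;
          import java.util.Set;

          public class ParamForwardingSketch {
              // Hypothetical whitelist; note that commitWithin is not on it.
              private static final Set<String> FORWARDED = new HashSet<String>(
                      Arrays.asList("update.distrib", "distrib.from", "wt", "version"));

              static Map<String, String> paramsForReplica(Map<String, String> original) {
                  Map<String, String> forwarded = new HashMap<String, String>();
                  for (Map.Entry<String, String> e : original.entrySet()) {
                      if (FORWARDED.contains(e.getKey())) {
                          forwarded.put(e.getKey(), e.getValue());
                      }
                  }
                  return forwarded;
              }

              public static void main(String[] args) {
                  Map<String, String> original = new HashMap<String, String>();
                  original.put("commitWithin", "10000");
                  original.put("wt", "javabin");
                  // Prints {wt=javabin}: the commitWithin hint never reaches the replica.
                  System.out.println(paramsForReplica(original));
              }
          }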

          Mark Miller added a comment -

          Patch adds a test and forwards on the commitWithin param.

          Shalin Shekhar Mangar added a comment -

          Thanks Mark. Should we re-spin 4.6.1 for this?

          Mark Miller added a comment -

          I don't know. If there is support for it, I'm certainly willing to do it, but I'm not sure it's worth an RC4 myself. I think this issue has been around a very long time, and it's just random luck something seemed like it worked on 4.5.1 or older in any simple testing.

          Daniel Collins added a comment (edited) -

          Mark, what's the impact of this issue? Are you saying that CommitWithin was never distributed (which seems quite a big deal!), or is it more subtle than that?

          Mark Miller added a comment -

          Right - it was not distributed. Not since we started filtering most parameters, which was a very long time ago.

          Daniel Collins added a comment -

          OK, I was just puzzled about how our system (4.4.0) is working then; we consistently see soft commits running on the replicas. Maybe it is autoCommit firing instead...

          Shalin Shekhar Mangar added a comment -

          "I think this issue has been around a very long time, and it's just random luck something seemed like it worked on 4.5.1 or older in any simple testing."

          I'm not sure about that. The commitWithin is also set in the AddUpdateCommand in addition to the request params. I ran your test against 4.5 without the fix five times and it didn't fail. But it never passes on trunk (without the fix), so I think there may be another bug introduced with the streaming changes. I'll look at this again tomorrow my time.

          Mark Miller added a comment -

          I think it just depends on whether it's a request-wide or document-level commitWithin. If it was request level, the only way it would have worked previously is if there was some code that looked for the commitWithin param explicitly and set the commitWithin on the AddUpdateCommand - I don't recall anything like that.
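
          For the request-level case, a client typically attaches commitWithin to the whole update request; a minimal SolrJ 4.x sketch (URL and values are illustrative):

          import org.apache.solr.client.solrj.impl.HttpSolrServer;
          import org.apache.solr.client.solrj.request.UpdateRequest;
          import org.apache.solr.common.SolrInputDocument;

          public class RequestLevelCommitWithin {
              public static void main(String[] args) throws Exception {
                  HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");

                  SolrInputDocument doc = new SolrInputDocument();
                  doc.addField("id", "testdoc");

                  // Request-level commitWithin: one value attached to the whole update request.
                  // Outside the distributed code, Solr applies it to the command built for each document.
                  UpdateRequest req = new UpdateRequest();
                  req.add(doc);
                  req.setCommitWithin(10000);
                  req.process(server);

                  server.shutdown();
              }
          }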

          Mark Miller added a comment -

          Okay, outside of the distributed code, Solr does set up the cmd object with a request-level commitWithin. That is how this used to work, and that is why we didn't have to propagate the param.

          Perhaps the commitWithin is being lost when parsing the javabin.

          Mark Miller added a comment -

          I see the bug.

          Mark Miller added a comment -

          I'm still looking at the details, but the cause of the change in behavior is the switch from xml to binary update requests.
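
          For context, SolrJ lets a client choose between the two wire formats; a minimal sketch (client-side only; it does not change what SolrCloud nodes use when forwarding updates to replicas):

          import org.apache.solr.client.solrj.impl.BinaryRequestWriter;
          import org.apache.solr.client.solrj.impl.HttpSolrServer;
          import org.apache.solr.client.solrj.request.RequestWriter;
          import org.apache.solr.common.SolrInputDocument;

          public class UpdateWireFormats {
              public static void main(String[] args) throws Exception {
                  HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");

                  // Default for HttpSolrServer in SolrJ 4.x: updates are serialized as XML.
                  server.setRequestWriter(new RequestWriter());

                  // Binary javabin updates - the format SolrCloud now uses when distributing updates.
                  server.setRequestWriter(new BinaryRequestWriter());

                  SolrInputDocument doc = new SolrInputDocument();
                  doc.addField("id", "testdoc");
                  server.add(doc, 10000);
                  server.shutdown();
              }
          }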

          Mark Miller added a comment -

          Kind of ugly - the real fix will take a bit of work. The patch might be a decent partial fix / workaround for 4.6.1 if we respin.

          Mark Miller added a comment -

          Here is a full fix patch.

          So previously, we still didn't propagate the request-level commitWithin as a request parameter, but each document add would pick up the request-level commitWithin.

          This per-document commitWithin support was not working with javabin.

          This has to do with how javabin handles adds for streaming, and some funny code that I have improved with a comment and cleanup.

          ASF subversion and git services added a comment -

          Commit 1560859 from Mark Miller in branch 'dev/trunk'
          [ https://svn.apache.org/r1560859 ]

          SOLR-5658: commitWithin and overwrite are not being distributed to replicas now that SolrCloud uses javabin to distribute updates.

          ASF subversion and git services added a comment -

          Commit 1560860 from Mark Miller in branch 'dev/branches/branch_4x'
          [ https://svn.apache.org/r1560860 ]

          SOLR-5658: commitWithin and overwrite are not being distributed to replicas now that SolrCloud uses javabin to distribute updates.

          ASF subversion and git services added a comment -

          Commit 1560866 from Mark Miller in branch 'dev/branches/lucene_solr_4_6'
          [ https://svn.apache.org/r1560866 ]

          SOLR-5658: commitWithin and overwrite are not being distributed to replicas now that SolrCloud uses javabin to distribute updates.

          Mark Miller added a comment -

          As a separate issue, it probably makes sense to send request level commitWithin as a param rather than setting it per doc - that would mean less repeated data in the request. We still need to properly support per doc like this as well though, because that is the level cmd objects support and we are distributing cmd objects.

          Shalin Shekhar Mangar added a comment -

          This was nasty. Thanks for fixing and back-porting this, Mark!

          I opened SOLR-5660 to send request level commitWithin as a param.

          Mark Miller added a comment -

          Another issue this fixed is that the documents were being serialized and sent twice - though they were not processed twice, so just wasteful and not functionally problematic.

          Erik Hatcher added a comment -

          [~markmiller@gmail.com] Is this ticket complete as of Solr 4.6.1? Just wondering if it can be closed. Thanks!

          ASF subversion and git services added a comment -

          Commit 1562836 from shalin@apache.org in branch 'dev/trunk'
          [ https://svn.apache.org/r1562836 ]

          SOLR-5658: Removing System.out.println in JavaBinUpdatedRequestCodec added for debugging

          Shalin Shekhar Mangar added a comment -

          Perhaps I should remove this println as another issue because this has already been released?

          ASF subversion and git services added a comment -

          Commit 1562860 from shalin@apache.org in branch 'dev/branches/branch_4x'
          [ https://svn.apache.org/r1562860 ]

          SOLR-5658: Removing System.out.println in JavaBinUpdatedRequestCodec added for debugging

          Mark Miller added a comment -

          Thanks Shalin! Your call on the new issue or not.

          Jessica Cheng Mallet added a comment -

          Hi Mark,

          Is the change of serialization of docMap from Map to List necessary? Looks like the unmarshaled docMap isn't used anymore either, but the casting there is causing a "ClassCastException: java.util.LinkedHashMap cannot be cast to java.util.List" when interacting with an older solrj client.

          Thanks,
          Jessica
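
          A minimal, self-contained illustration of the mixed-version failure described above (the names and values are illustrative; the point is that the older side of the wire still produces a Map where the newer side now casts to a List):

          import java.util.LinkedHashMap;
          import java.util.List;
          import java.util.Map;

          public class DocsMapCastDemo {
              @SuppressWarnings("unchecked")
              public static void main(String[] args) {
                  // What the older side still serializes for the entry: a Map...
                  Object docsMapEntry = new LinkedHashMap<String, Object>();

                  // ...cast the way the newer unmarshal code expects it, as a List:
                  // throws ClassCastException: java.util.LinkedHashMap cannot be cast to java.util.List
                  List<Map.Entry<Object, Object>> docMap = (List<Map.Entry<Object, Object>>) docsMapEntry;
                  System.out.println(docMap);
              }
          }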

          Yonik Seeley added a comment -

          Ah, this explains the issue our partner was having when testing out HDS 4.6.1 - he didn't upgrade the entire cluster from 4.6.0 at once and got an exception in JavaBinCodec complaining about "Unknown type 19".

          Not sure if there is much we can do about it now given that 4.6.1 has been released.

          Mark Miller added a comment -

          Oh darn, that's no good. Need to fix that for 4.7.

          Mark Miller added a comment (edited) -

          "Is the change of serialization of docMap from Map to List necessary?"

          It's part of supporting the iterator/streaming case. It's needed because of how things currently work. However, we shouldn't break with a class cast exception when using an older client - we should have the same old bad behavior.

          Mark Miller added a comment -

          We should probably add the ability to configure what format distributed updates will use internally, so that you can temporarily flip to xml or something for this type of issue.

          Jessica Cheng Mallet added a comment -

          Thanks for looking at it Mark!

          If you don't mind me asking, the one thing I didn't understand is why docMap is needed. The line in unmarshal

          docMap = (List<Entry<SolrInputDocument,Map<Object,Object>>>) namedList[0].get("docsMap");

          loads docMap from the named list but the docMap variable doesn't seem to be used anywhere. Also, a text search of "docsMap" seems to indicate that JavaBinUpdateRequestCodec is the only class using it. What am I missing?

          Thanks,
          Jessica

          Mark Miller added a comment -

          exception in JavaBinCodec complaining about "Unknown type 19"

          I hadn't considered going the other way - new client to old.

          That's a bummer. Sucks this was implemented wrong first.

          Hoss Man added a comment -

          Mark: since this issue was recorded as "fixed" in the 4.6.1 CHANGES, re-opening it now to address the problem it may have caused seems like a bad idea from an accountability standpoint – since if/when it's fixed, it will be confusing to users if it gets "re-recorded" in CHANGES under 4.7 (or whatever).

          Suggest you re-resolve this, and open a new linked ("Broken By") issue for the newly discovered problem in 4.6.1.

          Mark Miller added a comment -

          The reopen is just so it's not lost until we figure out what, if anything, we do.

          Mark Miller added a comment -

          The backcompat issue was moved to SOLR-5658. Resolving this.


            People

            • Assignee: Mark Miller
            • Reporter: Varun Thacker
            • Votes: 0
            • Watchers: 13
