
SOLR-139: Support updateable/modifiable documents

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 4.0
    • Component/s: update
    • Labels: None

      Description

      It would be nice to be able to update some fields on a document without having to insert the entire document.

      Given the way Lucene is structured, one can (for now) only modify stored fields.

      While we are at it, we can support incrementing an existing value - I think this only makes sense for numbers.

      for background, see:
      http://www.nabble.com/loading-many-documents-by-ID-tf3145666.html#a8722293

      Attachments

      1. SOLR-269+139-ModifiableDocumentUpdateProcessor.patch
        91 kB
        Ryan McKinley
      2. SOLR-139-XmlUpdater.patch
        10 kB
        Ryan McKinley
      3. SOLR-139-ModifyInputDocuments.patch
        43 kB
        Ryan McKinley
      4. SOLR-139-ModifyInputDocuments.patch
        39 kB
        Ryan McKinley
      5. SOLR-139-ModifyInputDocuments.patch
        57 kB
        Ryan McKinley
      6. SOLR-139-ModifyInputDocuments.patch
        51 kB
        Ryan McKinley
      7. SOLR-139-IndexDocumentCommand.patch
        31 kB
        Ryan McKinley
      8. SOLR-139-IndexDocumentCommand.patch
        31 kB
        Ryan McKinley
      9. SOLR-139-IndexDocumentCommand.patch
        57 kB
        Ryan McKinley
      10. SOLR-139-IndexDocumentCommand.patch
        33 kB
        Ryan McKinley
      11. SOLR-139-IndexDocumentCommand.patch
        43 kB
        Ryan McKinley
      12. SOLR-139-IndexDocumentCommand.patch
        46 kB
        Ryan McKinley
      13. SOLR-139-IndexDocumentCommand.patch
        36 kB
        Ryan McKinley
      14. SOLR-139-IndexDocumentCommand.patch
        29 kB
        Ryan McKinley
      15. SOLR-139-IndexDocumentCommand.patch
        26 kB
        Ryan McKinley
      16. SOLR-139-IndexDocumentCommand.patch
        36 kB
        Ryan McKinley
      17. SOLR-139-IndexDocumentCommand.patch
        47 kB
        Ryan McKinley
      18. SOLR-139.patch
        48 kB
        Yonik Seeley
      19. SOLR-139.patch
        55 kB
        Yonik Seeley
      20. SOLR-139_createIfNotExist.patch
        4 kB
        Yonik Seeley
      21. getStoredFields.patch
        10 kB
        Yonik Seeley
      22. getStoredFields.patch
        11 kB
        Yonik Seeley
      23. getStoredFields.patch
        11 kB
        Yonik Seeley
      24. getStoredFields.patch
        19 kB
        Yonik Seeley
      25. getStoredFields.patch
        25 kB
        Yonik Seeley
      26. Eriks-ModifiableDocument.patch
        44 kB
        Erik Hatcher
      27. Eriks-ModifiableDocument.patch
        53 kB
        Ryan McKinley
      28. Eriks-ModifiableDocument.patch
        52 kB
        Ryan McKinley
      29. Eriks-ModifiableDocument.patch
        50 kB
        Ryan McKinley
      30. Eriks-ModifiableDocument.patch
        65 kB
        Ryan McKinley
      31. Eriks-ModifiableDocument.patch
        65 kB
        Ryan McKinley

          Activity

          Ryan McKinley added a comment -

          SOLR-139-IndexDocumentCommand.patch adds a new command to UpdateHandler and deprecates 'AddUpdateCommand'

          This patch is only concerned with adding updateability to the UpdateHandler, it does not deal with how request handlers specify what should happen with each field.

          I added:

          public class IndexDocumentCommand
          {
            public enum MODE {
              OVERWRITE,  // overwrite existing values with the new one (the default behavior)
              APPEND,     // add the new value to the existing value
              DISTINCT,   // same as APPEND, but make sure each value is distinct
              INCREMENT,  // increment the existing value. Must be a number!
              REMOVE      // remove the previous value
            };

            public boolean overwrite = true;
            public SolrDocument doc;
            public Map<SchemaField,MODE> mode;  // What to do for each field. null is the default
            public int commitMaxTime = -1;      // make sure the document is committed within this much time
          }

          RequestHandlers will need to fill in the 'mode' map if they want to support updateability. Calling mode.put( null, APPEND ) sets the default mode.
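
          For illustration, a handler might fill in the map like this (a minimal sketch against the API proposed above; the field names and the updateHandler entry point are hypothetical):

          IndexDocumentCommand cmd = new IndexDocumentCommand();
          cmd.doc = doc;
          cmd.mode = new HashMap<SchemaField,IndexDocumentCommand.MODE>();
          cmd.mode.put( null, IndexDocumentCommand.MODE.APPEND );  // default for unlisted fields
          cmd.mode.put( schema.getField("price"), IndexDocumentCommand.MODE.INCREMENT );
          cmd.mode.put( schema.getField("cat"), IndexDocumentCommand.MODE.DISTINCT );
          updateHandler.index( cmd );  // hypothetical method name for submitting the command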

          Ryan McKinley added a comment -

          I just attached a modified XmlUpdateRequestHandler that uses the new IndexDocumentCommand. I left this out originally because I think the discussion around syntax and functionality should be separated... BUT without some example it is tough to get a sense of how this would work, so I added one.

          Check the new file:
          monitor-modifier.xml

          It starts with:
          <add mode="cat=DISTINCT,features=APPEND,price=INCREMENT,sku=REMOVE,OVERWRITE">
          <doc>
          <field name="id">3007WFP</field>
          ...

          If you run ./post.sh monitor-modifier.xml multiple times and check: http://localhost:8983/solr/select?q=id:3007WFP you should notice
          1) the price increments by 5 each time
          2) there is an additional 'feature' line each time
          3) the categories are distinct even if the input is not

          sku=REMOVE is required because sku is a stored field that is written to with copyField.

          Although I think this syntax is reasonable, this is just an example intended to spark discussion. Other things to consider:

          • rather than 'field=mode' we could use 'field:mode'; this may look less like HTTP request parameter syntax
          • The update handler could automatically skip any stored field that is the target of a 'copyField'. This is the most common case, so it may be the most reasonable default.
          Yonik Seeley added a comment -

          Haven't had a chance to check out any code, but a few quick comments:

          If the field modes were parameters, they could be reused for other update handlers like SQL or CSV
          Perhaps something like:
          /update/xml?modify=true&f.price.mode=increment,f.features.mode=append

          > sku=REMOVE is required because sku is a stored field that is written to with copyField.
          I'm not sure I quite grok what REMOVE means yet, and how it fixes the copyField problem.

          Another way to work around copyField is to only collect stored fields that aren't copyField targets. Then you run the copyField logic while indexing the document again (as you should anyway).

          I think I'll have a tagging use case that requires removing a specific field value from a multivalued field. Removing based on a regex might be nice too, though.

          f.tag.mode=remove
          f.tag.mode=removeMatching or removeRegex

          Yonik Seeley added a comment -

          I browsed the code really quick, looking for the tricky part... It's here:
          + openSearcher();
          + Term t = new Term( uniqueKey.getName(), uniqueKey.getType().toInternal( id.toString() ) );
          + int docID = searcher.getFirstMatch( t );

          When you overwrite a document, it really just adds another instance... so the index contains multiple copies. When we "commit", deletes of the older versions are performed. So you really want the last doc matching a term, not the first.

          Also, you need to make sure that the searcher you are using can actually "see" the last document (once a searcher is opened, it only sees documents that were added before the last IndexWriter close()).

          So a quick fix would be to do a commit() first, which would close the writer and then delete any old copies of documents.
          Opening and closing readers and writers is very expensive though.
          You can get slightly more sophisticated by checking the pset (the set of pending documents) and skipping the commit() if the doc you are updating isn't in there (so you know an older searcher will still have the freshest doc for that id).

          We might be able to get more efficient yet in the future by leveraging NewIndexModifier: SOLR-124
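
          A minimal sketch of what "last match" could look like with the Lucene 2.x TermDocs API (helper name hypothetical; newer copies of an overwritten document get higher doc ids, so the final match is the freshest):

          public static int getLastMatch( IndexReader reader, Term t ) throws IOException {
            TermDocs td = reader.termDocs( t );
            int docID = -1;
            try {
              while( td.next() ) {
                docID = td.doc();  // keep advancing; the last value is the newest copy
              }
            } finally {
              td.close();
            }
            return docID;
          }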

          Yonik Seeley added a comment -

          Oh, so we obviously need some tests that modify the same document multiple times w/o a commit in between.

          Ryan McKinley added a comment -

          >
          > If the field modes were parameters, they could be reused for other update handlers like SQL or CSV
          > Perhaps something like:
          > /update/xml?modify=true&f.price.mode=increment,f.features.mode=append
          >

          Yes. I think we should decide on a standard 'modify modes' syntax that can be used across handlers. In this example, I am using the string:

          mode=cat=DISTINCT,features=APPEND,price=INCREMENT,sku=REMOVE,OVERWRITE

          and passing it to 'parseFieldModes' in XmlUpdateRequestHandler.
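
          For reference, parsing that string into the per-field mode map might look roughly like this (a minimal sketch assuming the IndexDocumentCommand.MODE enum above, not the exact patch code):

          Map<SchemaField,IndexDocumentCommand.MODE> parseFieldModes( IndexSchema schema, String spec ) {
            Map<SchemaField,IndexDocumentCommand.MODE> modes =
                new HashMap<SchemaField,IndexDocumentCommand.MODE>();
            for( String part : spec.split(",") ) {
              int idx = part.indexOf( '=' );  // or ':' with the alternate syntax
              if( idx < 0 ) {
                // a bare mode like "OVERWRITE" sets the default (the null key)
                modes.put( null, IndexDocumentCommand.MODE.valueOf( part ) );
              }
              else {
                // getField() validates the name and throws for unknown fields
                SchemaField f = schema.getField( part.substring( 0, idx ) );
                modes.put( f, IndexDocumentCommand.MODE.valueOf( part.substring( idx + 1 ) ) );
              }
            }
            return modes;
          }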

          Personally, I think all the modes should be specified in a single param rather than as a list of them. I vote for a syntax like:

          <lst name="params">
          <str name="mode">cat=DISTINCT,features=APPEND,price=INCREMENT,sku=REMOVE,OVERWRITE</str>
          </lst>
          or:
          <lst name="params">
          <str name="mode">cat:DISTINCT,features:APPEND,price:INCREMENT,sku:REMOVE,OVERWRITE</str>
          </lst>

          rather than:

          <lst name="params">
          <str name="f.cat.mode">DISTINCT</str>
          <str name="f.features.mode">APPEND</str>
          <str name="f.price.mode">INCREMENT</str>
          <str name="f.sku.mode">REMOVE</str>
          <str name="modify.default.mode">OVERWRITE</str>
          </lst>

          >> sku=REMOVE is required because sku is a stored field that is written to with copyField.
          > I'm not sure I quite grok what REMOVE means yet, and how it fixes the copyField problem.
          >

          I'm using 'REMOVE' to say "remove the previous value of this field before doing anything." Essentially, this makes sure your new document does not start with a value for 'sku'.

          > Another way to work around copyField is to only collect stored fields that aren't copyField targets.

          I just implemented this. It is the most common case, so it should be the default. It can be overridden by setting the mode for a copyField explicitly.

          Yonik Seeley added a comment -

          If modes are in a single param, a ":" syntax might be nicer because you could cut-n-paste it into a URL w/o escaping.

          > I'm using 'REMOVE' to say "remove the previous value of this field before doing anything." Essentially, this makes sure your new
          > document does not start with a value for 'sku'.

          That wouldn't work for multi-valued fields though, right?
          If we keep this option, perhaps we should find a better name... to me "remove" suggests that the given value should be removed. Think adding / removing tag values from a document.

          Ryan McKinley added a comment -

          Is this what you are suggesting?

          I added a second searcher to DirectUpdateHandler2 that is only closed when you call commit():

          // Check if the document has not been committed yet
          Integer cnt = pset.get( id.toString() );
          if( cnt != null && cnt > 0 ) {
            commit( new CommitUpdateCommand(false) );
          }

          if( committedSearcher == null ) {
            committedSearcher = core.newSearcher("DirectUpdateHandler2.committed");
          }

          Term t = new Term( uniqueKey.getName(), uniqueKey.getType().toInternal( id.toString() ) );
          int docID = committedSearcher.getFirstMatch( t );
          if( docID >= 0 ) {
            Document luceneDoc = committedSearcher.doc( docID );
            // set the new doc as the existingDoc + our changes
            DocumentBuilder builder = new DocumentBuilder( schema );
            add.doc = builder.build( this.modifyDocument( luceneDoc, cmd.doc, cmd.mode ) );
          }

          This passes a new test that adds the same doc multiple times. BUT it does commit each time.

          Another alternative would be to keep a Map<String,Document> of the pending documents in memory. Then we would not have to commit each time something has changed.

          Ryan McKinley added a comment -

          >
          > That wouldn't work for multi-valued fields though, right?

          'REMOVE' on a multi-valued field would clear the old values before adding the new ones. It is essentially the same as OVERWRITE, except that you may or may not pass in a new value on top of the old one.

          > If we keep this option, perhaps we should find a better name...

          How about 'IGNORE' or 'CLEAR'? It is awkward because it refers to what was in the document before, not what you are passing in.

          The more I think about it, the more I think we should drop the 'REMOVE' option. You can get the same effect using 'OVERWRITE' and passing in a null value.

          Yonik Seeley added a comment -

          > I added a second searcher to DirectUpdateHandler2 that is only closed when you call commit()

          That's OK for right now, but longer term we should re-use the other searcher.
          Previously, the searcher was only used for deletions, hence we always had to close it before we opened a writer.
          If it's going to be used for doc retrieval now, we should change closeSearcher() to flushDeletes() and only really close the searcher if there had been deletes pending (from our current deleteByQuery implementation).

          Ryan McKinley added a comment -

          I added a new version of SOLR-139-IndexDocumentCommand.patch that:

          • gets rid of 'REMOVE' option
          • uses a separate searcher to search for existing docs
          • includes the XmlUpdateRequestHandler
          • moves general code from XmlUpdateRequestHandler to SolrPluginUtils
          • adds a few more tests

          Can someone with a better Lucene understanding look into re-using the existing searcher as Yonik suggests above - I don't quite understand the other DUH2 implications.

          I moved the field mode parsing (and validation) into SolrPluginUtils. This could be used by other RequestHandlers to parse the mode map.

          The XmlUpdateRequestHandler in this patch should support all legacy calls except cases where overwritePending != overwriteCommitted. There are no existing tests with this case, so it is not a problem from the testing standpoint. I don't know if anyone is using this (mis?) feature.

          Ryan McKinley added a comment -

          This is a minor change that keeps the OVERWRITE property if it is specified. The previous version ignored the update mode if everything was OVERWRITE.

          Ryan McKinley added a comment -

          Added missing files. This should be ready to check over if anyone has some time.

          Ryan McKinley added a comment -

          An updated and cleaned up version. The big change is to the SolrDocument interface - rather than exposing Maps directly, it hides them behind an interface:

          http://solrstuff.org/svn/solrj/src/org/apache/solr/util/SolrDocument.java

          Still needs some test cases

          Ryan McKinley added a comment -

          This now depends on SOLR-193.

          Ryan McKinley added a comment -

          Applies (almost cleanly) against trunk + SOLR-193.

          Ryan McKinley added a comment -

          Updated with trunk.

          Ryan McKinley added a comment -
          • Updated to work with trunk.
          • Moved the MODE enum to common.

          This could be added without affecting any call to the XmlUpdateHandler – (for now) this only works with the StaxUpdateHandler. Adding it will help clarify some of the DocumentBuilder needs/issues...

          Ryan McKinley added a comment -
          • Updated for new SolrDocument implementation.
          • Testing via solrj.

          I think this should be committed soon. It should only affect performance if you specify a 'mode' in the add command.

          Yonik Seeley added a comment -

          Another use case to keep in mind... someone might use an UpdateRequestProcessor
          to add new fields to a document (something like copyField, but more complex). Some of this logic might be based on the value of other fields themselves.

          example1: a userTag field that represents tags on objects of the form user#tagstring.
          If user==member, then add tagstring to the indexed-only ownerTags field, else
          add the tagstring to the socialTags field.

          example2: an UpdateRequestProcessor is used to encode the value of a field with rot13... this should obviously only be done for new field values, and not values that are just being re-stored, so the UpdateRequestProcessor needs to be able to distinguish between the two.

          example3: some field values are pulled from a database when missing rather than being stored values.

          I think we need to try to find the right UpdateRequestProcessor interface to support all these with UpdateableDocuments.
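
          A sketch of example2 in processor-chain style (a minimal illustration, not the interface under discussion here; the class and field names are assumed, and the "new values only" check is left out):

          public class Rot13Processor extends UpdateRequestProcessor {
            public Rot13Processor( UpdateRequestProcessor next ) { super( next ); }

            @Override
            public void processAdd( AddUpdateCommand cmd ) throws IOException {
              SolrInputDocument doc = cmd.getSolrInputDocument();
              Object v = doc.getFieldValue( "secret" );  // hypothetical field
              if( v != null ) {
                doc.setField( "secret", rot13( v.toString() ) );
              }
              super.processAdd( cmd );  // pass down the chain; the last link indexes
            }

            private static String rot13( String s ) {
              StringBuilder sb = new StringBuilder( s.length() );
              for( char c : s.toCharArray() ) {
                if( c >= 'a' && c <= 'z' )      sb.append( (char)('a' + (c - 'a' + 13) % 26) );
                else if( c >= 'A' && c <= 'Z' ) sb.append( (char)('A' + (c - 'A' + 13) % 26) );
                else                            sb.append( c );
              }
              return sb.toString();
            }
          }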

          Ryan McKinley added a comment -

          So you are suggesting pulling this out of the UpdateHandler and managing the document merging in the UpdateRequestProcessor? (This might make sense - it was not an option when the patch started in February.)

          How can the UpdateHandler get access to pending documents? Should it just use req.getSearcher()?

          > example1: a userTag field that represents tags on objects of the form user#tagstring.
          > If user==member, then add tagstring to the indexed-only ownerTags field, else
          > add the tagstring to the socialTags field.
          >
          > example2: an UpdateRequestProcessor is used to encode the value of a field with rot13... this should obviously only be done for new field values, and not values that are just being re-stored, so the UpdateRequestProcessor needs to be able to distinguish between the two.
          >

          1 & 2 seem pretty straightforward.

          > example3: some field values are pulled from a database when missing rather than being stored values.
          >

          Do you mean as input or output? The UpdateRequestProcessor could not affect whether a field is stored or not, but it could augment a document with more fields before it is indexed. To add fields from a database rather than storing them, we would need a hook at the end.

          Yonik Seeley added a comment -

          > So you are suggesting [...]

          I don't have a concrete implementation idea, I'm just going over all the things I know people will want to do (and many of these I have an immediate use for).

          > Do you mean as input or output?

          Input, for index-only fields. Normally field values need to be stored for an "update" to work, but we could also allow the user to get these field values from an external source.

          > we would need a hook at the end.

          Yes, it might make sense to have more than one callback method per UpdateRequestProcessor

          Of course now that I finally look at the code, UpdateRequestProcessor isn't quite what I expected.
          I was originally thinking more along the lines of DocumentMutator(s) that manipulate a document, not ones that actually initiate the add/delete/update calls. But there is a certain greater power to what you are exposing/allowing too (as long as you don't need multiple of them).

          In UpdateRequestProcessor, instead of
          protected final NamedList<Object> response;
          why not just expose SolrQueryRequest and SolrQueryResponse?

          Yonik Seeley added a comment -

          > How can the UpdateHandler get access to pending documents? Should it just use req.getSearcher()?

          I don't think so... the document may have been added more recently than the last searcher was opened.
          We need to use the IndexReader from the UpdateHandler. The reader and writer can both remain open as long as the reader is not used for deletions (which is a write operation and hence exclusive with IndexWriter).

          Yonik Seeley added a comment -

          > (as long as you don't need multiple of them).

          If you had a bunch of independent things you wanted to do, having a single processor forces you to squish them together (say you even added another document type to your index and then wanted to add another mutation operation).

          What if we had some sort of hybrid between the two approaches: a list of processors, where the last has primary responsibility for getting the documents into the index?

          Ryan McKinley added a comment -

          This implements modifiable documents in an UpdateRequestProcessor.

          This adds two example docs:
          modify1.xml and modify2.xml

          <add mode="OVERWRITE,price:INCREMENT,cat:DISTINCT">
          <doc>
          <field name="id">CSC</field>
          <field name="name">Campbell's Soup Can</field>
          <field name="manu">Andy Warhol</field>
          <field name="price">23.00</field>
          <field name="popularity">100</field>
          <field name="cat">category1</field>
          </doc>
          </add>

          will increment the price by 23 each time it is run.

          Ryan McKinley added a comment -

          This implements modifiable documents in the SOLR-269 update processor chain.

          If the request does not have a 'mode' string, the ModifyDocumentProcessorFactory does not add a processor to the chain.

          Yonik Seeley added a comment -

          A general issue with update processors and modifiable documents - that is, with keeping this stuff out of the update handler - is that the update handler knows much more about the index than we do outside, and staying outside constrains implementation (and performance optimizations).

          For example, if modifiable documents were implemented in the update handler, and the old version of the document hasn't been committed yet, the update handler could buffer the complete modify command to be executed at a later time (the much slower alternative is closing the writer and opening a reader to get the latest stored fields, then closing the reader and re-opening the writer).

          Ryan McKinley added a comment -

          Updated patch to work with SOLR-269 UpdateRequestProcessors.

          One thing I think is weird about this is that it uses request parameters, rather than the add command, to specify the mode. That is, to modify a document you have to send:

          /update?mode=OVERWRITE,count:INCREMENT
          <add>
          <doc>
          <field name="id">1</field>
          <field name="count">5</field>
          </doc>
          </add>

          rather than:
          <add mode="OVERWRITE,count:INCREMENT">
          <doc>
          <field name="id">1</field>
          <field name="count">5</field>
          </doc>
          </add>

          This is fine, but it makes it hard to have an example 'modify' xml document.

          Ryan McKinley added a comment -

          > the update handler knows much more about the index than we do outside

          Yes. The patch I just attached only deals with documents that are already committed. It uses req.getSearcher() to find existing documents.

          Beyond finding committed or non-committed documents, is there anything else the update handler can do better?

          Is it enough to add something to UpdateHandler to ask for a pending or committed document by uniqueId?

          I like having the actual document manipulation happen in the Processor, because it is an easy place to put in other things like grabbing stuff from a SQL database.

          Yonik Seeley added a comment -

          > I like having the actual document manipulation happen in the Processor, because it is an easy
          > place to put in other things like grabbing stuff from a SQL database.

          The update handler could call the processor when it was time to do the manipulation too.

          Consider the case of future support for ParallelReader, where some fields are in one sub-index and some fields are in another sub-index. I'm not sure if our current processor interfaces will be able to handle that scenario well (but I'm not sure if we should worry about it too much right now either).

          There is another way to perhaps mitigate the problem of a few frequently modified documents causing thrashing: a document cache in the update handler.

          Ryan McKinley added a comment -

          >
          > The update handler could call the processor when it was time to do the manipulation too.
          >

          What are you thinking? Adding the processor as a parameter to AddUpdateCommand?

          > ... ParallelReader, where some fields are in one sub-index ...

          the processor would ask the updateHandler for the existing document - the updateHandler deals with getting it to/from the right place.

          we could add something like:
          Document getDocumentFromPendingOrCommited( String indexId )
          to UpdateHandler and then that is taken care of.

          Other than extracting the old document, what needs to be done that can't be done in the processor?

          Yonik Seeley added a comment -

          >> ... ParallelReader, where some fields are in one sub-index ...
          > the processor would ask the updateHandler for the existing document - the updateHandler deals with
          > getting it to/from the right place.

          The big reason you would use ParallelReader is to avoid touching the less-modified/bigger fields in one index when changing some of the other fields in the other index.

          > What are you thinking? Adding the processor as a parameter to AddUpdateCommand?

          I didn't have a clear alternative... I was just pointing out the future pitfalls of assuming too much implementation knowledge.

          Ryan McKinley added a comment -

          >
          > ... avoid touching the less-modified/bigger fields ...
          >

          Aaah - perhaps a future updateHandler getDocument() function could take a list of the fields it should extract. There are still problems with what to do when you add it... maybe it checks whether anything has changed in the less-modified index? I see your point.

          >> What are you thinking? Adding the processor as a parameter to AddUpdateCommand?
          >
          > I didn't have a clear alternative... I was just pointing out the future pitfalls of assuming too much implementation knowledge.
          >

          I am fine either way – in the UpdateHandler or the Processors.

          Request plumbing-wise, it feels the most natural in a processor. But if we rework the AddUpdateCommand it could fit there too. I don't know if it is an advantage or disadvantage to have the 'modify' parameters tied to the command rather than the request parameters; either way has its pluses and minuses, with no real winner (or loser) IMO.

          In the end, I want to make sure that I never need a custom UpdateHandler (80% of it is Greek to me), but can easily change the 'modify' logic.

          Yonik Seeley added a comment -

          FYI, I'm working on a loadStoredFields() for UpdateHandler now.

          Yonik Seeley added a comment -

          Here's a patch that adds getStoredFields to updateHandler.
          Changes include leaving the searcher open as long as possible (even in conjunction with an open writer, as long as the reader has no unflushed deletions). This would allow reuse of a single index searcher when modifying multiple documents that are not in the pending set.

          No tests for getStoredFields() yet... I plan on a comprehensive update handler test that uses multiple threads to try and flush out any possible concurrency bugs.

          Yonik Seeley added a comment -

          Updated getStoredFields.patch to use synchronization instead of the write lock, since you can't upgrade a read lock to a write lock. I could have started out with the write lock and downgraded it to a read lock, but that would probably lead to much worse concurrency, since it couldn't proceed in parallel with other operations such as adds.

          It would be good if someone could review this for threading issues.
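
          For background, the upgrade restriction is standard java.util.concurrent.ReentrantReadWriteLock behavior; a minimal demonstration (not Solr code):

          import java.util.concurrent.locks.ReentrantReadWriteLock;

          public class LockUpgradeDemo {
            public static void main( String[] args ) {
              ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

              // Downgrading is allowed: write -> read -> release write.
              lock.writeLock().lock();
              lock.readLock().lock();
              lock.writeLock().unlock();
              lock.readLock().unlock();

              // Upgrading is not: while holding the read lock, the write lock
              // cannot be acquired (lock() would block forever; tryLock()
              // reports the failure immediately).
              lock.readLock().lock();
              System.out.println( "upgrade: " + lock.writeLock().tryLock() );  // prints false
              lock.readLock().unlock();
            }
          }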

          Yonik Seeley added a comment -

          Found another bug just by looking at it a little longer... pset needs to be synchronized if all we have is the read lock, since addDoc() actually changes the pset while only holding the readLock (but it also synchronizes). I expanded the sync block to encompass the pset access and merged the two sync blocks. Hopefully this one is correct.

          Mike Klaas added a comment -

          It is my fault that the DUH2 locking is so hairy to begin with, so I should at least review changes to it.

          With your last change, the locking looks sound. However, I noticed a few things:

          This comment is now inaccurate:
          + // need to start off with the write lock because we can't aquire
          + // the write lock if we need to.

          Should openSearcher() call closeSearcher() instead of doing it manually? It looks like searcherHasChanges is not being reset to false.

          Yonik Seeley added a comment -

          Thanks Mike, I've made those changes to my local copy.

          Yonik Seeley added a comment -

          Here's the latest patch, with the beginnings of a test, but I've run into some issues.

          1) I'm getting occasional errors about the index writer already being closed (with Lucene trunk), or a null pointer exception with the Lucene version we have bundled. It's the same issue, I believe... somehow the writer gets closed while someone else is trying to use it. If I comment out verifyLatest() in the test (which calls getStoredFields), it no longer happens.

          2) When I comment out getStoredFields, things seem to run correctly, but memory grows without bound and I eventually run out. 100 threads, each limiting themselves to manipulating 16 possible documents, should not be able to cause this.

          At this point, I think #2 is the most important to investigate. It may be unrelated to this latest patch, and could be related to other people's reports of running out of memory while indexing.

          Yonik Seeley added a comment -

          Weird... I put a profiler on it to try and figure out the OOM issue, and I never saw heap usage growing over time (stayed between 20M and 30M), right up until I got an OOM (it never registered on the profiler).

          Yonik Seeley added a comment -

          The OOM still happens from the command line, even after the Lucene update to 2.2.
          Looks like it's time for old-school instrumentation (printfs, etc.).

          Yonik Seeley added a comment -

          I disabled logging on all of "org.apache.solr" via a filter, and voila, OOM problems are gone.
Perhaps the logger could not keep up with the number of records and they piled up over time (does any component of the logging framework use another thread that might be getting starved?)

          Anyway, it doesn't look like Solr has a memory leak.
          On to the next issue.

          Yonik Seeley added a comment -

          The locking logic for getStoredFields() is indeed flawed.
Closing the writer inside the sync block of getStoredFields() doesn't protect callers of addDoc() from concurrently using that writer. The commit lock acquire will be needed after all... no getting around it, I think.

          Mike Klaas added a comment -

          Darn, you're right: writer.addDocument() is outside of the synchronized block.

We could do as you suggested, downgrading to a read lock from commit. It should only reduce concurrency when the document is in a pending state.

          Yonik Seeley added a comment -

          Attaching a patch for getStoredFields that appears to work.

          Yonik Seeley added a comment -

          So the big issue now is that I don't think we can use getStoredFields() and do document modification outside the update handler. The biggest reason is that I think we need to be able to update documents atomically (in the sense that updates should not be lost).

Consider the use case of adding a new tag to a multi-valued field: if two different clients tag a document at the same time, it doesn't seem acceptable that one of the tags could be lost. So I think that we need a modifyDocument() call on updateHandler, and perhaps a ModifyUpdateCommand to go along with it.

          I'm not sure yet what this means for request processors. Perhaps another method that handles the reloaded storedFields?

          Ryan McKinley added a comment -

          >
          > So I think that we need a modifyDocument() call on updateHandler, and perhaps a ModifyUpdateCommand to go along with it.
          >
          > I'm not sure yet what this means for request processors. Perhaps another method that handles the reloaded storedFields?
          >

          Another option might be some sort of transaction or locking model. Could it block other requests while there is an open transaction/lock?

Consider the case where we need the same atomic protection for non-stored fields loaded from a SQL database. In this case, it may be nice to have locking/blocking happen at the processor level.

          I don't know synchronized well enough to know if this works or is a bad idea, but what about something like:

class ModifyExistingDocumentProcessor {
  void processAdd(AddUpdateCommand cmd) {
    String id = cmd.getIndexedId(schema);
    synchronized( updateHandler ) {
      SolrDocument existing = updateHandler.loadStoredFields( id );
      myCustomHelper.loadTagsFromDB( existing );
      cmd.solrDoc = ModifyDocumentUtils.modifyDocument( ... );
      // This eventually calls: updateHandler.addDoc(cmd);
      super.processAdd(cmd);
    }
  }
}

          This type of approach would need to make sure everyone modifying fields was locking on the same updateHandler.

- - -

          I'm not against adding a ModifyUpdateCommand, I just like having the modify logic sit outside the UpdateHandler.

          Ryan McKinley added a comment -

Updated to use Yonik's 'getStoredFields'. The one change to that is to have UpdateHandler addFields() load fields as Objects (not external strings) and to skip copy fields:

void addFields(Document luceneDoc, SolrDocument solrDoc) {
  for (Fieldable f : (List<Fieldable>)luceneDoc.getFields()) {
    SchemaField sf = schema.getField( f.name() );
    if( !schema.isCopyFieldTarget( sf ) ) {
      Object externalVal = sf.getType().toObject(f);
      solrDoc.addField(f.name(), externalVal);
    }
  }
}

- - -

          This still implements modifiable documents as a RequestProcessor.

          Ryan McKinley added a comment -

          applies with trunk

          Erik Hatcher added a comment -

          I'm experimenting with this patch with tagging. I'm modeling the fields in this way, beyond general document metadata fields:

          <username>_tags
          usernames

          And copyFielding *_tags into "tags".

The usernames field allows seeing all users who have tagged documents.

          Users are allowed to "uncollect" an object, which would remove the <username>_tags field and remove their name from the usernames field. Removing the <username>_tags field use case is covered with the <username>_tags:OVERWRITE mode. But removing a username from the multiValued and non-duplicating usernames field is not.

          An example (from some conversations with Ryan):

          id: 10
          usernames: ryan, erik

          You want to be able to remove 'ryan' but keep 'erik'.

Perhaps we need to add a 'REMOVE' mode to remove the first (all?) matching values:
/update?mode=OVERWRITE,username=REMOVE
          <doc>
          id=10,
          usernames=ryan
          </doc>

          and make the output:

          id: 10
          usernames: erik

          But what about duplicate values?

          Erik Hatcher added a comment -

          One thing to note about overwrite and copyFields is that to keep a purely copyFielded field in sync you must basically remove it (overwrite without providing a value).

          For example, my schema:

          <dynamicField name="*_tag" type="string" indexed="true" stored="true" multiValued="true"/>
          <field name="tag" type="string" indexed="true" stored="true" multiValued="true"/>

          and then this:
          <copyField source="*_tag" dest="tag"/>

The client never provides a value for "tag", only ever <username>_tag values. I was seeing old values in the tag field after doing overwrites of <username>_tag, expecting "tag" to get rewritten entirely. Saying mode=tag:OVERWRITE does the trick. This is understandable, but confusing, as the client then needs to know about purely copyFielded fields that it never sends directly.

          Yonik Seeley added a comment -

          IMO, copyField targets should always be re-generated... so no, it doesn't seem like you should have to say anything about tag if you update erik_tag.

          On a side note, I'm not sure how scalable the field-per-user strategy is. There are some places in Lucene (segment merging for one) that go over each field, merging the properties. Low thousands would be OK, but millions would not be OK.

          Erik Hatcher added a comment -

I agree with the gut feel on how copyFields and overwrite should work, but what about the situation where the client is also sending in values for that field? If it got completely regenerated during a modify operation, data would be lost. No?

          Thanks for the note about field-per-user strategy. In our case, even thousands of users is on down the road. The simplicity of having fields per user is mighty alluring and is working nicely in the prototype for now.

          Yonik Seeley added a comment -

          I think there are a number of restrictions for using this feature:
          1) all source fields (not copyField targets) need to be stored, or included in modify commands
          2) copyField targets should either be unstored, or if stored, should never be explicitly set by the user

          What tags would you send in that aren't user tags? If they are some sort of global tags, then you could do global_tag=foo (reusing your dynamic user_tag field), or create a new globalTags field and an additional copyField to the tag field.
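
For the second of those options, the schema additions might look like this (an illustrative sketch reusing Erik's field names and types, not something from an attached patch):

<field name="globalTags" type="string" indexed="true" stored="true" multiValued="true"/>
<copyField source="globalTags" dest="tag"/>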

          Erik Hatcher added a comment -

A mistake I had made was that my copyField target ("tag") was stored. Setting it to be unstored alleviated the need to overwrite it - thanks!

One thing I noticed is that all fields sent in the update must be stored, but that doesn't really need to be the case for fields being overwritten - perhaps that restriction should be lifted so that it only applies when the stored data is needed.

As for sending in a field that was the target of a copyField - I'm not doing this, nor can I really envision this case, but it seemed like it might be a case to consider here. Perhaps a "text" field that could be optionally set by the client, and is also the destination of copyFields of title, author, etc.

          Yonik Seeley added a comment -

          > One thing I noticed is that all fields sent in the update must be stored, but that doesn't really need to be the case with fields being overwritten - perhaps that restriction should be lifted and only applies when the stored data is needed.

          +1

          > Perhaps a "text" field that could be optionally set by the client, and is also the destination of copyField's of title, author, etc.

Seems like things of this form can always be refactored by adding a field "settableText" and adding another copyField from that to "text".

Erik Hatcher added a comment - edited

          There is a bug in the last patch that allows an update to a non-existent document to create a new document.

          I've corrected this by adding this else clause in ModifyExistingDocumentProcessor:

if( existing != null ) {
  cmd.solrDoc = ModifyDocumentUtils.modifyDocument(existing, cmd.solrDoc, modes, schema);
}
else {
  throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
      "Cannot update non-existent document: " + id);
}

I can't yet generate a patch that is a clean addition of just that bit of code along with the other changes.

          Erik Hatcher added a comment -

Some thoughts on the update request: how about using <update> instead of <add>, and putting the modes as attributes on <update>?

          <update overwrite="title" distinct="cat">
          <field name="id">ID</field>
          <field name="title">new title</field>
          <field name="cat">NewCategory</field>
          </update>

          Using <add> has the wrong implication for this operation. Thoughts?

          Yonik Seeley added a comment -

          > how about using <update> instead of <add>

          We had previously talked about making this distinction (and configuration for each field) in the URL:
          http://localhost:8983/solr/update?mode=title:overwrite,cat:distinct
          This makes it usable and consistent for different update handlers and formats (CSV, future SQL, future JSON, etc)

          but perhaps if we allowed the <add> tag to optionally be called something more neutral like <docs>?

          wrt patches, I think the functionality needs refactoring so that modify document logic is in the update handler. It seems like it's the only clean way from a locking perspective, and it also leaves open future optimizations (like using different indices depending on the fieldname and using a parallel reader across them).

          Erik Hatcher added a comment -

          Added DELETE support in ModifyDocumentUtils like this:

case DELETE:
  if( field != null ) {
    Collection<Object> collection = existing.getFieldValues(name);
    if (collection != null) {
      collection.remove(field.getValue());
      if (collection.isEmpty()) {
        existing.removeField(name);
      }
      else {
        existing.setField(name, collection, field.getBoost());
      }
    }
  }
  // TODO: if field is null, should the field be deleted?
  break;

          Erik Hatcher added a comment -

For posterity, here's the patch I'm running in Collex production right now, and it's working fine thus far.

Sorry, I know this issue has a convoluted set of patches to follow at the moment. I trust that when Ryan is back in action we'll get this tidied up and somehow get Yonik-satisfying refactorings.

          Ryan McKinley added a comment -

          Updated Erik's patch to /trunk and added back the solrj modifiable document tests.

          Ryan McKinley added a comment -

I'm back and have some time to focus on this... Internally, I need to move the document modification to the update handler, but before I get going it would be nice to agree on what we want the external interface to look like.

- - -

          Should we deprecate the AddUpdateCommand and replace it with something else? Do we want one command to do Add/Update/Modify? Two?

- - -

          what should happen if you "modify" a non-existent document?
          a. error – makes sense in a 'tagging' context
b. treat it as an add – makes sense in a "keep these fields up to date" context (I don't want to check whether the document already exists)

          I happen to be working with context b, but I suspect 'a' makes more sense.

- - -

          Should we have different xml syntax for document modification vs add? Erik suggested:

          <update overwrite="title" distinct="cat">
          ...
          </update>

          since 'update' is already used in a few ways, maybe <modify>?

Should we put the id as an XML attribute? This would make it possible to change the unique key.
          <modify id="ID">
          <field name="id">new id</field>
          </modify>
          That may look weird if the uniqueKeyField is not called "id"

          Assuming we put the modes as attributes, I guess multiple fields would be comma delimited?
          <modify distinct="cat,keyword">

          Do you like the default mode called "default" or "mode"?
          <modify id="ID" default="overwrite">?
          <modify id="ID" mode="overwrite">?

          Ryan McKinley added a comment -

          applies to /trunk

          Ryan McKinley added a comment -

          applies with trunk...

          Jörg Kiegeland added a comment -

          A useful feature would be "update based on query", so that documents matching the query condition will all be modified in the same way on the given update fields.
This feature would correspond to SQL's UPDATE command, so that Solr would then cover all the basic commands SQL provides. (While this is a theoretical motivation, I have also missed this feature in my own Solr projects.)
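
Purely to illustrate the request (hypothetical syntax; nothing in the attached patches supports update-by-query), such a command might look like:

<modify query="category:books">
  <field name="inStock" mode="OVERWRITE">false</field>
</modify>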

          Ryan McKinley added a comment -

Updated to work with trunk; no real changes.

The final version of this will need to move the updating logic out of the processor into the UpdateHandler.

          Yonik Seeley added a comment -

I'm having second thoughts about whether this is a good enough approach to really put in core Solr.
Requiring that all fields be stored is a really large drawback, especially for large indices with really large documents.

          Ryan McKinley added a comment -

That is part of why I thought having it in an update request processor makes sense – it can easily be subclassed to pull the existing fields from wherever it needs. Even if it is directly in the UpdateHandler, there could be some interface to loadExistingFields( id ) or something similar.

          Otis Gospodnetic added a comment -

          I'm looking at this issue now. I used to think "sure, this will work fine", but I'm looking at a 1B doc index (split over N servers) and am suddenly very scared of having to store more than necessary in the index. In other words, writing custom field value loaders from external storage sounds like the right thing to do. Perhaps one such loader could simply load from the index itself.

          Lance Norskog added a comment -

          If I may comment?

Is it OK if a first implementation requires all fields to be stored, and a later iteration then supports non-stored fields? This seems to be a complex problem, and you might decide later that the first design is completely bogus.

          Ryan McKinley added a comment -

          updated patch to work with trunk

          David Smiley added a comment -

I agree, Lance. I don't know what the "first design" is that you suggest might be bogus... but we definitely eventually want to handle non-stored fields. I have an index that isn't huge and I'm salivating at the prospect of doing an update. For non-stored fields... it seems that if an update always over-writes such fields with new data, then we should be able to support that now easily, because we don't care what the old data was.

For non-stored fields that need to be retained (Otis's & Yonik's concern)... I wonder what Lucene exposes about the indexed data for a non-stored field. We'd just want to copy this low-level data over to a new document, basically.

          Yonik Seeley added a comment -

> For non-stored fields that need to be retained (Otis & Yoniks concern)... I wonder what Lucene exposes about the indexed data for a non-stored field. We'd just want to copy this low-level data over to a new document, basically.

          Lucene maintains an inverted index, so the "indexed" part is spread over the entire index (terms point to documents). Copying an indexed field would require looping over every indexed term (all documents with that field). It would be very slow once an index got large.
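
A rough sketch of that traversal with the pre-3.x Lucene API, just to make the cost concrete (the index path and field name here are made up; a real recovery would also lose analysis details such as removed stopwords and position gaps):

import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.apache.lucene.index.TermEnum;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class RecoverFieldSketch {
  public static void walkField(String field) throws IOException {
    Directory dir = FSDirectory.getDirectory("index"); // hypothetical index path
    IndexReader reader = IndexReader.open(dir);
    TermEnum terms = reader.terms(new Term(field, "")); // first term of the field
    try {
      do {
        Term t = terms.term();
        if (t == null || !field.equals(t.field())) break; // walked past this field
        TermDocs docs = reader.termDocs(t);
        while (docs.next()) {
          // docs.doc() contains term t.text() in this field; a real recovery
          // would accumulate the terms per document here.
        }
        docs.close();
      } while (terms.next());
    } finally {
      terms.close();
      reader.close();
    }
  }
}

The walk touches every posting of the field, so the cost grows with the total index size, which is why this is impractical once an index gets large.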

          Bill Au added a comment -

I noticed that this bug is no longer included in the 1.3 release. Are there any outstanding issues if all the fields are stored? Requiring that all fields are stored for a document to be update-able seems reasonable to me. This feature will simplify things for Solr users who are doing a query to get all the fields followed by an add when they only want to update a very small number of fields.

          Ryan McKinley added a comment -

The biggest reason this patch won't work is that with SOLR-559, the DirectUpdateHandler2 does not keep track of pending updates – to get this to work again, it will need to maintain a list somewhere.

          Also, we need to make sure the API lets you grab stored fields from somewhere else – as is, it forces you to store all fields for all documents.

Noble Paul added a comment - edited

How about maintaining a separate index at store.index and writing all the documents to that index as well?

In the store.index, all the fields must be stored and none will be indexed. This index will not have any copy fields. It will blindly dump the data as it is.

Updating can be done by reading data from this index. Deletion must also be done on both indices.

This way the original index stays small and users do not have to make all fields stored.

          David Smiley added a comment -

          It's unclear to me how your suggestion, Paul, is better. If at the end of the day, all the fields must be stored on disk somewhere (and that is the case), then why complicate matters and split out the storage of the fields into a separate index? In my case, nearly all of my fields are stored so this would be redundant. AFAIK, having your "main" index "small" only matters as far as index storage, not stored-field storage. So I don't get the point in this.

          Ryan McKinley added a comment -

There are many approaches to make this work – I don't think there will be a one-size-fits-all approach though. Storing all fields in the Lucene index may be fine. Perhaps we want to write the XML to disk when it is indexed, then reload it when the file is 'updated'; perhaps the content should be stored in a SQL db.

David - storing all data in the search index can be a problem because it can get BIG. Imagine if Nutch stored the raw content in the Lucene index. (I may be wrong on this, but) even with lazy loading, there is a query-time cost to having stored fields.

          In general, I think we just need an API that will allow for a variety of storage mechanisms.

          David Smiley added a comment -

          Ryan, I know of course the index can get big because one needs to store all the data for re-indexing; but due to Lucene's fundamental limitations, we can't get around that fact. Moving the data off to another place (a DB of some sort or whatever) doesn't change the fundamental problem. If one is unwilling to store a copy somewhere convenient due to data scalability issues then we simply cannot support the feature because Lucene doesn't have an underlying update capability.

          If the schema's field isn't stored, then it may be useful to provide an API that can fetch un-stored fields for a given document. I don't think it'd be common to use the API and definitely wouldn't be worth providing a default implementation.

          Andrzej Bialecki added a comment -

          It's possible to recover unstored fields, if the purpose of such recovery is to make a copy of the document and update other fields. The process is time-consuming, because you need to traverse all postings for all terms, so it might be impractical for larger indexes. Furthermore, such recovered content may be incomplete - tokens may have been changed or skipped/added by analyzers, positionIncrement gaps may have been introduced, etc, etc.

          Most of this functionality is implemented in Luke "Restore & Edit" function. Perhaps it's possible to implement a new low-level Lucene API to do it more efficiently.

          Mike Klaas added a comment -

> David - storing all data in the search index can be a problem because it can get BIG. Imagine if nutch stored the raw content in the lucene index? (I may be wrong on this) even with Lazy loading, there is a query time cost to having stored fields.

Splitting it out into another store is much better at scale. A distinct Lucene index works relatively well.

          Noble Paul added a comment -
If your index is really big, you most likely have a master/slave deployment. In that case only the master needs to store the data copy; the slaves do not have to pay the 'update tax'.

> Perhaps we want to write the xml to disk when it is indexed, then reload it when the file is 'updated', perhaps the content should be stored in a SQL db.

Writing the XML may not be an option if I use DIH for indexing (there is no XML). And what if I use CSV? That means multiple storage formats. RDBMS storage is a problem because of the incompatibility of Lucene/RDBMS data structures, and creating a schema would be extremely hard because of dynamic fields.

I guess we can have multiple solutions. I can provide a simple DuplicateIndexUpdateProcessor (for lack of a better name) which can store all the data in a duplicate index. Let the user decide what he wants.

> It's possible to recover unstored fields, if the purpose of such recovery is to make a copy of the document and update other fields.

It is not wise to invest our time in doing 'very clever' things, because it is error-prone, unless Lucene gives us a clean API to do so.

          Noble Paul added a comment -

Shall I open another issue for my idea of keeping another index for just the stored fields (in an UpdateRequestProcessor)?

          Is it a good idea to have multiple approaches for the same feature?
          Or should I post the patch in this issue only?

          Ryan McKinley added a comment -

To me the key is getting an interface that would allow the existing fields to be stored in a number of ways (a sketch of such an interface follows the list):

          • within the index itself
          • within an independent index (as you suggest)
          • within SQL
          • on the file system
          • ...
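
A minimal sketch of what that interface could look like (hypothetical names, not part of any attached patch):

import java.io.IOException;
import org.apache.solr.common.SolrDocument;

/**
 * Hypothetical hook letting the update path fetch a document's current
 * field values from whichever store holds them: the index itself, a
 * separate index, SQL, the file system, ...
 */
public interface ExistingFieldsLoader {
  /** Returns the existing fields for the given unique key, or null if absent. */
  SolrDocument loadExistingFields(String id) throws IOException;
}
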
          Noble Paul added a comment -

There are pros and cons with each approach (am I discovering a universal truth here?).

Many approaches can confuse users. I can propose something like:

<modifiable>true</modifiable> in the mainIndex section.
And say
<modificationStrategy>solr.SeparateIndexStrategy</modificationStrategy>
or
<modificationStrategy>solr.SameIndexStrategy</modificationStrategy>

(I did not mention the other two because of personal preferences.)

          Shalin Shekhar Mangar added a comment -

          Marking for 1.5

          Marcus Herou added a comment -

It would make sense to add ParallelReader functionality so a core can read from several index dirs.
Guess it complicates things a little, since you would also need support for adding data to more than one index.

Suggestion:
/update/coreX/index1 - Uses schema1.xml
/update/coreX/index2 - Uses schema2.xml
/select/coreX - Uses all schemas, e.g. a ParallelReader.

Seeing quite a lot of questions on the mailing list from users who want to be able to update a single field while keeping the rest of the index intact (not reindexing).

          Yonik Seeley added a comment -

          ParallelReader assumes you have two indexes that "line up" so the internal docids match. Maintaining something like that would currently be pretty hard or impractical.

          Mark Diggory added a comment -

I notice this is a very long-lived issue and that it is marked for 1.5. Are there outstanding issues or problems with its usage if I apply it to my 1.4 source?

          Hoss Man added a comment -

          Bulk updating 240 Solr issues to set the Fix Version to "next" per the process outlined in this email...

          http://mail-archives.apache.org/mod_mbox/lucene-dev/201005.mbox/%3Calpine.DEB.1.10.1005251052040.24672@radix.cryptio.net%3E

          Selection criteria was "Unresolved" with a Fix Version of 1.5, 1.6, 3.1, or 4.0. email notifications were suppressed.

          A unique token for finding these 240 issues in the future: hossversioncleanup20100527

          Ravish Bhagdev added a comment -

This seems like a very old issue, as someone suggested. Is there any update on whether it will ever be resolved? It is quite an important feature, and its absence causes big problems when you have a huge index and only need to update one column.

          Gerd Bremer added a comment -

          I need this feature. How much do I have to pay in order to get this issue fixed? Can I pass around a piggy bank?

          Yonik Seeley added a comment -

          Here's a patch for updateable docs that reuses the infrastructure we put in place around versioning and realtime-get.

          Only the JSON parser is currently implemented.

          Recall that a normal field value in JSON is of the form:

          "myfield":10
          

          This patch extends the JSON parser to support extended values as a Map. For example, to add an additional value to a multi-valued field, you would use

          "myfield":{"add":10}
          

          Using an existing data structure (a Map) should allow us to pass this through javabin format w/o any additional changes.

This patch depends on optimistic locking (it's currently included in this patch), and updating documents fully supports optimistic locking (i.e. you can conditionally update a document based on its version).

          Right now only "add" and "set" are supported as mutators (and setting a null value is like "remove"), but I figured it would be best to do a slice first from start to finish.

          Comments?
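
As a fuller illustration of the syntax in this patch (the field names and version value are made up; _version_ is the field used by the versioning infrastructure), a single update that sets one field, adds to a multi-valued field, and is conditional on the document's version might look like:

[{"id"        : "mydoc",
  "price"     : {"set" : 99},
  "tags"      : {"add" : "sale"},
  "_version_" : 12345678901}]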

          Yonik Seeley added a comment -

I'm working on getting an XML syntax going. The easiest/least disruptive approach seems to be an attribute:

          Current:

  <field name="foo" boost="2.5">100</field>
          

          Proposed:

  <field name="foo" boost="2.5" update="add">100</field>
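
A complete update document under the proposed syntax might then read (illustrative values):

<add>
  <doc>
    <field name="id">mydoc</field>
    <field name="price" update="set">99</field>
    <field name="tags" update="add">sale</field>
  </doc>
</add>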
          
          Jan Høydahl added a comment -

Cool. Any plans for supporting modification of an existing value? Most useful would be add, subtract (for numeric) and append text (for textual). (In FAST ESP we had this as part of the partial update APIs.)

          Yonik Seeley added a comment -

> Cool. Any plans for supporting modification of existing value?

          Definitely!

• increment or inc (add is taken for adding additional field values; a speculative example follows this list). decrement not needed (just use a negative increment)
          • append/prepend (and maybe allow "add" to mean "append" if a text/string field is multi-valued)
          • set operations for multi-valued fields (union, intersection, remove, etc)
          • we could get into conditionals, but at some point we should just punt that to a script-updator (i.e. update this document using the given script)
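
Following the existing add/set pattern, an increment on a numeric field might then look like this (speculative syntax, not yet in any patch):

"views" : {"inc" : 1}
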
          David Smiley added a comment -

          Yonik; I don't see your patch. Will this support the ability to replace a text field that is indexed? If so, what is the essence of the implementation? The most common use-case I need for my employer involves essentially a complete re-indexing of all docs but only for a few particular fields. I tend to think, at least for my use-case, that it is within reach if I had enough time to work on it. The implementation concept I have involves building parallel segments with aligned docids. Segment merging would clean out older versions of a field.

          Yonik Seeley added a comment -

          Yonik; I don't see your patch.

          Try sorting by date, or click on the "All" tab and you can see where I added it.
          https://issues.apache.org/jira/secure/attachment/12525736/SOLR-139.patch

          what is the essence of the implementation?

          This is the simplest form. Original fields need to be stored. The document is retrieved and then re-indexed after modification.

          Andrzej Bialecki added a comment -

          David, please see LUCENE-3837 for a low-level partial update of inverted fields without re-indexing other fields. That is very much work in progress, and it's more complex. This issue provides a shortcut to a "retrieve stored fields, modify, delete original doc, add modified doc" sequence that users would have to execute manually.
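
          For reference, the manual sequence this issue shortcuts could be sketched as follows (the id and field names are hypothetical; /get is the realtime-get handler):

            # 1. retrieve the stored fields
            curl 'localhost:8983/solr/get?id=doc1'
            # 2. modify the returned document client-side, then
            # 3. re-add it, which replaces (deletes and re-indexes) the original
            curl 'localhost:8983/solr/update?commit=true' -H 'Content-type:application/json' \
              -d '[{"id":"doc1","price":99}]'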

          Yonik Seeley added a comment -

          Here's an updated patch with XML support and cloud tests.

          The underlying mechanism for updating a field can always change in the future (and may depend on the field type), so the really important part of this is the API.

          I plan on committing soon unless someone comes up with some API improvements.

          Yonik Seeley added a comment -

          Committed (5 years after the issue was opened!)

          I'll keep this issue open and we can add follow-on patches to implement increment and other set operations.

          Seshagiri Nuthalapati added a comment -

          yonik, I see a small issue with this.

          • I have added one of the example documents from the exampledocs folder. The schema has "price" and "price_c" (a copyField of "price").

          1. When I add the document, it looks as follows:

          <str name="id">TWINX2048-3200PRO</str>
          <str name="name">CORSAIR XMS 2GB (2 x 1GB) 184-Pin DDR SDRAM Unbuffered DDR
          400 (PC 3200) Dual Channel Kit System Memory - Retail
          </str>
          <str name="manu">Corsair Microsystems Inc.</str>
          <float name="price">185.0</float>
          <arr name="price_c">
          <str>185,USD</str>
          </arr>

          2. Now I want to set the price field to 100, so I sent this JSON:
          [{"id":"TWINX2048-3200PRO","price":{"set":100}}]

          Now the document looks as follows:
          <str name="id">TWINX2048-3200PRO</str>
          <str name="name">CORSAIR XMS 2GB (2 x 1GB) 184-Pin DDR SDRAM Unbuffered DDR
          400 (PC 3200) Dual Channel Kit System Memory - Retail
          </str>
          <str name="manu">Corsair Microsystems Inc.</str>

          <float name="price">100.0</float>
          <arr name="price_c">
          <str>100,USD</str>
          <str>185,USD</str>
          </arr>

          As you can see, the old price value is still there in "price_c". Is there a workaround/patch we can do for this?

          Yonik Seeley added a comment -

          The schema has "price" and "price_c" (copy field from price)

          For this feature to work correctly, source fields (the ones you normally send in) should be stored, and copyField targets (like price_c) should be un-stored.
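
          A minimal sketch of that arrangement in schema.xml (the field types here are assumptions, not taken from this thread):

            <field name="price"   type="float"    indexed="true" stored="true"/>
            <field name="price_c" type="currency" indexed="true" stored="false"/>
            <copyField source="price" dest="price_c"/>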

          Mikhail Khludnev added a comment -

          Yonik,

          it's hard for me to follow. Can you clarify what actually can be updated (the stored fields, the indexed fields, the field cache)? And when will the update be searchable?

          Thanks

          Yonik Seeley added a comment -

          Per the discussion on the mailing list, here's a patch that creates the document being updated if it doesn't exist already. The standard optimistic concurrency mechanism can be used to specify that a document must exist if desired.
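
          Under Solr's optimistic concurrency semantics, the version value can express that existence requirement; a hypothetical request that updates only if the document already exists (a _version_ of 1 means "must exist"; a negative value means "must not exist"):

            curl 'localhost:8983/solr/update' -H 'Content-type:application/json' \
              -d '[{"id":"doc1","_version_":1,"price":{"set":99}}]'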

          Yonik Seeley added a comment -

          The create-if-not-exist patch was committed to both trunk and 4x branch.
          http://svn.apache.org/viewvc?rev=1361301&view=rev

          Christopher Ball added a comment -

          Yonik,

          Do you have an example with the XML syntax? I have been trying to test this in 4.0-Beta, but am obviously not grokking the right syntax =(
          Also, have you tried to use this with a join query? I can think of some interesting use cases =)

          Regards

          Dilip Maddi added a comment -

          Christopher,
          Here is how I am able to update a document by posting this XML:
          <add>
          <doc>
          <field name="id">VA902B</field>
          <field name="price" update="set">300</field>
          </doc>
          </add>

          Robert Muir added a comment -

          Unassigned issues -> 4.1

          Sean Timm added a comment -

          It appears that SolrJ does not yet (as of 4.0 Alpha) support updating fields in a document. Is there a separate Jira ticket for this?

          Mike added a comment -

          Can we get this on the Wiki somewhere? I've been looking around, haven't been able to find it. Not sure where to put it...

          Joel Nothman added a comment -

          Hi,

          I'm a fan of the feature, but not really a fan of the syntax, for the following reasons:

          • It is extremely verbose for batch update operations, e.g. setting a new field on all documents in the index. Surely the update modes should be specified outside of each individual record (either as URL query parameters, or in some content header/wrapper). The current approach is entirely inappropriate for extending to CSV, which might otherwise be an obvious choice of format when adding a single field to each of a set of objects.
          • The distinction between an "insert" and an "update" operation (in SQL terms) is implicit, only indicated by the presence of an object in a JSON value, or by the presence of update in any one of the specified fields. Since insert and update operations are quite distinct on the server, it should select between these on a per-request basis, not per-record.
          • The JSON syntax would appear as if one could extend {"set":100} to {"set":100,"inc":2} on the same field, which is nonsense. It uses a JSON object for inappropriate semantics, where what one actually means is {"op":"set","val":100}, or even {"name":"price","op":"set","val":100}.
          • It may be worth reserving JSON-object-as-value for something more literal in the future.
          Jack Krupansky added a comment -

          Thanks!

          Lukas Graf added a comment - edited

          This feature doesn't work as advertised in Solr 4.0.0 (final).

          Since it's not documented, I used the information in these blog posts (yonik.com, solr.pl) and this ticket to try to get it working, and asked in the #solr IRC channel, to no avail.

          Whenever I use the 'set' command in an update message, it mangles the value to something like

          <str name="Title">{set=My new title}</str>

          , and drops all other fields.

          I tried the JSON as well as the XML syntax for the update message, and I tried it both with a manually defined '_version_' field and without.

          Relevant parts from my schema.xml:

          <schema name="solr-instance" version="1.4">
          
              <fields>
                <field name="Creator" type="string" indexed="true" stored="true" required="false" multiValued="false" />
                <!-- ... -->
                <field name="Title" type="text" indexed="true" stored="true" required="false" multiValued="false" />
                <field name="UID" type="string" indexed="true" stored="true" required="true" multiValued="false" />
                <field name="_version_" type="long" indexed="true" stored="true" required="false" multiValued="false" />
              </fields>
          
              <!-- ... -->
          
              <uniqueKey>UID</uniqueKey>
          
          </schema>
          

          I initially created some content like this:

          $ curl 'localhost:8983/solr/update?commit=true' -H 'Content-type:application/json' -d '[{"UID":"7cb8a43c","Title":"My original Title", "Creator": "John Doe"}]'
          

          Which resulted in this document:

          <doc>
              <str name="UID">7cb8a43c</str>
              <str name="Title">My original Title</str>
              <str name="Creator">John Doe</str>
          </doc>
          

          Then I tried to update that document with this statement:

          $ curl 'localhost:8983/solr/update?commit=true' -H 'Content-type:application/json' -d '[{"UID":"7cb8a43c","Title":{"set":"My new title"}}]'
          

          Which resulted in this mangled document:

          <doc>
              <str name="UID">7cb8a43c</str>
              <str name="Title">{set=My new title}</str>
          </doc>
          

          (I would have expected the document to still have the value 'John Doe' for the 'Creator' field,
          and the value of its 'Title' field updated to 'My new title'.)

          I tried using the XML format for the update message as well:

          <add>
             <doc>
                 <field name="UID">7cb8a43c</field>
                 <field name="Title" update="set">My new title</field>
             </doc>
          </add>
          

          Same result as above.

          Jack Krupansky added a comment -

          I just tried it and the feature does work as advertised. If there is a bug, that should be filed as a separate issue. If there is a question or difficulty using the feature, that should be pursued on the Solr user list.

          For reference, I took a fresh, stock copy of the Solr 4.0 example, no changes to schema or config, and added one document:

          curl 'localhost:8983/solr/update?commit=true' -H 'Content-type:application/json' -d '
          [{"id":"id-123","title":"My original Title", "content": "Initial content"}]'
          

          I queried it and it looked fine.

          I then modified only the title field:

          curl 'localhost:8983/solr/update?commit=true' -H 'Content-type:application/json' -d '
          [{"id":"id-123","title":{"set":"My new title"}}]'
          

          I tried the XML equivalents and that worked fine as well, with the original content field preserved.

          Lukas Graf added a comment -

          Thanks for your response. I thought I had the issue reduced to a simple enough test case, but apparently not. I will try again with a clean stock Solr 4.0, and file a separate issue if necessary, or look for support on the mailing list. My choice of words ('doesn't work as advertised') might have been influenced by frustration about the lack of documentation; sorry for the noise.

          Jack Krupansky added a comment -

          No apology necessary for the noise. I mean, none of us was able to offer a prompt response to the earlier inquiries, and this got me focused on actually trying the feature for the first time.

          Mark Miller added a comment -

          about the lack of documentation

          Since you are eating some of this pain, perhaps you could give a hand when you have it figured out and contribute to our wiki? http://wiki.apache.org/solr/

          Lukas Graf added a comment -

          Certainly, to the extent that I can.

          Lukas Graf added a comment -

          Ok, I finally figured it out by diffing every single difference from my test case to the stock Solr 4.0 example using git bisect.

          The culprit was a missing <updateLog /> directive in solrconfig.xml. As soon as I configured a transaction log, atomic updates worked as expected. I added a note about this at http://wiki.apache.org/solr/UpdateXmlMessages#Optional_attributes_for_.22field.22 .
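
          For reference, the relevant stanza from the stock Solr 4.0 example solrconfig.xml looks roughly like this (the dir system property is the stock default):

            <updateHandler class="solr.DirectUpdateHandler2">
              <updateLog>
                <str name="dir">${solr.ulog.dir:}</str>
              </updateLog>
            </updateHandler>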

          Jack Krupansky added a comment -

          Oh, yeah, that. I actually was going to mention it, but I wanted to focus on running with the stock Solr example first. Actually, we need to look a little closer as to why/whether the <updateLog> directive is really always needed for partial document update. That should probably be a separate Jira issue.

          Mark Miller added a comment -

          we need to look a little closer as to why/whether the <updateLog> directive is really always needed for partial document update.

          I believe yonik chose to implement it by using updateLog features.

          Hoss Man added a comment -

          I believe yonik chose to implement it by using updateLog features.

          I think it has to be - the "real time get" support provided by the updateLog is the only way to guarantee that the document will be available to atomically update it.

          Lukas: if the atomic update code path isn't throwing a big fat error when you try to use it w/o updateLog configured, then that sounds to me like a bug – can you please file a Jira for that?

          Lukas Graf added a comment -

          Filed SOLR-4127: Atomic updates used w/o updateLog should throw an error

          Abhinav Shah added a comment - edited

          I am using apache-solr 4.0.
          I am trying to post the following document -

          curl http://irvis016:8983/solr/collection1/update?commit=true -H "Content-Type: text/xml" --data-binary '<add commitWithin="5000"><doc boost="1.0"><field name="accessionNumber" update="set">3165297</field><field name="status" update="set">ORDERED</field><field name="account.accountName" update="set">US LABS DEMO ACCOUNT</field><field name="account.addresses.address1" update="set">2601 Campus Drive</field><field name="account.addresses.city" update="set">Irvine</field><field name="account.addresses.state" update="set">CA</field><field name="account.addresses.zip" update="set">92622</field><field name="account.externalIds.sourceSystem" update="set">10442</field><field name="orderingPhysician.lcProviderNumber" update="set">60086</field><field name="patient.lpid" update="set">5571351625769103</field><field name="patient.patientName.lastName" update="set">test</field><field name="patient.patientName.firstName" update="set">test123</field><field name="patient.patientSSN" update="set">643522342</field><field name="patient.patientDOB" update="set">1979-11-11T08:00:00.000Z</field><field name="patient.mrNs.mrn" update="set">5423</field><field name="specimens.specimenType" update="set">Bone Marrow</field><field name="specimens.specimenType" update="set">Nerve tissue</field><field name="UID">3165297USLABS2012</field></doc></add>'
          

          This document gets successfully posted. However, the multi-valued field 'specimens.specimenType' gets stored as follows in Solr:

          <arr name="specimens.specimenType">
          <str>{set=Bone Marrow}</str>
          <str>{set=Nerve tissue}</str>
          </arr>
          

          I did not expect "{set=" to be stored along with the text "Bone Marrow".

          My Solr schema.xml definition for the field specimens.specimenType is -

          <field indexed="true" multiValued="true" name="specimens.specimenType" omitNorms="false" omitPositions="true" omitTermFreqAndPositions="true" stored="true" termVectors="false" type="text_en"/>
          

          Can someone help?
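
          The "{set=...}" values being stored verbatim match the symptom Lukas Graf traced above to a missing <updateLog/> in solrconfig.xml, so that is worth checking first. Separately, the JSON syntax can set all values of a multi-valued field in one operation by passing an array; a hypothetical equivalent of the request above:

            [{"UID":"3165297USLABS2012",
              "specimens.specimenType":{"set":["Bone Marrow","Nerve tissue"]}}]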

          Uwe Schindler added a comment -

          Closed after release.


            People

            • Assignee: Unassigned
            • Reporter: Ryan McKinley
            • Votes: 57
            • Watchers: 81
