SOLR-6304

Transforming and Indexing custom JSON data

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 4.10, 6.0
    • Component/s: None
    • Labels:
      None

      Description

      example

      curl 'localhost:8983/update/json/docs?split=/batters/batter&f=recipeId:/id&f=recipeType:/type&f=id:/batters/batter/id&f=type:/batters/batter/type' -d '
      {
        "id": "0001",
        "type": "donut",
        "name": "Cake",
        "ppu": 0.55,
        "batters": {
          "batter": [
            { "id": "1001", "type": "Regular" },
            { "id": "1002", "type": "Chocolate" },
            { "id": "1003", "type": "Blueberry" },
            { "id": "1004", "type": "Devil'\''s Food" }
          ]
        }
      }'
      

      should produce the following output docs

      { "recipeId":"0001", "recipeType":"donut", "id":"1001", "type":"Regular" }
      { "recipeId":"0001", "recipeType":"donut", "id":"1002", "type":"Chocolate" }
      { "recipeId":"0001", "recipeType":"donut", "id":"1003", "type":"Blueberry" }
      { "recipeId":"0001", "recipeType":"donut", "id":"1004", "type":"Devil's Food" }
      

      The split param is the path of the element in the tree at which the input should be split into multiple docs. The f params are field name mappings.
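      As an illustration, the split-and-map behaviour described above can be sketched in a few lines of Python (a simplified model for clarity, not Solr's actual implementation):

      ```python
      import json

      def flatten(doc, split_path, field_map):
          """Emit one output doc per element under split_path, copying each
          mapped field from either the child record or the enclosing doc."""
          def resolve(obj, path):
              for part in path.strip("/").split("/"):
                  obj = obj[part]
              return obj

          out = []
          for child in resolve(doc, split_path):
              rec = {}
              for field, path in field_map.items():
                  if path.startswith(split_path + "/"):
                      # path points below the split element: read from the child
                      rec[field] = child[path[len(split_path) + 1:]]
                  else:
                      # path points above the split element: read from the parent
                      rec[field] = resolve(doc, path)
              out.append(rec)
          return out

      doc = json.loads('{"id": "0001", "type": "donut", "name": "Cake", '
                       '"batters": {"batter": [{"id": "1001", "type": "Regular"}, '
                       '{"id": "1002", "type": "Chocolate"}]}}')
      docs = flatten(doc, "/batters/batter",
                     {"recipeId": "/id", "recipeType": "/type",
                      "id": "/batters/batter/id", "type": "/batters/batter/type"})
      ```

      Each child of /batters/batter yields one record, with the parent-level fields repeated into every record.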

      1. SOLR-6304.patch
        32 kB
        Noble Paul
      2. SOLR-6304.patch
        20 kB
        Noble Paul


          Activity

          Shalin Shekhar Mangar added a comment -

          I like this. We should try to standardize this across CSV and XML formats too (but don't let that stop you).

          Initially I thought that we could do f.id.map=/batters/batter/id instead of f=id:/batters/batter/id but then that'd mean that not specifying field names would not be possible. In the current syntax, one can just write f=:/batters/batter/id and the field name can automatically be inferred as id if required.

          Noble Paul added a comment -

          In the current syntax, one can just write f=:/batters/batter/id and the field name can automatically be inferred as id

          It will be f=/batters/batter/id; omit the colon too. And I expect this to be a very common use case.
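          A quick sketch of that name-inference rule (a hypothetical helper, not Solr's actual parser): with an explicit name, f=name:path uses the name; with f=:path or just f=path, the last path segment becomes the field name.

          ```python
          def parse_field_spec(spec):
              """Split an f= mapping into (field_name, json_path); when no name
              is given, infer it from the last path segment."""
              name, sep, path = spec.partition(":")
              if sep and name:
                  return name, path          # explicit: f=recipeId:/id
              if not sep:
                  path = spec                # no colon at all: f=/batters/batter/id
              return path.rstrip("/").rsplit("/", 1)[-1], path
          ```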

          Noble Paul added a comment -

          We should try to standardize this across CSV and XML formats too

          We already have a very powerful XPathRecordReader in the DIH. I'm planning to move that into the common util and make this syntax valid for XML as well. But for CSV, we already have a very powerful processing syntax. I'm not sure if we should change that.

          Shalin Shekhar Mangar added a comment -

          it will be f=/batters/batter/id . omit the colon too. And I expect this to be a very common usecase

          Yes, you are right, the colon is not necessary.

          We already have avery powerful XPathRecordReader in the DIH. I'm planning to move that into the common util and make this syntax valid for xml as well.

          +1, yay! Being able to consume most XMLs easily would be great.

          Noble Paul added a comment -

          A streaming parser for JSON

          Noble Paul added a comment -

          This fixes all the cases, including raw json

          I plan to commit this soon

          Erik Hatcher added a comment - - edited

          Noble Paul - looks good! It'll be really cool when this type of flattening is available for XML too. One thing: I think the "echo" debugging parameter should at least be "json.echo" to qualify it, though with XML flattening in the future, maybe "echo" is just fine, or "flatten.echo"? I'm just thinking out loud with namespacing in mind.

          Noble Paul added a comment -

          I think the "echo" debugging parameter in there should at least be "json.echo" to qualify it

          I want all paths to support it. That is why I did not use a prefix. Why couldn't CSV do it too?

          It'll be really cool when this type of flattening is available for XML too.

          It's coming. The capability is already there. I just need to move XPathRecordReader.java to the common util and add a path.

          ASF subversion and git services added a comment -

          Commit 1617287 from Noble Paul in branch 'dev/trunk'
          [ https://svn.apache.org/r1617287 ]

          SOLR-6304 JsonLoader should be able to flatten an input JSON to multiple docs

          Erik Hatcher added a comment -

          I want all paths to support it .That is why I did not use a prefix. Why can't csv too do it?

          Ok, cool. As for CSV, the echo feature is for when an incoming payload is split into multiple documents, right? So it doesn't have quite the same value/effect that it does for this flattening of JSON and XML.

          ASF subversion and git services added a comment -

          Commit 1617296 from Noble Paul in branch 'dev/branches/branch_4x'
          [ https://svn.apache.org/r1617296 ]

          SOLR-6304 JsonLoader should be able to flatten an input JSON to multiple docs

          ASF subversion and git services added a comment -

          Commit 1617424 from Noble Paul in branch 'dev/trunk'
          [ https://svn.apache.org/r1617424 ]

          SOLR-6304 wildcard fix

          ASF subversion and git services added a comment -

          Commit 1617425 from Noble Paul in branch 'dev/branches/branch_4x'
          [ https://svn.apache.org/r1617425 ]

          SOLR-6304 wildcard fix

          Ingo Renner added a comment - - edited

          Just read the article on searchhub for this issue [1]. If echo is meant for debugging purposes and doesn't create documents, wouldn't it make more sense to call the parameter 'debug' or 'dryrun'?

          [1] http://searchhub.org/2014/08/12/indexing-custom-json-data/

          Noble Paul added a comment -

          Ingo Renner: 'debug' somehow suggests that it is actually doing indexing. I thought of 'dryrun', which better describes the functionality, but it is not as simple as the single word 'echo'.

          Bryan Bende added a comment -

          Is there a way to send multiple JSON documents in a single request?

          The comments of JsonRecordReader for the splitPath say:

          • ... Any fields collected in the parent tag or above will also be included in the record, but these are not cleared after emitting the record.
          • It uses the ' | ' syntax of PATH to pass in multiple paths.

          So if you took the example from the blog post with the exams data, sent two JSON documents with different first and last names, and split on /exams, then the first document gets added correctly, but the second document gets two values for first name since it is not cleared after the first record.

          I would imagine there is some way to do this with the correct split path, but I can't figure it out.

          Noble Paul added a comment -

          The example is correct and works as designed.

          I'm not clear on what your requirement is.

          Please give an example of your input and the output you expect.

          Bryan Bende added a comment - - edited

          Sorry I didn't mean to imply that anything was wrong with the example... I wanted to know if it was possible to send multiple JSON documents in a single request, like this:

          curl 'http://localhost:8983/solr/collection1/update/json/docs?split=/exams&f=first:/first&f=last:/last&f=grade:/grade&f=subject:/exams/subject&f=test:/exams/test&f=marks:/exams/marks' \
           -H 'Content-type:application/json' -d '
          {
            "first": "John",
            "last": "Doe",
            "grade": 8,
            "exams": [
                {"subject": "Maths", "test"   : "term1", "marks":90},
                {"subject": "Biology", "test"   : "term1", "marks":86}
                ]
          }
          {
            "first": "Bob",
            "last": "Smith",
            "grade": 7,
            "exams": [
                {"subject": "Maths", "test"   : "term1", "marks":95},
                {"subject": "Biology", "test"   : "term1", "marks":92}
                ]
          }
          '
          

          And then get 4 documents added to Solr:
          john, doe, maths...
          john, doe, biology...
          bob, smith, maths...
          bob, smith, biology...

          An example of the code I was trying to write is here:
          https://github.com/bbende/solrj-custom-json-update/blob/master/src/test/java/org/apache/solr/IndexJSONTest.java
          testAddMultipleJsonDocsWithContentStreamUpdateRequest

          Noble Paul added a comment -

          So, what was the outcome? How many docs were indexed?

          Bryan Bende added a comment -

          For the first JSON document it indexes two Solr documents as expected:
          john, doe, maths...
          john, doe, biology...

          but when it hits the second JSON document it still has values left over from the first document and tries to index a document like:
          [john, bob], [doe, smith], maths....
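          Until the loader handles several concatenated top-level objects in one payload, a possible client-side workaround is to split the stream into individual documents and post each one in its own request. A sketch using Python's json.JSONDecoder.raw_decode (the helper name is illustrative):

          ```python
          import json

          def split_json_stream(payload):
              """Split a payload of concatenated top-level JSON objects into a
              list of documents, so each can be posted separately."""
              decoder = json.JSONDecoder()
              docs, idx, payload = [], 0, payload.strip()
              while idx < len(payload):
                  obj, end = decoder.raw_decode(payload, idx)
                  docs.append(obj)
                  idx = end
                  while idx < len(payload) and payload[idx].isspace():
                      idx += 1               # skip whitespace between objects
              return docs

          payload = '{"first": "John", "last": "Doe"}\n{"first": "Bob", "last": "Smith"}'
          docs = split_json_stream(payload)
          ```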

          Noble Paul added a comment - - edited

          opened a ticket SOLR-7209

          Kelly Kagen added a comment -

          I'm having some difficulty indexing custom JSON data using v5.3.1. I took the same example from the documentation, but it doesn't seem to work as expected. Can someone validate whether this is a bug or an issue with the procedure followed? The scenarios are below.

          Source: Indexing custom JSON data, Transforming and Indexing Custom JSON

          Note: The echo parameter has been added.

          Input:

          curl 'http://localhost:8983/solr/collection1/update/json/docs?split=/exams&f=first:/first&f=last:/last&f=grade:/grade&f=subject:/exams/subject&f=test:/exams/test&f=marks:/exams/marks&echo=true' \
           -H 'Content-type:application/json' -d '
          {
            "first": "John",
            "last": "Doe",
            "grade": 8,
            "exams": [
                {
                  "subject": "Maths",
                  "test"   : "term1",
                  "marks":90},
                  {
                   "subject": "Biology",
                   "test"   : "term1",
                   "marks":86}
                ]
          }'
          

          Output:

          {
            "error":{
              "msg":"Raw data can be stored only if split=/",
              "code":400
            }
          }
          

          Say I pass only '/' to the split parameter as reported, but with a different field mapping; it doesn't seem to index the data per the mentioned fields. Notice the suffix 'Name' added in the input JSON and also in the field mapping.

          Input:

          curl 'http://localhost:8983/solr/collection1/update/json/docs?split=/&f=first:/firstName&f=last:/lastName&f=grade:/grade&f=subject:/exams/subjectName&f=test:/exams/test&f=marks:/exams/marks&echo=true' \
           -H 'Content-type:application/json' -d '
          {
            "firstName": "John",
            "lastName": "Doe",
            "grade": 8,
            "exams": [
                {
                  "subjectName": "Maths",
                  "test"   : "term1",
                  "marks":90},
                  {
                   "subject": "Biology",
                   "test"   : "term1",
                   "marks":86}
                ]
          }'
          

          Output:

          {"responseHeader":{"status":0,"QTime":0},"docs":[{"id":"3c5fa5a0-ff71-4fef-b3e9-8e279cc0d724","_src_":"{  \"firstName\": \"John\",  \"lastName\": \"Doe\",  \"grade\": 8,  \"exams\": [      {        \"subjectName\": \"Maths\",        \"test\"   : \"term1\",        \"marks\":90},        {         \"subject\": \",         \"test\"   : \"term1\",         \"marks\":86}      ]}","text":["John","Doe",8,"Maths",["term1","term1"],[90,86]]}]}
          

          If a field named "id" is present then it is reflected in the response, but all other fields are ignored for some reason.

          Input:

          curl 'http://localhost:8983/solr/collection1/update/json/docs?split=/&f=first:/firstName&f=id:/lastName&f=grade:/grade&f=subject:/exams/subjectName&f=test:/exams/test&f=marks:/exams/marks&echo=true' \
           -H 'Content-type:application/json' -d '
          {
            "firstName": "John",
            "lastName": "Doe",
            "grade": 8,
            "exams": [
                {
                  "subjectName": "Maths",
                  "test"   : "term1",
                  "marks":90},
                  {
                   "subject": "Biology",
                   "test"   : "term1",
                   "marks":86}
                ]
          }'
          

          Output:

          {"responseHeader":{"status":0,"QTime":1},"docs":[{"id":"Doe","_src_":"{  \"firstName\": \"John\",  \"lastName\": \"Doe\",  \"grade\": 8,  \"exams\": [      {        \"subjectName\": \"Maths\",        \"test\"   : \"term1\",        \"marks\":90},        {         \"subject\": \",         \"test\"   : \"term1\",         \"marks\":86}      ]}","text":["John","Doe",8,"Maths",["term1","term1"],[90,86]]}]}
          
          Alexandre Rafalovitch added a comment -

          Seems like a conflict with SOLR-6633 feature (store JSON as a blob). Check your solrconfig.xml for srcField and remove it.

          Noble Paul: I can debug, but I can't explain it. Should these two things be possible at once? Should we document the interplay somewhere?

          Noble Paul added a comment -

          I guess you are not using the schemaless example.

          Please go to your solrconfig.xml and comment out these two lines:

                <!--this ensures that the entire json doc will be stored verbatim into one field-->
                <str name="srcField">_src_</str>
              <!--This means the uniqueKeyField will be extracted from the fields and
               all fields go into the 'df' field. In this config df is already configured to be 'text'
                -->
                <str name="mapUniqueKeyOnly">true</str>
          

          Please note that you should have all your fields specified in your schema.xml before running the example.

          Kelly Kagen added a comment -

          Thank you for the note and it worked this time with defined fields in schema.xml.

          Should it have worked for dynamic fields, as these are also defined in the schema? FYI, it didn't work in my case; it works only with fully defined (static) fields.

          Mikhail Khludnev added a comment -

          This is what happened to me. I raised SOLR-8240; please let me know what you think.

          Noble Paul added a comment -

          Yes, it could work with dynamic fields if your field names match the dynamic field pattern.

          eg:

          f=first_s:/firstName
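          For example, such mappings could be generated programmatically (helper name and the _s suffix are illustrative, assuming the schema defines a matching *_s dynamicField rule):

          ```python
          def dynamic_field_spec(json_path, suffix="_s"):
              """Build an f= mapping whose target field name carries a suffix
              matching a dynamicField pattern such as *_s."""
              name = json_path.rstrip("/").rsplit("/", 1)[-1]
              return f"f={name}{suffix}:{json_path}"
          ```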
          
          sriram vaithianathan added a comment -

          Hi,

          Can you please let me know if such a feature is available when importing JSON from MySQL? I have given more info here: http://lucene.472066.n3.nabble.com/Solr-mysql-Json-import-td4278686.html

          Thanks,
          Sriram

          Erick Erickson added a comment -

          Please don't comment on closed JIRA tickets; they're likely to have very few eyes on them.

          This kind of question is usually best brought up on the Solr user's list.

          sriram vaithianathan added a comment -

          Sure, Erick. I actually raised this in the Solr user group. Since I didn't get a reply, I thought of posting in the corresponding ticket. If you have additional info, kindly add it to my post.


            People

            • Assignee:
              Noble Paul
              Reporter:
              Noble Paul
            • Votes: 0
            • Watchers: 12
