Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 4.0-ALPHA
    • Component/s: web gui
    • Labels: None

      Description

      In SOLR-2399, we added a new admin UI. The issue has gotten too long to follow, so this is a new issue to track remaining tasks.

      Attachments

    1. SOLR-2667-110722.patch
        25 kB
        Stefan Matheis (steffkes)
      2. SOLR-2667-120223-file-structure.patch
        1.73 MB
        Stefan Matheis (steffkes)


          Activity

          Ryan McKinley added a comment -

          Found a minor issue: From the analysis page, pick a numeric field and put text into it. This will return a 500 with: java.lang.NumberFormatException: For input string: "asdgasg"

          BUT the UI says "This Functionality requires the /analysis/field"

          Looks like the error handling should distinguish 404 from 500 (or maybe just any non-200).
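          A minimal sketch of the status check suggested here, written against jQuery as used elsewhere in the admin UI; the handler path comes from the comment, while core_basepath, render_analysis, show_hint and show_error are placeholder names, not the actual UI code.

            // Sketch only: distinguish "handler not registered" (404) from
            // "handler threw an error" (e.g. the 500 from the NumberFormatException above).
            $.ajax({
              url: core_basepath + '/analysis/field?wt=json', // plus the analysis params from the form
              dataType: 'json',
              success: function (response) {
                render_analysis(response); // placeholder
              },
              error: function (xhr) {
                if (xhr.status === 404) {
                  // the handler really is missing from solrconfig.xml
                  show_hint('This functionality requires the /analysis/field handler.');
                } else {
                  // any other non-200: surface the server-side error instead
                  show_error('Analysis request failed with HTTP ' + xhr.status);
                }
              }
            });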

          Ryan McKinley added a comment -

          On the plugins page, I like the default accordion behavior – what do you think about adding a button at the bottom that would 'show all details' or something? It is nice to be able to see all the cache values at once and just scroll through to see if anything looks funny, rather than having to open each one.

          Ryan McKinley added a comment -

          On the query page... I like that it keeps the query options next to the results, and that it shows the raw URL – it would also be nice if the URL it displays was a direct link to that query.

          What about including wt as a drop-down (xml/json/python/ruby/php/csv)? Maybe also a checkbox for &indent=true/false.
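          A rough sketch of how the query form could drive both the request and the copyable direct link mentioned above; the form field names, element id and the build_query_url helper are made up for illustration.

            // Sketch only: assemble the request URL from the form so the same
            // string can be used for the request and shown as a clickable link.
            function build_query_url(form) {
              var params = [ 'q=' + encodeURIComponent(form.q.value) ];
              params.push('wt=' + form.wt.value);        // <select>: xml/json/python/ruby/php/csv
              if (form.indent.checked) {
                params.push('indent=true');
              }
              return core_basepath + '/select?' + params.join('&');
            }

            var url = build_query_url(document.forms['query']);
            $('#query-url').attr('href', url).text(url); // direct link next to the results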

          Stefan Matheis (steffkes) added a comment -

          Replacing the old patch with this new one (based on Rev 1149484), which includes the fix for the analysis page – as well as the 'expand' link on plugins and the changes to the query form.

          Stefan Matheis (steffkes) added a comment -

          And just to mention it .. it's related to the regex-matching solution (lines 3596 & 3605) I'm using to catch the analysis error message – that would be much easier & more solid if we had SOLR-141!
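          For reference, a hedged sketch of the kind of regex matching described here, assuming only that the error body contains an exception line like the NumberFormatException quoted earlier; it is not the actual code at lines 3596/3605.

            // Sketch only: pull an exception class and message out of an error
            // response body, falling back to a generic message when nothing matches.
            function extract_analysis_error(response_text) {
              var match = /([\w.]+Exception):\s*(.+)/.exec(response_text);
              return match
                ? { exception: match[1], message: match[2] }
                : null; // caller shows a generic "request failed" message instead
            }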

          Simon Rosenthal added a comment -

          The status displayed for DIH indexing is not as detailed as that on the old page – I prefer the elapsed time in more precision, rather than 'n minutes ago'.

          Since you're doing a status request every few seconds, would it be possible to add metrics such as 'documents processed per second'? (either for the last few seconds, or since the start of the import, or both)

          Hoss Man added a comment -

          Stefan: looking at the example on trunk today I realized one oddity I've never noticed before. The left nav lists "singlecore" as the label for getting collection-specific information, even though the example configs do in fact have a solr.xml containing info about the one core, and its name is "collection1".

          I understand that you have special logic for the "single core" case because of legacy installs that might not have any solr.xml, so there can never be more than one core because no adminPath is defined – but in this case the core does in fact have a name, and new cores can be added with new names (I tested, it works and new cores show up in the admin beautifully) – so I don't understand why "collection1" shows up as "singlecore" (even after adding additional cores).

          Ryan McKinley added a comment -

          I finally tested this on a bigger index... and the use of /luke makes it unusable. On a large index, collecting the top terms for every field can take a LONG time – in this case >30 secs.

          What about skipping the term list by default and just quickly getting the basic info:

          /admin/luke?numTerms=0
          

          From the 'schema-browser' page, we could then load the field stats for that one field:

          /admin/luke?numTerms=100&fl=field
          

          thoughts?
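          A sketch of the two-step loading described above; the URLs (numTerms, fl) come straight from the comment, while the render_* helpers are placeholders.

            // Sketch only: quick overview first, per-field term stats on demand.
            $.getJSON(core_basepath + '/admin/luke?wt=json&numTerms=0', function (overview) {
              render_field_list(overview.fields); // placeholder: list fields without top terms
            });

            function load_field_details(field_name) {
              var url = core_basepath + '/admin/luke?wt=json&numTerms=100&fl=' +
                        encodeURIComponent(field_name);
              $.getJSON(url, function (details) {
                render_field_details(field_name, details.fields[field_name]); // placeholder
              });
            }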

          David Smiley added a comment -

          Ryan, I agree with your observation and suggestion. The same basic situation happens with the current/old UI for my large index when I go to the schema browser – it seems to lock up for minutes. Eventually I figured out what was going on and I edited my solrconfig.xml to put numTerms=0 by default. Ideally I would be able to request this statistic on demand instead of by default, and with some sort of ajax-loading icon so I know it's thinking. Even more ideal would be to somehow have the UI know that the statistic isn't expensive for certain fields and calculate it there. It's too bad Lucene's index format doesn't contain this metadata in a quick-to-look-up format.

          Mark Miller added a comment -

          shouldn't QUERYHANDLER instead be REQUESTHANDLER?

          Stefan Matheis (steffkes) added a comment -

          Hi all, sorry for the break .. I just haven't really had the time to work on it :/ I will check the comments tomorrow and see what has to be done.

          Stefan Matheis (steffkes) added a comment -

          Simon,

          The status displayed for DIH indexing is not as detailed as that on the old page – I prefer the elapsed time in more precision, rather than 'n minutes ago'.

          Me too - but the only information we have is "started at", including a full date w/ seconds. How long the import has already been running isn't stated anywhere. At the beginning my idea was to calculate the difference manually .. but the main problem is that there is no information about timezones - so you could request stats from a Solr server w/ a different timezone and the calculation would no longer be valid.

          The 'n minutes ago' information is generated by the jquery.timeago plugin, which could of course be disabled - then the full date/time would be visible.

          Since you're doing a status request every few seconds, would it be possible to add metrics such as 'documents processed per second'? (either for the last few seconds, or since the start of the import, or both)

          In general, both, I'd say - but I don't know if it would make sense? The imports that I know of work the following way: start fetching entities and their sub-entities, and only afterwards start to index the documents .. so I could not calculate documents per second or things like that :/

          Perhaps we should consider extending the DIH status w/ this information? Calculating the difference on the server side should be easy .. and adding stats about docs/sec would be more detailed and also more reliable.

          Stefan
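          As a rough illustration of the client-side variant discussed above, here is a sketch that derives docs/sec from two consecutive status polls using only the local clock (so the timezone problem does not apply); the status key name, the response shape and the output element are assumptions.

            // Sketch only: compute docs/sec from the delta between two status polls.
            // 'Total Documents Processed' is assumed to be the DIH status key;
            // adjust if your handler reports something different.
            var previous_poll = null;

            function on_dih_status(response) {
              var processed = parseInt(response.statusMessages['Total Documents Processed'], 10);
              var now = Date.now(); // local clock only

              if (previous_poll !== null && !isNaN(processed)) {
                var docs_per_second = (processed - previous_poll.processed) /
                                      ((now - previous_poll.time) / 1000);
                $('#dih-docs-per-second').text(docs_per_second.toFixed(1) + ' docs/sec'); // placeholder element
              }
              previous_poll = { processed: processed, time: now };
            }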

          Stefan Matheis (steffkes) added a comment -

          Hoss,

          The left nav lists "singlecore" as the label for getting collection-specific information, even though the example configs do in fact have a solr.xml containing info about the one core, and its name is "collection1"

          Perhaps the issue SOLR-2605 is not correctly named, but it's already there

          Stefan

          Stefan Matheis (steffkes) added a comment -

          Mark,

          shouldn't QUERYHANDLER instead be REQUESTHANDLER?

          You're looking at the output of /admin/mbeans, which uses the Category list from SolrInfoMBean.java:

          public enum Category { CORE, QUERYHANDLER, UPDATEHANDLER, CACHE, HIGHLIGHTING, OTHER };

          So, do you think the names should be changed?

          Stefan
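          For context, a sketch of how the Plugins page can group entries by those Category names; the flat name/value array shape of the wt=json response and the render helper are assumptions.

            // Sketch only: group /admin/mbeans output by its Category labels,
            // which is why renaming QUERYHANDLER would show up directly in the UI.
            $.getJSON(core_basepath + '/admin/mbeans?wt=json', function (response) {
              var beans = response['solr-mbeans']; // assumed flat list: 'CORE', {...}, 'QUERYHANDLER', {...}, ...
              for (var i = 0; i < beans.length; i += 2) {
                render_plugin_section(beans[i], beans[i + 1]); // placeholder
              }
            });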

          Stefan Matheis (steffkes) added a comment -

          Ryan & David,

          What about skipping the term list by default and just quickly getting the basic info

          That's a quick fix.

          From the 'schema-browser' page, we could then load the field stats for that one field.

          Yes, we could .. should that be done

          1. automatically after loading the basic information
          2. or manually on button click?

          I actually don't know how important that information will be? I guess it's the information people are after when they use the schema-browser?

          Just to mention it, 'basic information' on the Schema page is: Field-Type, Schema & Tokenizer/Filter – everything else (Index, Docs, Distinct, Terms & Histogram) is only available after requesting luke w/ numTerms.

          Stefan

          McClain Looney added a comment -

          On both Chrome and Safari on OS X, the results iframe is rendered in a useless way (i.e. the way XML is displayed when the content-type isn't set). The only way I can make sense of results is via the browser dev tools, which is sub-optimal.

          Am I missing a component to render pretty XML?

          Stefan Matheis (steffkes) added a comment -

          Am I missing a component to render pretty XML?

          Not really .. actually it's just not completely working as expected. IIRC Erick suggested a tab navigation for all the XML views, choosing "raw" or "rendered" for exactly the cases you've mentioned.

          Tom Hill added a comment -

          I think the files admin-extra.html and admin-extra.menu-top.html are intended to be optional. If that's the case, it might be nicer to not log a stack trace when they are not present. Especially at a "SEVERE" priority.

          SEVERE: org.apache.solr.common.SolrException: Can not find: admin-extra.html [/Users/tom/code/lucene_trunk/solr/example/multicore/core0/conf/admin-extra.html]
          at org.apache.solr.handler.admin.ShowFileRequestHandler.handleRequestBody(ShowFileRequestHandler.java:145)...

          Another minor note: the link to the old admin UI doesn't work in multi-core mode, as it goes to solr/admin. I don't know that it's worth fixing, but thought I'd mention it.

          Joan Codina added a comment -

          Some issues when using it:

          • It is a pity that one cannot indicate the number of terms to view – you can only click more... more... and not modify the number (to ask, for example, for the top 2000 terms). We do that sometimes to check whether there are many misspelled terms.
          • A stupid issue: there is no place where the name of the current field appears in plain text, so that you could cut & paste it to be sure you get the exact spelling.
          • Finally, maybe the graphic could be done using an HTML5 charting tool?
          elisabeth benoit added a comment -

          On the Admin Analysis interface, when I analyze a field type that includes NGramFilterFactory, the columns for every word are very wide and there is no horizontal scroll bar, so I can't see them.

          Mark Miller added a comment -

          I think the files admin-extra.html and admin-extra.menu-top.html are intended to be optional. If that's the case, it might be nicer to not log a stack trace when they are not present. Especially at a "SEVERE" priority.

          We should look at this in another JIRA issue.

          Mark Miller added a comment -

          Another note: if you don't have the sys admin handlers in solrconfig, the old admin pages work fine, but the new page will simply show a spinner. Seems like at the very least we should display an appropriate error message if we want to require the sys admin handlers for the UI to work.
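          A minimal sketch of the fail-visibly behaviour suggested here, assuming the UI bootstraps from the system-info handler; the URL, solr_path, the element id and init_ui are assumptions.

            // Sketch only: show an explicit error instead of an endless spinner
            // when the admin request handlers are not registered in solrconfig.xml.
            $.ajax({
              url: solr_path + '/admin/info/system?wt=json', // assumed bootstrap request
              dataType: 'json',
              timeout: 10000,
              success: function (response) { init_ui(response); }, // placeholder
              error: function (xhr, text_status) {
                $('#content').html('<div class="error">The admin UI requires the /admin/* ' +
                  'request handlers; request failed (' + (xhr.status || text_status) + ').</div>');
              }
            });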

          Mark Miller added a comment -

          On the solrcloud branch I'm running into a strange problem... I cannot view the cloud panel because it's claiming that it cannot parse the JSON from http://192.168.1.200:8983/solr/admin/cores?wt=json

          The JSON is valid though, so I am scratching my head. It produces 'JSON.parse bad escaped character'.

          {"responseHeader":{"status":0,"QTime":237},"status":{"":{"name":"","instanceDir":"solr/./","dataDir":"solr/./data/","startTime":"2012-01-07T21:44:09.427Z","uptime":4449652,"index":{"numDocs":40226,"maxDoc":49839,"version":1325969262665,"segmentCount":17,"current":true,"hasDeletions":true,"directory":"org.apache.lucene.store.MMapDirectory:org.apache.lucene.store.MMapDirectory@/media/ext3space/workspace/SolrCloud/solr/example/solr/data/index lockFactory=org.apache.lucene.store.NativeFSLockFactory@7e123c26","lastModified":"2012-01-07T21:52:20Z","sizeInBytes":990880948,"size":"944.98 MB"}}}}
          
          Mark Miller added a comment -

          One thing missing from the old UI for the ZooKeeper view - you can no longer see the data at each node (or at least I have not figured out how) - just the node listing.

          Antony Stubbs added a comment -

          It seems the UI for the DataImportHandler doesn't pass through the clean, optimize or commit commands when clicking execute. No matter what I select, it only passes through the "command".

          Robert Reynolds added a comment -

          I haven't seen any discussion of the luke problem since back in August (comments by Ryan and David, with one follow-up by Stefan). I've run into this recently and wanted to add some data about just how horrible this problem is. I timed how long it took to complete loading of the "Statistics" panel, which evidently requires results from luke, which seems to read the entire index to provide them. I also looked a bit at the CPU and I/O behavior of the nodes while this operation was going on.

          At the time, my nodes had from 13 million to 23.5 million documents each. The operation took from 28 minutes to 46 minutes. During this time, significant CPU was consumed on the node; I wasn't careful in collecting this data but my recollection is 25%-50% utilization. There was significant I/O the entire time, apparently due to reading the whole index. Furthermore, navigating away from the page does not halt the operation.

          As things stand, if someone navigates to this web page they kick off an operation that will significantly affect performance on my nodes for half an hour to an hour. Are there plans to implement any of the triage ideas floated by Ryan/David?

          Mark Miller added a comment -

          Hey Robert - perhaps we should open up a new bug for this issue? It sounds pretty nasty.

          Erick Erickson added a comment -

          I can reproduce this locally, and have raised a JIRA (SOLR-3094)

          Erick Erickson added a comment -

          re: SOLR-3094. If someone with javascript skills has the time/energy to help out with SOLR-3094, it would be awesome. I'm flying blind here. I can handle the LukeRequestHandler stuff, but it'll take a long time for me to figure out the javascript side.

          Essentially, this problem makes the new UI unusable for any large index.

          Erick Erickson added a comment -

          How, in general, do we want to carry the new Admin UI forward? It needs some love if it's ever going to replace the old UI, meanwhile we're stuck with having to either maintain both or have features in one but not the other. My javascript skills are rudimentary, but I'd be happy to help if someone who does have js expertise wants to handle the UI side. Perhaps I can be useful on the Solr/Lucene side in terms of getting info back from Solr and committing the results...

          I'm not sure how many people can work on it simultaneously; it looks like there are just a few files, so it may be pretty easy to step on each other.

          Any suggestions/volunteers?

          David Smiley added a comment -

          I'm competent with JavaScript and the popular jQuery library, but I'm unfamiliar with the others used here, like Sammy. Sammy appears to be a key part of this UI. I don't like to do front-end development, honestly, but I do it when needed.

          I modified script.js in order to address some other issue and I was shocked to see that the javascript for this new UI is in one massive javascript file with 4632 lines! IMO that is simply unacceptable; it must be broken up to be more maintainable. Perhaps it would be broken up by navigation tabs/pages.

          Can someone (Ryan? Stefan?) articulate the overall approach to the design of the UI from an implementation perspective (not the visual)?

          Ryan McKinley added a comment -

          I added SOLR-3121 to tackle the specific issues around speed...

          Stefan Matheis (steffkes) added a comment -

          One thing missing from the old UI for the ZooKeeper view - you can no longer see the data at each node (or at least I have not figured out how) - just the node listing.

          Mark, would you mind having a look at SOLR-3116? Erick created an issue for that, and I've attached a quick draft.

          Stefan Matheis (steffkes) added a comment -

          I modified script.js in order to address some other issue and I was shocked to see that the javascript for this new UI is in one massive javascript file with 4632 lines! IMO that is simply unacceptable; it must be broken up to be more maintainable. Perhaps it would be broken up by navigation tabs/pages.

          David, you're completely right. Initially there were no thoughts about 'how will it ever work' .. I just started to hack around to push things forward. Actually I'm trying to integrate the current svn changes into my local version and go ahead with http://requirejs.org/ to split the files. I'll push this version to my github repo, so we can have a look at it and decide if that will be okay for the future.
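          To make the proposed split concrete, here is the rough shape of one per-tab module under a require.js layout; the file name, dependencies, route and endpoint usage are illustrative, not the layout in Stefan's repo.

            // Sketch only: one tab per file, each file a require.js module that
            // the central app wires to its route. Names are illustrative.
            // e.g. js/scripts/dataimport.js
            define([ 'jquery' ], function ($) {

              // called by the app when the #/<core>/dataimport route is entered
              return function (context, core_basepath) {
                $.getJSON(core_basepath + '/dataimport?command=status&wt=json', function (status) {
                  // render the status panel for this tab ...
                });
              };

            });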

          Stefan Matheis (steffkes) added a comment -

          So, there we go: https://github.com/steffkes/solr-admin/commit/67e9807c2ed6a19064fb0a0a3ad941a3b0e10852 – thoughts about this structure? More usable for people who would like to contribute stuff?

          Erick Erickson added a comment -

          Stefan:

          Warning, I'm both Git and Javascript challenged...

          But even so, just browsing the way you've broken it up makes me much less frightened about jumping in and trying to change things.

          Is there any way you could make an SVN patch and attach it to this JIRA? I'd be happy to apply it locally and put it through some paces, especially if I could try out SOLR-3116 too. Or just point me at the right Git instructions – is there a good way to just overlay a set of Git changes on an SVN checkout?

          Stefan Matheis (steffkes) added a comment -

          Erick, the github repo was only used to expose the new structure .. I will create a patch later on; ofc SOLR-3116 is already included. It will require SOLR-3155 to be committed first - otherwise the Cloud tab will not work.

          Ryan McKinley added a comment -

          Stefan – this looks much better!

          Stefan Matheis (steffkes) added a comment -

          So, there we go. Based on SVN Rev 1292870, mainly changing the file structure - hopefully no change is missing. Otherwise please tell me.

          I'll try to add an additional ant build target for applying an r.js step - so we'll have only one css/js file to load for the "endusers" .. and my idea was to include a license paragraph in these files; would this be enough, or should I just add it to every single file which ships with Solr (and has no other license yet)?

          Ryan McKinley added a comment -

          I added this in #1292908

          thanks Stefan!

          Stefan Matheis (steffkes) added a comment -

          We'll need the following external Tools/Libs:

          • r.js
          • closure/compiler.jar
          • rhino/js.jar

          The attached ant target (patch also based on Rev 1292870) is just a sample, but it's already working if the files exist.

          Could I get some help there? Especially with how to package the generated files into the war and so on?
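          For what it's worth, the ant target would essentially drive an r.js build profile along these lines (paths and module names are placeholders); optimize: 'closure' is what ties in the closure/compiler.jar and rhino/js.jar listed above.

            // Sketch only: r.js build profile producing a single combined,
            // minified JS file for end users. Paths are placeholders.
            ({
              baseUrl: 'web/js',
              name: 'main',                 // top-level module that pulls in the per-tab scripts
              out: 'build/web/js/main.js',  // single file shipped in the war
              optimize: 'closure'           // run under Rhino with the Closure Compiler
            })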

          David Smiley added a comment -

          Much better Stefan. I'd like to see further refactorings:

          • I see that the code is using 4-space indentation levels whereas Lucene's standard is 2.
          • Although the code is now broken down into logically organized files, there are rather extreme levels of indentation that make the code hard to read. cloud.js goes 14 indentation levels deep, for example. That is simply too many; see if you can keep it within 10 at most.

          r.js would be nice but I think it's low priority given this is an admin UI.

          Ryan McKinley added a comment -

          On the r.js stuff... what is the advantage? Is it just to optimize the load times?

          For the admin UI, I think we should optimize readability/maintainability over load time.

          Erick Erickson added a comment -

          I'm reluctant to introduce more jars unless they're absolutely necessary, and given that the admin UI is running locally, if the new jars are only optimizing load times, I think we should skip them.

          So echoing Ryan, is there a major advantage here?

          Ryan & Stefan:
          I'm getting "Loading of zookeeper failed with "parsererror" (Unexpected token)" when I try to go into the cloud section of the admin UI, but only when I start it up with numShards=<more than 1>. Is this the problem referred to in SOLR-3155? It looks like Ryan checked all this in yesterday, so I'm assuming that an update/build today has all the patches necessary for the servlet to do its tricks; it's just a matter of getting the JSON right...

          Stefan Matheis (steffkes) added a comment - edited

          David, I will use the 2-space rule from now on and replace the existing code; I'm preparing a patch for it. Regarding the indentation: I'll see what is possible.

          Ryan, mainly performance, yes .. but additionally it resolves the css @import statements, which are (afaik) not completely supported on all Internet Explorer versions. If performance does not matter, we could solve that through a real <link href="file.css" /> for each needed stylesheet. For the js part, the current loading is fine and does not need a replacement.

          – Edit

          Okay, we'll skip the r.js thingy; I'll update the loading of the css files.

          Stefan Matheis (steffkes) added a comment -

          Erick, yes that is (or at least should be) SOLR-3155 .. I don't know what noggit exactly does; perhaps it's not enough to get valid JSON responses in every case. If there's no private stuff inside, could you capture the JSON responses and attach them as a file to this ticket?

          I'll build another time tomorrow and check the output for every file in the zookeeper tree.

          Ryan McKinley added a comment -

          Compared with r.js, I think <link href="file.css" /> is a better solution for this community.

          - - -

          Erick, the zookeeper problems you see are likely based on the fact that SOLR-3155 is not yet committed.

          I have not yet built a zookeeper setup... so I have been unable to test it.

          Erick Erickson added a comment - edited

          SOLR-3155 is committed now, and it looks to have fixed the issue I was having, so you should get that with an update. The patch I put up has a minor change to alphabetize stuff. One line.

          Ryan:
          Putting up a rudimentary cluster is surprisingly easy; Mark Miller's instructions here: http://wiki.apache.org/solr/SolrCloud will get you up and running in 10 minutes. I was pleasantly surprised; I expected there to be more configuration... Basically copy example to example2 and copy/paste the startup commands he's provided.

          Erick Erickson added a comment -

          Moving the rest of the new UI development to SOLR-3162

          Mark Miller added a comment -

          There are actually shell scripts in solr/cloud-dev that will auto start a small cluster.


            People

            • Assignee: Ryan McKinley
            • Reporter: Ryan McKinley
            • Votes: 1
            • Watchers: 3
