Hadoop Common
  HADOOP-2896

Using transient Jetty servers as GUIs is a bad idea

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 0.17.0
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

      Description

      Using transient Jetty servers (i.e. ones that last 30 minutes) is a very poor replacement for a GUI. I would much rather have bin/hadoop job -history out-dir print a textual summary than start a Jetty server on the client machine that needs to be queried by the user.

        Issue Links

          Activity

          Owen O'Malley made changes -
          Component/s mapred [ 12310690 ]
          Nigel Daley made changes -
          Status Resolved [ 5 ] Closed [ 6 ]
          Nigel Daley made changes -
          Fix Version/s 0.17.0 [ 12312913 ]
          Hudson added a comment - Integrated in Hadoop-trunk #431 (See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/431/ )
          Devaraj Das made changes -
          Resolution Duplicate [ 3 ]
          Status Open [ 1 ] Resolved [ 5 ]
          Devaraj Das added a comment -

          Fixed as part of HADOOP-2901

          Amareshwari Sriramadasu added a comment -

          The textual summary for the command hadoop job -history can be described as follows:

          bin/hadoop job -history <outputdir> can print the useful data for the user, i.e.
          1. print Job Details
          2. print Task Summary - consisting of the number of total, successful, failed and killed map/reduce tasks.
          3. print Job Analysis - similar data as in the "Analyse This Job" link in the job history.
          4. print Failed and Killed map/reduce tasks.
          5. print failed/killed attempts on nodes - consisting of the host name and a comma-separated list of the task attempts that failed or were killed on that node.
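          The Task Summary in item 2 could come from a simple aggregation over parsed history records. A minimal sketch in Python, assuming a hypothetical flat record shape (task type plus final status) rather than the actual JobHistory file format:

```python
from collections import Counter

def task_summary(tasks):
    """Count total/successful/failed/killed tasks per type.

    `tasks` is a list of (task_type, status) pairs, e.g.
    ("MAP", "SUCCESS") -- a hypothetical simplification of
    parsed job-history records, not the real file format.
    """
    counts = Counter()
    for task_type, status in tasks:
        counts[(task_type, "TOTAL")] += 1   # every task counts toward the total
        counts[(task_type, status)] += 1    # plus its final status bucket
    return counts

# Sample (made-up) records for illustration.
tasks = [
    ("MAP", "SUCCESS"), ("MAP", "SUCCESS"), ("MAP", "FAILED"),
    ("REDUCE", "SUCCESS"), ("REDUCE", "KILLED"),
]
summary = task_summary(tasks)
for task_type in ("MAP", "REDUCE"):
    print("%s: total=%d success=%d failed=%d killed=%d" % (
        task_type,
        summary[(task_type, "TOTAL")],
        summary[(task_type, "SUCCESS")],
        summary[(task_type, "FAILED")],
        summary[(task_type, "KILLED")],
    ))
```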

          Since the job history JSPs have data about all the tasks and task attempts, and that data can be very large, we can take an option from the user on whether to print all the data. So,
          bin/hadoop job -history all <outputdir> can print the following:
          1. print Job Details
          2. print Task Summary - consisting of the number of total, successful, failed and killed map/reduce tasks.
          3. print Job Analysis - similar data as in the "Analyse This Job" link in the job history.
          4. print Failed and Killed map/reduce tasks.
          5. print Successful map/reduce tasks.
          6. print all map/reduce task attempts - along with the hostname on which each ran.
          7. print failed/killed attempts on nodes - consisting of the host name and a comma-separated list of the task attempts that failed or were killed on that node.
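          Item 7 amounts to grouping failed/killed attempt ids by host. Another minimal sketch, again assuming a hypothetical (hostname, attempt_id, status) record shape for illustration:

```python
from collections import defaultdict

def failed_attempts_by_node(attempts):
    """Group failed/killed attempt ids by the host they ran on.

    `attempts` is a list of (hostname, attempt_id, status) tuples --
    a hypothetical simplification of parsed task-attempt records.
    Returns {hostname: "id1,id2,..."} with only FAILED/KILLED attempts.
    """
    by_node = defaultdict(list)
    for host, attempt_id, status in attempts:
        if status in ("FAILED", "KILLED"):
            by_node[host].append(attempt_id)
    # Render each node's attempts as a comma-separated list, per item 7.
    return {host: ",".join(ids) for host, ids in by_node.items()}

# Sample (made-up) attempt records for illustration.
attempts = [
    ("node1", "attempt_0001_m_000000_0", "FAILED"),
    ("node1", "attempt_0001_m_000001_0", "SUCCESS"),
    ("node2", "attempt_0001_r_000000_0", "KILLED"),
    ("node1", "attempt_0001_m_000002_0", "KILLED"),
]
report = failed_attempts_by_node(attempts)
```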

          Thoughts?

          Amareshwari Sriramadasu added a comment -

          To keep the job starter from starting two info servers, this issue has to be fixed first.

          Amareshwari Sriramadasu made changes -
          Link This issue is blocked by HADOOP-2901 [ HADOOP-2901 ]
          Amareshwari Sriramadasu made changes -
          Assignee Amareshwari Sri Ramadasu [ amareshwari ]
          Owen O'Malley created issue -

            People

            • Assignee:
              Amareshwari Sriramadasu
              Reporter:
              Owen O'Malley
            • Votes: 0
            • Watchers: 0
