Hadoop HDFS — HDFS-6407

Add sorting and pagination in the datanode tab of the NN Web UI

    Details

    • Hadoop Flags:
      Reviewed

      Description

      The old UI supported clicking on a column header to sort on that column. The new UI seems to have dropped this very useful feature.

      There are a few tables in the NameNode UI that display datanode information, directory listings, and snapshots.
      When there are many items in these tables, it is useful to have the ability to sort on the different columns.

      Attachments

      1. browse_directory.png
        67 kB
        Benoy Antony
      2. datanodes.png
        205 kB
        Benoy Antony
      3. snapshots.png
        93 kB
        Benoy Antony
      4. HDFS-6407.patch
        48 kB
        Benoy Antony
      5. 002-datanodes.png
        229 kB
        Benoy Antony
      6. 002-datanodes-sorted-capacityUsed.png
        226 kB
        Benoy Antony
      7. 002-filebrowser.png
        158 kB
        Benoy Antony
      8. 002-snapshots.png
        183 kB
        Benoy Antony
      9. HDFS-6407-002.patch
        106 kB
        Benoy Antony
      10. HDFS-6407-003.patch
        104 kB
        Benoy Antony
      11. HDFS-6407.4.patch
        95 kB
        Chang Li
      12. HDFS-6407.5.patch
        96 kB
        Chang Li
      13. sorting table.png
        227 kB
        Chang Li
      14. HDFS-6407.6.patch
        4 kB
        Chang Li
      15. sorting 2.png
        139 kB
        Chang Li
      16. HDFS-6407.7.patch
        4 kB
        Chang Li
      17. HDFS-6407.008.patch
        98 kB
        Haohui Mai
      18. HDFS-6407.009.patch
        98 kB
        Haohui Mai
      19. HDFS-6407.010.patch
        99 kB
        Haohui Mai
      20. HDFS-6407.011.patch
        99 kB
        Chang Li

        Issue Links

          Activity

          raviprak Ravi Prakash added a comment -

          Hi Benoy! The first line of the minified file (https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery.dataTables.min.js#L1) contains the version. If you feel that's not adequate, please leave a comment on HDFS-9084 and I can make the change there.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #280 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/280/)
          HDFS-6407. Add sorting and pagination in the datanode tab of the NN Web UI. Contributed by Haohui Mai. (wheat9: rev 456e901a4c5c639267ee87b8e5f1319f256d20c2)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.css
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
          • hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.js
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery.dataTables.min.js
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #2218 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2218/)
          HDFS-6407. Add sorting and pagination in the datanode tab of the NN Web UI. Contributed by Haohui Mai. (wheat9: rev 456e901a4c5c639267ee87b8e5f1319f256d20c2)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.js
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
          • hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery.dataTables.min.js
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.css
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #2237 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2237/)
          HDFS-6407. Add sorting and pagination in the datanode tab of the NN Web UI. Contributed by Haohui Mai. (wheat9: rev 456e901a4c5c639267ee87b8e5f1319f256d20c2)

          • hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.css
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.js
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery.dataTables.min.js
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #288 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/288/)
          HDFS-6407. Add sorting and pagination in the datanode tab of the NN Web UI. Contributed by Haohui Mai. (wheat9: rev 456e901a4c5c639267ee87b8e5f1319f256d20c2)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery.dataTables.min.js
          • hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.js
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.css
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #291 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/291/)
          HDFS-6407. Add sorting and pagination in the datanode tab of the NN Web UI. Contributed by Haohui Mai. (wheat9: rev 456e901a4c5c639267ee87b8e5f1319f256d20c2)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.js
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.css
          • hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery.dataTables.min.js
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #1021 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/1021/)
          HDFS-6407. Add sorting and pagination in the datanode tab of the NN Web UI. Contributed by Haohui Mai. (wheat9: rev 456e901a4c5c639267ee87b8e5f1319f256d20c2)

          • hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.js
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery.dataTables.min.js
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.css
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-trunk-Commit #8313 (See https://builds.apache.org/job/Hadoop-trunk-Commit/8313/)
          HDFS-6407. Add sorting and pagination in the datanode tab of the NN Web UI. Contributed by Haohui Mai. (wheat9: rev 456e901a4c5c639267ee87b8e5f1319f256d20c2)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.js
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.css
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
          • hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery.dataTables.min.js
          wheat9 Haohui Mai added a comment -

          I've committed the patch to trunk and branch-2. Thanks all for the reviews and the contribution.

          benoyantony Benoy Antony added a comment -

          It would be good to specify the version information of the datatables component; this will help in maintaining this functionality.
          For the other JS components, the version information is included in the file name.

          raviprak Ravi Prakash added a comment -

          The patch looks good to me. +1.

          lichangleo Chang Li added a comment -

          Haohui Mai, how soon could you check this code in? Are you still waiting for more reviews?

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 15m 15s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 tests included 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 javac 7m 55s There were no new javac warning messages.
          +1 javadoc 10m 0s There were no new javadoc warning messages.
          +1 release audit 0m 22s The applied patch does not increase the total number of release audit warnings.
          +1 whitespace 0m 0s The patch has no lines that end in whitespace.
          +1 install 1m 22s mvn install still works.
          +1 eclipse:eclipse 0m 35s The patch built with eclipse:eclipse.
          +1 native 3m 7s Pre-build of native portion
          -1 hdfs tests 175m 1s Tests failed in hadoop-hdfs.
              213m 46s  



          Reason Tests
          Failed unit tests hadoop.hdfs.TestAppendSnapshotTruncate
            hadoop.hdfs.web.TestWebHDFS
            hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
          Timed out tests org.apache.hadoop.cli.TestHDFSCLI



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12749927/HDFS-6407.011.patch
          Optional Tests javadoc javac unit
          git revision trunk / 7c796fd
          hadoop-hdfs test log https://builds.apache.org/job/PreCommit-HDFS-Build/11966/artifact/patchprocess/testrun_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/11966/testReport/
          Java 1.7.0_55
          uname Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/11966/console

          This message was automatically generated.

          lichangleo Chang Li added a comment -

          OK. +1 (non-binding)

          wheat9 Haohui Mai added a comment -

          The discussion on non-DFS usage and the sorting of the columns should be separated. Please file another JIRA for the feature request.

          lichangleo Chang Li added a comment -

          Added the Non DFS usage column back in the .11 patch.
          Haohui Mai, Nathan Roberts, please help review the .11 patch. Thanks!

          lichangleo Chang Li added a comment -

          Haohui Mai, thanks for the patch! The code looks good. One preference I have is to add the Non DFS column back. Currently it is shown in a popup, so we can't sort on that information, and we rely on that data sometimes.

          wheat9 Haohui Mai added a comment -

          The v10 patch allows sorting based on the status and the name of the data node.

          Benoy Antony, Nathan Roberts. Does the patch look good to you?

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 15m 29s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 tests included 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 javac 7m 46s There were no new javac warning messages.
          +1 javadoc 9m 50s There were no new javadoc warning messages.
          +1 release audit 0m 22s The applied patch does not increase the total number of release audit warnings.
          +1 whitespace 0m 0s The patch has no lines that end in whitespace.
          +1 install 1m 21s mvn install still works.
          +1 eclipse:eclipse 0m 34s The patch built with eclipse:eclipse.
          +1 native 3m 3s Pre-build of native portion
          -1 hdfs tests 66m 52s Tests failed in hadoop-hdfs.
              105m 20s  



          Reason Tests
          Failed unit tests hadoop.hdfs.web.TestWebHdfsFileSystemContract
          Timed out tests org.apache.hadoop.hdfs.TestHDFSFileSystemContract



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12748116/HDFS-6407.010.patch
          Optional Tests javadoc javac unit
          git revision trunk / c5caa25
          hadoop-hdfs test log https://builds.apache.org/job/PreCommit-HDFS-Build/11876/artifact/patchprocess/testrun_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/11876/testReport/
          Java 1.7.0_55
          uname Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/11876/console

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 21m 23s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 tests included 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 javac 11m 33s There were no new javac warning messages.
          +1 javadoc 11m 26s There were no new javadoc warning messages.
          +1 release audit 0m 23s The applied patch does not increase the total number of release audit warnings.
          +1 whitespace 0m 0s The patch has no lines that end in whitespace.
          +1 install 1m 22s mvn install still works.
          +1 eclipse:eclipse 0m 33s The patch built with eclipse:eclipse.
          +1 native 3m 11s Pre-build of native portion
          -1 hdfs tests 172m 48s Tests failed in hadoop-hdfs.
              222m 44s  



          Reason Tests
          Failed unit tests hadoop.hdfs.TestFileConcurrentReader
            hadoop.hdfs.server.datanode.TestDataNodeMetrics
            hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
            hadoop.hdfs.TestReplaceDatanodeOnFailure



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12747896/HDFS-6407.009.patch
          Optional Tests javadoc javac unit
          git revision trunk / ddc867ce
          hadoop-hdfs test log https://builds.apache.org/job/PreCommit-HDFS-Build/11871/artifact/patchprocess/testrun_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/11871/testReport/
          Java 1.7.0_55
          uname Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/11871/console

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          -1 patch 0m 0s The patch command could not apply the patch during dryrun.



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12746894/HDFS-6407.008.patch
          Optional Tests javadoc javac unit
          git revision trunk / ddc867ce
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/11870/console

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          -1 patch 0m 0s The patch command could not apply the patch during dryrun.



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12746894/HDFS-6407.008.patch
          Optional Tests javadoc javac unit
          git revision trunk / 1d3026e
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/11815/console

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          -1 patch 0m 0s The patch command could not apply the patch during dryrun.



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12746894/HDFS-6407.008.patch
          Optional Tests javadoc javac unit
          git revision trunk / 1d3026e
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/11814/console

          This message was automatically generated.

          wheat9 Haohui Mai added a comment - - edited

          I uploaded the v8 patch that implements sorting, pagination and filtering for the datanodes. The basic idea is to attach the raw value as an attribute (i.e. ng-value) to the cell, and to sort based on the attribute instead of the text. This approach eliminates all the parsing / rendering hacks that had to be put in to accommodate the output of the various pretty printers.

          Note: the current patch depends on HDFS-8816.
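
          The attribute-based idea above can be sketched without a real DOM or the actual patch (the row/cell shapes here are hypothetical, for illustration only):

          ```javascript
          // Sketch of attribute-based sorting (hypothetical data model, not the
          // actual patch): each cell carries the raw value alongside its pretty
          // text, and the comparator reads only the raw value.
          function sortRowsByValue(rows, col) {
            // rows: [{ cells: [{ text: '1.2 TB', value: 1319413953331 }, ...] }, ...]
            return rows.slice().sort(function (a, b) {
              return a.cells[col].value - b.cells[col].value; // never parses the text
            });
          }

          var rows = [
            { cells: [{ text: '1.2 TB', value: 1319413953331 }] },
            { cells: [{ text: '900 GB', value: 966367641600 }] }
          ];
          var sorted = sortRowsByValue(rows, 0); // ascending: the '900 GB' row first
          ```

          Because the comparator only ever touches the raw number, the display string can change freely without breaking sort order.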

          lichangleo Chang Li added a comment -

          Please see sorting 2.png for the latest effect.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 0m 0s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 release audit 0m 18s The applied patch does not increase the total number of release audit warnings.
          -1 whitespace 0m 0s The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix.
              0m 22s  



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12746713/HDFS-6407.7.patch
          Optional Tests  
          git revision trunk / ee98d63
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/11799/artifact/patchprocess/whitespace.txt
          Java 1.7.0_55
          uname Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/11799/console

          This message was automatically generated.

          lichangleo Chang Li added a comment -

          Haohui Mai,
          I just updated my patch. I slightly modified my render function. My previous patch did some slight parsing because the "Block Pool Used" column contained compound data: both "blockPoolUsed" and "blockPoolUsedPercent". There is already a Used column, and I don't see the point of displaying blockPoolUsed again. Moreover, sorting on the compound of blockPoolUsed and blockPoolUsedPercent is confusing, because a node can have a low blockPoolUsed size but a high blockPoolUsedPercent. So I only display the used percent, which also eliminates the only parsing I was doing.
          Below is my latest render function. When sorting, it uses only the unformatted data in the table. When displaying, I use the same formatting code as in dfs-dust.js. Let me know what other concerns you have about separating the control and the view. Thanks.

          "columnDefs": [ {
                          "targets": [3,4,5,6,8],
                          "render": function ( data, type, full, meta) {
                            var colIndex = meta.col;
                            var v;
                            if (type == 'display') {
                              if (colIndex == 8) {
                                return Math.round(v * 100) / 100 + '%';
                              }                    
                              var UNITS = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'ZB'];
                              var prev = 0, i = 0;
                              while (Math.floor(v) > 0 && i < UNITS.length) {
                                prev = v;
                                v /= 1024;
                                i += 1;
                              }
          
                              if (i > 0 && i < UNITS.length) {
                                v = prev;
                                i -= 1;
                              }
                              return Math.round(v * 100) / 100 + ' ' + UNITS[i];
                            }
                            return v; 
                          }
                       }]
          
          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 0m 0s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 release audit 0m 20s The applied patch does not increase the total number of release audit warnings.
          -1 whitespace 0m 0s The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix.
              0m 24s  



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12746706/HDFS-6407.6.patch
          Optional Tests  
          git revision trunk / ee98d63
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/11796/artifact/patchprocess/whitespace.txt
          Java 1.7.0_55
          uname Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/11796/console

          This message was automatically generated.

          wheat9 Haohui Mai added a comment -

          Sure. I'll play around with it in the next couple days.

          daryn Daryn Sharp added a comment -

          That silly little bar for used capacity would be nice to have back, too.

          daryn Daryn Sharp added a comment -

          This is a rather important change for us. We rely heavily on the legacy UI for debugging storage imbalances, failed volumes, etc. When you have well over 5k nodes, it's not easy to eyeball an unsorted list.

          All that said, I'm not sure I fully understand the conversation, but Haohui Mai, can you please provide example code of how you would address your concern? It almost looks like that column should only include the percent, since an earlier column already includes the used capacity? I think that would avoid the re-parsing of formatted data.

          wheat9 Haohui Mai added a comment -

          I think my .5 patch actually separate the control and views.

          If this is the case, can you tell me what the following code is trying to achieve?

          +                  var colIndex = meta.col;
          +                  var v;
          +                  if (colIndex == 8) { 
          +                    var comp = data;
          +                    var res = comp.split(" ");
          +                    var percent = res[1];
          +                    v = res[0];
          +                  } else {
          +                    v = data;
          +                  }
          +
          

          Don't get me wrong – datatable is a pretty good widget, but parsing the data back out of the table itself is really bad. I think it might work if you specify the data attribute directly and are careful about sorting orders.

          lichangleo Chang Li added a comment -

          This change is unacceptable as it will hinder readability as the numbers can be as large as couple terabytes

          Haohui Mai, sorry if there is a misunderstanding, but it seems you are not reading my work right.
          Though I disabled the fmt formatting in the dust template, I let dataTable render the table and display the values with the right units. I've attached sorting table.png so you can see the effect.

          Approaches that really separate the controls and the views are definitely the way to go

          I think my .5 patch actually separates the control and the view. It's dataTable that enables me to do that, with its orthogonal data feature: sorting and display take different inputs. It may help you understand my work to read about dataTable's orthogonal data: https://www.datatables.net/manual/orthogonal-data.
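
          To illustrate the orthogonal-data contract (a simplified sketch, not code from any attached patch): DataTables passes a type argument to the render callback, so the same raw cell value can drive sorting while being pretty-printed for display.

          ```javascript
          // Simplified orthogonal-data render callback: raw number for sorting,
          // human-readable string for display (mirroring the fmt_bytes logic in
          // dfs-dust.js; DataTables itself is not needed to see the contract).
          function renderBytes(data, type) {
            if (type === 'sort' || type === 'type') {
              return Number(data);  // raw value drives ordering and type detection
            }
            // 'display' (and 'filter'): pretty-print with a binary unit suffix
            var UNITS = ['B', 'KB', 'MB', 'GB', 'TB', 'PB'];
            var v = Number(data), i = 0;
            while (v >= 1024 && i < UNITS.length - 1) {
              v /= 1024;
              i += 1;
            }
            return Math.round(v * 100) / 100 + ' ' + UNITS[i];
          }
          ```

          A column wired with such a "render" callback sorts on the raw byte counts while showing strings like '2 KB' or '1.5 GB'.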

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 0m 0s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 release audit 0m 15s The applied patch generated 2 release audit warnings.
          -1 whitespace 0m 0s The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix.
              0m 18s  



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12746622/HDFS-6407.5.patch
          Optional Tests  
          git revision trunk / 06e5dd2
          Release Audit https://builds.apache.org/job/PreCommit-HDFS-Build/11791/artifact/patchprocess/patchReleaseAuditProblems.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/11791/artifact/patchprocess/whitespace.txt
          Java 1.7.0_55
          uname Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/11791/console

          This message was automatically generated.

          wheat9 Haohui Mai added a comment -
          -    <td>{capacity|fmt_bytes}</td>
          -    <td>{used|fmt_bytes}</td>
          -    <td>{nonDfsUsedSpace|fmt_bytes}</td>
          -    <td>{remaining|fmt_bytes}</td>
          +    <td>{capacity}</td>
          +    <td>{used}</td>
          +    <td>{nonDfsUsedSpace}</td>
          +    <td>{remaining}</td>
               <td>{numBlocks}</td>
          -    <td>{blockPoolUsed|fmt_bytes} ({blockPoolUsedPercent|fmt_percentage})</td>
          +    <td>{blockPoolUsed} ({blockPoolUsedPercent|fmt_percentage})</td>
          

          This change is unacceptable, as it will hinder readability: the numbers can be as large as a couple of terabytes.

          Also, losing the ability to sort in 2.7 and the loss of the legacy UI are preventing my company from using 2.7

          Also, can you elaborate on why losing the legacy UI prevents you from using 2.7?

          Patches are definitely welcome but as I pointed out in the previous comments it is the wrong direction to put sorting abilities using datatables. Approaches that really separate the controls and the views are definitely the way to go.

          lichangleo Chang Li added a comment -

          Haohui Mai
          lichangleo Chang Li added a comment -

          parsing pretty-formatted data and sorting it

          I do not parse the formatted data and then sort it.
          Currently the dust template pulls the data and formats it, but I disabled the formatting in my patch:

          -    <td>{capacity|fmt_bytes}</td>
          -    <td>{used|fmt_bytes}</td>
          -    <td>{nonDfsUsedSpace|fmt_bytes}</td>
          -    <td>{remaining|fmt_bytes}</td>
          +    <td>{capacity}</td>
          +    <td>{used}</td>
          +    <td>{nonDfsUsedSpace}</td>
          +    <td>{remaining}</td>
               <td>{numBlocks}</td>
          -    <td>{blockPoolUsed|fmt_bytes} ({blockPoolUsedPercent|fmt_percentage})</td>
          +    <td>{blockPoolUsed} ({blockPoolUsedPercent|fmt_percentage})</td>
          

          Thus the data pulled by dust is stored in the HTML table in its raw format, exactly as in JMX, so dataTable sorts the same values that JMX reports.
          When dataTable displays the data, I use its render function, which takes the raw value from the table and displays the formatted file size without changing the raw value stored in the HTML table.

          +                "render": function ( data, type, full, meta) {
          +                  var colIndex = meta.col;
          +                  var v;
          +                  if (colIndex == 8) { 
          +                    var comp = data;
          +                    var res = comp.split(" ");
          +                    var percent = res[1];
          +                    v = res[0];
          +                  } else {
          +                    v = data;
          +                  }
          +
          +                  if (type == 'display') {
          +                    //var v = data;
          +                    var UNITS = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'ZB'];
          +                    var prev = 0, i = 0;
          +                    while (Math.floor(v) > 0 && i < UNITS.length) {
          +                      prev = v;
          +                      v /= 1024;
          +                      i += 1;
          +                    }
          +
          +                    if (i > 0 && i < UNITS.length) {
          +                      v = prev;
          +                      i -= 1;
          +                    }
          +                    var size = Math.round(v * 100) / 100 + ' ' + UNITS[i];
          +                    if (colIndex == 8) {
          +                      return size + ' ' + percent;
          +                    } else {
          +                      return size;
          +                    }
          +                  }
          +                  return v; 
          +                }
          +             }]
          +          });
          

          I see you are worried about the error-prone parsing issue, but I don't see it in my patch. Also, losing the ability to sort in 2.7 and the loss of the legacy UI are preventing my company from using 2.7. I think we should treat this issue as critical and not postpone it anymore.

          wheat9 Haohui Mai added a comment -

          I still use the plugin datatable...

          I made it quite explicit why using datatable is a bad idea. Please see https://issues.apache.org/jira/browse/HDFS-6407?focusedCommentId=14232267&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14232267

          The key issue here is that the view does not map directly to the data. The only reasonable approach, to me, is to sort internally and to generate the data that powers the views, not to work around it by parsing pretty-formatted data and sorting it.
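
          A minimal sketch of that approach (hypothetical helper names, not from any attached patch): sort the raw JMX-style objects first, then generate the display strings, so nothing ever parses formatted text back.

          ```javascript
          // Sort the model on raw numbers, then format for the view.
          function fmtBytes(v) {
            var UNITS = ['B', 'KB', 'MB', 'GB', 'TB', 'PB'];
            var i = 0;
            while (v >= 1024 && i < UNITS.length - 1) {
              v /= 1024;
              i += 1;
            }
            return Math.round(v * 100) / 100 + ' ' + UNITS[i];
          }

          function viewRows(datanodes, key) {
            return datanodes
              .slice()
              .sort(function (a, b) { return b[key] - a[key]; }) // descending, raw values
              .map(function (dn) { return { name: dn.name, used: fmtBytes(dn.used) }; });
          }

          var rows = viewRows(
            [{ name: 'dn1', used: 1024 }, { name: 'dn2', used: 2048 }],
            'used'
          );
          ```

          The view rows are derived from, and never fed back into, the sorted model.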

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 0m 0s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 release audit 0m 17s The applied patch generated 2 release audit warnings.
          -1 whitespace 0m 0s The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix.
              0m 20s  



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12746563/HDFS-6407.4.patch
          Optional Tests  
          git revision trunk / 4025326
          Release Audit https://builds.apache.org/job/PreCommit-HDFS-Build/11788/artifact/patchprocess/patchReleaseAuditProblems.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/11788/artifact/patchprocess/whitespace.txt
          Java 1.7.0_55
          uname Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/11788/console

          This message was automatically generated.

          lichangleo Chang Li added a comment -

          Haohui Mai, could you please take a look at my patch and provide some feedback? I still use the plugin datatable, but I work around the sorting concerns you have by not letting the dust template do the formatting, and by letting datatable use the raw data to sort and display the data with units.

          nroberts Nathan Roberts added a comment -

          My understanding is that the legacy UI was removed in 2.7. With the legacy UI gone, we've lost very valuable functionality. I use the sort capability all the time to do things like: find nodes running different versions during a rolling upgrade, evaluate how the balancer is doing by sorting on capacity, find very full nodes to see how their disks are performing, and sort on Admin state to find all decommissioning nodes. I don't think it's a blocker for a release, but a loss of commonly used functionality can be very annoying for users.

          wheat9 Haohui Mai added a comment -

          Though it's nice to fix, it is not core HDFS functionality. Changing the priority back to Minor. Please feel free to bump the priority if you feel differently. Contributions are appreciated.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 0m 0s Pre-patch trunk compilation is healthy.
          -1 @author 0m 0s The patch appears to contain 2 @author tags which the Hadoop community has agreed to not allow in code contributions.
          -1 release audit 0m 14s The applied patch generated 3 release audit warnings.
          +1 whitespace 0m 0s The patch has no lines that end in whitespace.
              0m 17s  



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12730214/HDFS-6407-003.patch
          Optional Tests  
          git revision trunk / 98c2bc8
          Release Audit https://builds.apache.org/job/PreCommit-HDFS-Build/11750/artifact/patchprocess/patchReleaseAuditProblems.txt
          Java 1.7.0_55
          uname Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/11750/console

          This message was automatically generated.

          Hide
          lichangleo Chang Li added a comment -

          Benoy Antony is there any status on this issue?

          wheat9 Haohui Mai added a comment -
          +jQuery.fn.dataTable.ext.type.detect.unshift( function ( data ) {
          +        if ( typeof data !== 'string' ) {
          +                return null;
          +        }
          +
          +        var units = data.replace( /[\d\.]/g, '' ).toLowerCase().trim();
          +        if ( units !== '' && units !== 'b' && units !== 'kb' && units !== 'mb' && units !== 'gb' && units !== 'tb' && units !== 'pb' ) {
          +                return null;
          +        }
          +        return isNaN( parseFloat( data.trim() ) ) ?
          +                null :
          +                'file-size';
          +} );
          +
          +jQuery.fn.dataTable.ext.type.detect.unshift( function ( data ) {
          +        if ( typeof data !== 'string' ) {
          +                return null;
          +        }
          +
          +        var units = data.replace( /[\d\.\(\%\)]/g, '' ).toLowerCase().trim();
          +        if ( units !== '' && units !== 'b' && units !== 'kb' && units !== 'mb' && units !== 'gb' && units !== 'tb' && units !== 'pb' ) {
          +                return null;
          +        }
          +        return isNaN( parseFloat( data.trim() ) ) ?
          +                null :
          +                'file-size-percent';
          +} );
          +
          +/**
          + * When dealing with computer file sizes, it is common to append a post fix
          + * such as B, KB, MB or GB to a string in order to easily denote the order of
          + * magnitude of the file size. This plug-in allows sorting to take these
          + * indicates of size into account.
          + * 
          + * A counterpart type detection plug-in is also available.
          + *
          + *  @name File size
          + *  @summary Sort abbreviated file sizes correctly (8MB, 4KB, etc)
          + *  @author Allan Jardine - datatables.net
          + *
          + *  @example
          + *    $('#example').DataTable( {
          + *       columnDefs: [
          + *         { type: 'file-size', targets: 0 }
          + *       ]
          + *    } );
          + */
          +
          +jQuery.fn.dataTable.ext.type.order['file-size-pre'] = function ( data ) {
          +    var units = data.replace( /[\d\.]/g, '' ).toLowerCase().trim();
          +    var multiplier = 1;
          +
          +    if ( units === 'kb' ) {
          +        multiplier = 1000;
          +    }
          +    else if ( units === 'mb' ) {
          +        multiplier = 1000000;
          +    }
          +    else if ( units === 'gb' ) {
          +        multiplier = 1000000000;
          +    }
          +    else if ( units === 'tb' ) {
          +        multiplier = 1000000000000;
          +    }
          +    else if ( units === 'pb' ) {
          +        multiplier = 1000000000000000;
          +    }
          +    return parseFloat( data ) * multiplier;
          +};
          +
          +jQuery.fn.dataTable.ext.type.order['file-size-percent-pre'] = function ( data ) {
          +    var units = data.replace( /[\d\.\(\%\)]/g, '' ).toLowerCase().trim();
          +    var multiplier = 1;
          +
          +    if ( units === 'kb' ) {
          +        multiplier = 1000;
          +    }
          +    else if ( units === 'mb' ) {
          +        multiplier = 1000000;
          +    }
          +    else if ( units === 'gb' ) {
          +        multiplier = 1000000000;
          +    }
          +    else if ( units === 'tb' ) {
          +        multiplier = 1000000000000;
          +    }
          +    else if ( units === 'pb' ) {
          +        multiplier = 1000000000000000;
          +    }
          +    return parseFloat( data ) * multiplier;
          +};

          The data is available as 64-bit longs in the JMX output. The template engine generates the strings.

          Just to echo my previous comments – reparsing the string is error-prone and should be avoided.
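To sketch what sorting on the raw JMX values (rather than reparsing the rendered strings) could look like: the records can be ordered by their 64-bit byte counts before the template engine formats them. The `name` and `capacityUsed` field names below are illustrative, not taken from the actual JMX bean.

```javascript
// Sketch: sort datanode records on the raw byte counts from the JMX JSON
// *before* the template engine formats them into "92 GB"-style strings.
// Field names here are illustrative.
function sortByUsage(datanodes) {
  // slice() so the original array is left untouched
  return datanodes.slice().sort(function (a, b) {
    return a.capacityUsed - b.capacityUsed;   // plain numeric compare
  });
}

var dns = [
  { name: 'dn1', capacityUsed: 92 * 1e9 },    // 92 GB
  { name: 'dn2', capacityUsed: 110 * 1e9 },   // 110 GB
  { name: 'dn3', capacityUsed: 4 * 1e6 }      // 4 MB
];

// Numeric sorting on the longs yields dn3 < dn1 < dn2, an ordering that
// string-based sorting of the rendered values gets wrong.
```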

          raviprak Ravi Prakash added a comment -

          Thanks for all your work Benoy! I would also like to point out https://issues.apache.org/jira/browse/HDFS-8291 .

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 0m 0s Pre-patch trunk compilation is healthy.
          -1 @author 0m 0s The patch appears to contain 2 @author tags which the Hadoop community has agreed to not allow in code contributions.
          -1 release audit 0m 14s The applied patch generated 3 release audit warnings.
          +1 whitespace 0m 0s The patch has no lines that end in whitespace.
              0m 20s  



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12730214/HDFS-6407-003.patch
          Optional Tests  
          git revision trunk / 8f65c79
          Release Audit https://builds.apache.org/job/PreCommit-HDFS-Build/10783/artifact/patchprocess/patchReleaseAuditProblems.txt
          Java 1.7.0_55
          uname Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/10783/console

          This message was automatically generated.

          benoyantony Benoy Antony added a comment -

          Adjusted the page size to a lower number (instead of All) to ensure that pages get painted fast when there are lots of items.
          This is an existing problem when there are thousands of items to display.

          Sorting thousands of items introduces no noticeable delay, so no further optimization is required. Tested with 8,500 files in one directory.

          benoyantony Benoy Antony added a comment -

          The sorting seems to be correct for all the fields. Please let me know what makes you think otherwise and I can fix it.

          The purpose of this jira is to put back sorting on the datanodes tab. Since it was easy to enable sorting on the other tables using the plugin, sorting and pagination were added to those tables as well.

          The processing is done completely on the client side, so there is no impact on the server side. At this point, the process of enabling sorting and pagination for a table is simple and uniform. To justify giving up this simplicity, there would have to be some noticeable delay with the current approach.

          Tested with 6,000 files. It took time to render 6,000 items irrespective of whether sorting was enabled.
          If pagination is set, the items are painted instantly.

          In any case, no noticeable delay is caused by sorting, so I am not sure there is any point in optimizing this client-side logic.

          One enhancement that would let the page render fast is to set the default page size to a reasonably high number like 200. In cases where there are 10,000+ items to paint, the page renders fast if the page size is 200 items. If the user wants all items, they can select "All" to display all files, or search for the file across the other pages. I will update the patch with the page size set to 200 items for all tables.
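As a rough sketch, a 200-row default could be expressed with standard DataTables 1.10 options like `pageLength` and `lengthMenu`; the exact values and the table selector below are my illustration, not taken from the patch.

```javascript
// Illustrative DataTables configuration: default page size of 200 rows,
// with "All" still available from the length drop-down. pageLength,
// lengthMenu and order are standard DataTables 1.10 options; the
// selector mentioned below is hypothetical.
var dnTableOptions = {
  pageLength: 200,                        // render 200 rows per page by default
  lengthMenu: [
    [25, 50, 100, 200, -1],               // -1 is DataTables' "show all" value
    [25, 50, 100, 200, 'All']
  ],
  order: [[0, 'asc']]                     // default sort on the first column
};

// In the browser this would be applied as:
//   $('#table-datanodes').DataTable(dnTableOptions);
```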

          wheat9 Haohui Mai added a comment -

          IMO this is a correctness issue.

          I would prefer to address it at the very beginning, since parsing the string is error-prone.

          benoyantony Benoy Antony added a comment -

          That will be a good optimization. Can we please do it as an enhancement in a separate jira ?

          wheat9 Haohui Mai added a comment -

          The file sizes are available via webhdfs as longs – thus sorting is much better done at that level instead of parsing the rendered output and using the plugin.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 0m 0s Pre-patch trunk compilation is healthy.
          -1 @author 0m 0s The patch appears to contain 3 @author tags which the Hadoop community has agreed to not allow in code contributions.
          -1 release audit 0m 16s The applied patch generated 4 release audit warnings.
          +1 whitespace 0m 0s The patch has no lines that end in whitespace.
              0m 22s  



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12729784/HDFS-6407-002.patch
          Optional Tests  
          git revision trunk / 279958b
          Release Audit https://builds.apache.org/job/PreCommit-HDFS-Build/10506/artifact/patchprocess/patchReleaseAuditProblems.txt
          Java 1.7.0_55
          uname Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/10506/console

          This message was automatically generated.

          benoyantony Benoy Antony added a comment - - edited

          The patch is updated to use the datatables plugin (http://www.datatables.net/). The same plugin is also used in the Resource Manager UI. Please check the attached screenshots to see the sorting in action.

          The patch takes care of sorting based on file sizes. It also supports pagination on the client side.

          Notes about the sorting and pagination features added to the different tables in the Namenode UI:

          1. Uses the datatables plugin (http://www.datatables.net/).
          2. Uses the latest version, 1.10.7 (Yarn uses 1.9.4 and can potentially migrate to the latest version).
          3. Added logic to detect columns showing file sizes and sort them by size.
          4. Added custom logic to detect columns showing a file size plus a percentage and sort them by file size. This is used to detect and sort the “BlockPool Used” column, which displays values like 7.74 MB (10%).
          5. The default pagination is to display all the rows, to match the existing behavior. The user can select smaller page sizes using the drop-down.
          6. The following tables are set up to be sortable and pageable: datanodes, decommissioning nodes, datanode volume failures, snapshottable directories, snapshotted directories and file/dir listings.
          7. For file/dir listings, the default sort is on the name column (the last column). For snapshotted directories, the default sort is on the snapshot directory (the second column). For all other tables, the sort is on the first column.
          benoyantony Benoy Antony added a comment - - edited

          It is possible to add custom sorting to datatables plugin (used in yarn) to address these cases. A similar example is shown here:
          http://www.datatables.net/plug-ins/sorting/file-size.
          This can be modified to include TB and PB.
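The ordering logic from the linked plug-in, extended to cover TB and PB, distills to a plain function along these lines; `parseFileSize` is an illustrative name, and the decimal multipliers follow the linked plug-in.

```javascript
// Sketch of the file-size ordering logic from the linked plug-in,
// extended to cover TB and PB. Uses the plug-in's decimal multipliers.
function parseFileSize(data) {
  // Strip digits and dots, leaving only the unit suffix.
  var units = data.replace(/[\d\.]/g, '').toLowerCase().trim();
  var multipliers = {
    '': 1, 'b': 1,
    'kb': 1e3, 'mb': 1e6, 'gb': 1e9,
    'tb': 1e12, 'pb': 1e15
  };
  var m = multipliers[units];
  if (m === undefined) return NaN;   // not a recognized size string
  return parseFloat(data) * m;
}
```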

          wheat9 Haohui Mai added a comment -

          It will be a good idea to standardize on one sort/pagination plugin across hdfs and yarn UIs.

          Just to reiterate, I don't see a way to do it correctly by simply sorting rows in the HTML tables. What are the expected results of the following case?

          || DN || DFS Usage ||
          | dn1 | 92 GB |
          | dn2 | 110 GB |
          | dn3 | 4 MB |
          

          If you sort them by DFS Usage alphabetically (which is what the plugin does), you'll get:

          || DN || DFS Usage ||
          | dn2 | 110 GB |
          | dn3 | 4 MB |
          | dn1 | 92 GB |
          

          The results do not make sense to me. What this implies is that the sorting should be done on the JSON data, and at the same time the pagination is done.
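The mis-ordering in the tables above is easy to reproduce: JavaScript's default string sort compares the rendered values character by character, so "110 GB" sorts before "4 MB".

```javascript
// Default lexicographic sort of the rendered strings reproduces the
// wrong ordering from the table above: as characters, '1' < '4' < '9'.
var usages = ['92 GB', '110 GB', '4 MB'];
var alphabetical = usages.slice().sort();
// alphabetical is ['110 GB', '4 MB', '92 GB'] — not a size ordering.
```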

          benoyantony Benoy Antony added a comment -

          Just saw that the Yarn UI uses the datatables plugin, and it supports pagination as well. (http://www.datatables.net/examples/index)
          It will be a good idea to standardize on one sort/pagination plugin across hdfs and yarn UIs.

          Haohui Mai, let me know what you think about using the datatables plugin. I can use it to achieve sorting as part of this jira.
          Pagination can be tackled separately, if that's a requirement.

          benoyantony Benoy Antony added a comment -

          #1 can be fixed pretty easily by applying special logic for the columns which need custom sorting.
          #2 is not a plugin-specific issue, as the major delay is in client-server communication. Since sorting and regeneration happen on the client side, it should be quick. This can be easily tested.

          A standard solution for sorting is better than a custom solution for many reasons, including maintainability, usability and free enhancements.
          Pagination can be done independently of sorting.

          wheat9 Haohui Mai added a comment -

          There are a couple of issues that I can see in the current patch:

          • It fails to sort the DN table correctly. The NN UI uses customized formats on columns like disk space usage; since the plugin sorts them in alphabetical order, it does not give correct results.
          • It has responsiveness issues when there are thousands of DNs. With the plugin, the workflow is the following: the dust engine renders the information for all DNs into a table, then the plugin sorts all the rows and regenerates the table. As people have observed some lag when the cluster has thousands of DNs, the patch can make it worse.

          While I think (1) can be fixed fairly easily, for (2) I think the only reasonable solution is to bake both pagination and sorting into the template rendering, rather than doing them after the table has been propagated to the DOM. As I believe it does not require a lot of work, I should be able to spend some time playing around with it to see how far I can go.

          benoyantony Benoy Antony added a comment -

          Why not use the plugin?
          It's a widely used plugin, used in other Apache projects such as Storm. Based on its history, it's quite mature. It has a small size of 30 KB. It functioned very well during my testing. Instead of re-inventing the wheel, why not use a standard solution?

          Enabling sorting on a specific table with the plugin is quite straightforward, as I explained in the comment above.
          It is desirable to be able to sort any table. Currently, I added sorting on datanodes, snapshots and filesystem browsing. But if a table doesn't need sorting for some reason, it can easily be removed.

          BTW, pagination was never supported. It is normally required because the client doesn't have all the data, which is not the case here.

          wheat9 Haohui Mai added a comment -

          Thanks for working on this.

          My understanding is that only the datanode tab needs to be sorted and paginated, thus it should not affect other tables. Therefore I'm leaning towards a simpler solution instead of introducing a plugin. Let me experiment a little bit.

          benoyantony Benoy Antony added a comment -

          Haohui Mai, Could you please review this enhancement ?

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12683166/HDFS-6407.patch
          against trunk revision a4df9ee.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          -1 release audit. The applied patch generated 2 release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/8810//testReport/
          Release audit warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/8810//artifact/patchprocess/patchReleaseAuditProblems.txt
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8810//console

          This message is automatically generated.

          benoyantony Benoy Antony added a comment -

          To add sorting to a new table, the following needs to be done. (More details in http://mottie.github.io/tablesorter/docs/index.html)

          1. In the page containing table, include the css and js file

          <link rel="stylesheet" type="text/css" href="/static/tablesorter.default.css" />
          <script type="text/javascript" src="/static/jquery.tablesorter.min.js"> </script>
          

          2. Attach table sorter style on the table

          <table class="tablesorter-default">
          

          3. Make sure that the table has THEAD and TBODY tags

          4. Enable tablesorter to sort the table when the document is loaded.

          $(".tablesorter-default").tablesorter({sortList: [[0,0]]});
          
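          As a sketch, the four steps above combine into a minimal sortable page like the one below. The table id, column names, row data, and the jQuery script path are illustrative assumptions; only the tablesorter CSS/JS paths and the `tablesorter-default` class come from the steps above.

          ```html
          <!-- Minimal sketch combining steps 1-4 above.
               The jQuery path and table contents are illustrative. -->
          <html>
          <head>
            <link rel="stylesheet" type="text/css" href="/static/tablesorter.default.css" />
            <script type="text/javascript" src="/static/jquery.min.js"></script>
            <script type="text/javascript" src="/static/jquery.tablesorter.min.js"></script>
          </head>
          <body>
            <table class="tablesorter-default">
              <thead>
                <tr><th>Node</th><th>Capacity Used</th></tr>
              </thead>
              <tbody>
                <tr><td>dn1</td><td>10</td></tr>
                <tr><td>dn2</td><td>4</td></tr>
              </tbody>
            </table>
            <script type="text/javascript">
              // Sort by the first column, ascending, once the DOM is ready.
              $(function() {
                $(".tablesorter-default").tablesorter({sortList: [[0, 0]]});
              });
            </script>
          </body>
          </html>
          ```

          Note that step 3 matters here: tablesorter attaches click handlers to the header cells inside THEAD and reorders only the rows inside TBODY.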
          Benoy Antony added a comment -

          Attaching a patch which makes the tables on the Namenode UI sortable.
          The Tablesorter jQuery plugin is used (http://mottie.github.io/tablesorter/docs/index.html). This plugin is used in the Apache Storm project as well as in several other products.
          LICENSE.txt is updated with license information for the Tablesorter jQuery plugin.

          The actual changes in the html/js pages are minimal and mostly related to indenting.

          Testing was done manually.
          Screenshots are attached to illustrate the feature.
          Both ascending and descending sorting are supported.
          Multiple columns can be sorted simultaneously by holding down the SHIFT key and clicking a second, third or fourth column header.

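          The SHIFT-click multi-column behavior can also be expressed as an initial sort state through tablesorter's `sortList` option. The column indexes below are illustrative (each entry is `[columnIndex, direction]`, where direction 0 is ascending and 1 is descending):

          ```javascript
          // Sketch: start with column 0 ascending, then column 2 descending.
          // Equivalent to clicking the first header and SHIFT-clicking the third.
          $(".tablesorter-default").tablesorter({
            sortList: [[0, 0], [2, 1]]
          });
          ```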
          Benoy Antony added a comment - edited

          Assigning this feature to myself since I was already working on it and was about to file another JIRA.

          Show

            People

            • Assignee: Haohui Mai
            • Reporter: Nathan Roberts
            • Votes: 0
            • Watchers: 12