Hadoop HDFS / HDFS-10986

DFSAdmin should log detailed error message if any

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0, 3.0.0-alpha2
    • Component/s: tools
    • Labels:
      None
    • Target Version/s:
    • Hadoop Flags:
      Reviewed

      Description

      There are some subcommands in DFSAdmin that swallow IOException and print a very limited error message, if any, to stderr.

      $ hdfs dfsadmin -getBalancerBandwidth 127.0.0.1:9866
      Datanode unreachable.
      $ hdfs dfsadmin -getDatanodeInfo localhost:9866
      Datanode unreachable.
      $ hdfs dfsadmin -evictWriters 127.0.0.1:9866
      $ echo $?
      -1
      

      The user cannot get the exception stack trace even when the log level is DEBUG, which is not very user friendly. Fortunately, if the port number is not accessible (say, 9999), users can infer the detailed error message from the IPC logs:

      $ hdfs dfsadmin -getBalancerBandwidth 127.0.0.1:9999
      2016-10-07 18:01:35,115 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
      2016-10-07 18:01:36,335 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9999. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
      .....
      2016-10-07 18:01:45,361 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9999. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
      2016-10-07 18:01:45,362 WARN ipc.Client: Failed to connect to server: localhost/127.0.0.1:9999: retries get failed due to exceeded maximum allowed retries number: 10
      java.net.ConnectException: Connection refused
      	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
      	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
              ...
      	at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2073)
      	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
      	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
      	at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2225)
      Datanode unreachable.
      

      We should fix this by providing a detailed error message. In fact, DFSAdmin#run already handles exceptions carefully, including:

      1. setting the exit return value to -1
      2. printing the error message
      3. logging the exception stack trace (at DEBUG level)

      All we need to do is not swallow exceptions without good reason.
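
      The contrast can be sketched as follows. This is a minimal, hypothetical illustration (not the actual DFSAdmin code); the class and method names are invented, and the "swallowing" handler mimics the current behavior of subcommands such as -getBalancerBandwidth, while the "propagating" handler lets a driver like DFSAdmin#run report the root cause in one place.

      ```java
      import java.io.IOException;

      public class ErrorHandlingSketch {

          // Before: the subcommand catches the IOException itself and prints
          // only a generic message, so the root cause is lost to the caller.
          static int swallowingStyle() {
              try {
                  connectToDatanode();
                  return 0;
              } catch (IOException e) {
                  System.err.println("Datanode unreachable.");
                  return -1; // detailed error message is discarded here
              }
          }

          // After: the subcommand declares the exception and lets it propagate;
          // the driver prints the detailed message and sets the exit value to -1.
          static int propagatingStyle() {
              try {
                  runSubcommand();
                  return 0;
              } catch (IOException e) {
                  System.err.println(e.getClass().getSimpleName() + ": " + e.getMessage());
                  return -1;
              }
          }

          static void runSubcommand() throws IOException {
              connectToDatanode(); // no try/catch here: the driver handles it
          }

          // Stand-in for the RPC call to the datanode; always fails in this sketch.
          static void connectToDatanode() throws IOException {
              throw new IOException("Connection refused: 127.0.0.1:9866");
          }

          public static void main(String[] args) {
              System.out.println("swallowing exit=" + swallowingStyle());
              System.out.println("propagating exit=" + propagatingStyle());
          }
      }
      ```

      With the propagating style, the user sees the actual cause ("Connection refused: 127.0.0.1:9866") instead of only "Datanode unreachable."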

        Attachments

        1. HDFS-10986.000.patch
          2 kB
          Mingliang Liu
        2. HDFS-10986.001.patch
          8 kB
          Mingliang Liu
        3. HDFS-10986.002.patch
          8 kB
          Mingliang Liu
        4. HDFS-10986.003.patch
          8 kB
          Mingliang Liu
        5. HDFS-10986.004.patch
          7 kB
          Brahma Reddy Battula
        6. HDFS-10986-branch-2.8.002.patch
          8 kB
          Mingliang Liu
        7. HDFS-10986-branch-2.8.003.patch
          8 kB
          Mingliang Liu
        8. HDFS-10986-branch-2.8.004.patch
          7 kB
          Brahma Reddy Battula

              People

              • Assignee:
                liuml07 Mingliang Liu
              • Reporter:
                liuml07 Mingliang Liu
              • Votes:
                0
              • Watchers:
                4

                Dates

                • Created:
                • Updated:
                • Resolved: