Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.23.0
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: hdfs-client
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      Would it be better to catch the exception and show a small, reasonable message to the user when they exceed the quota?

      $hdfs dfs -mkdir testDir
      $hdfs dfsadmin -setSpaceQuota 191M testDir
      $hdfs dfs -count -q testDir
      none inf 200278016 200278016 1 0 0
      hdfs://<NN hostname>:<port>/user/hdfsqa/testDir
      $hdfs dfs -put /etc/passwd /user/hadoopqa/testDir
      11/09/19 08:08:15 WARN hdfs.DFSClient: DataStreamer Exception
      org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /user/hdfsqa/testDir is exceeded:
      quota=191.0m diskspace consumed=768.0m
      at org.apache.hadoop.hdfs.server.namenode.INodeDirectoryWithQuota.verifyQuota(INodeDirectoryWithQuota.java:159)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:1609)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:1383)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:370)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.allocateBlock(FSNamesystem.java:1681)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1476)
      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:389)
      at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:597)
      at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:365)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1496)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1492)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:396)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1135)
      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1490)

      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
      at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
      at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1100)
      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:972)
      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:454)
      Caused by: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /user/hdfsqa/testDir is
      exceeded: quota=191.0m diskspace consumed=768.0m
      at org.apache.hadoop.hdfs.server.namenode.INodeDirectoryWithQuota.verifyQuota(INodeDirectoryWithQuota.java:159)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:1609)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:1383)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:370)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.allocateBlock(FSNamesystem.java:1681)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1476)
      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:389)
      at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:597)
      at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:365)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1496)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1492)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:396)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1135)
      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1490)

      at org.apache.hadoop.ipc.Client.call(Client.java:1084)
      at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:193)
      at $Proxy6.addBlock(Unknown Source)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:597)
      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:100)
      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:65)
      at $Proxy6.addBlock(Unknown Source)
      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1098)
      ... 2 more
      put: The DiskSpace quota of /user/hdfsqa/testDir is exceeded: quota=191.0m diskspace consumed=768.0m
      11/09/19 08:08:15 ERROR hdfs.DFSClient: Failed to close file /user/hdfsqa/testDir/passwd
      org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /user/hdfsqa/testDir is exceeded:
      quota=191.0m diskspace consumed=768.0m
      at org.apache.hadoop.hdfs.server.namenode.INodeDirectoryWithQuota.verifyQuota(INodeDirectoryWithQuota.java:159)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:1609)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:1383)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:370)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.allocateBlock(FSNamesystem.java:1681)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1476)
      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:389)
      at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:597)
      at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:365)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1496)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1492)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:396)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1135)
      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1490)

      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
      at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
      at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1100)
      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:972)
      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:454)
      Caused by: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /user/hdfsqa/testDir is
      exceeded: quota=191.0m diskspace consumed=768.0m
      at org.apache.hadoop.hdfs.server.namenode.INodeDirectoryWithQuota.verifyQuota(INodeDirectoryWithQuota.java:159)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:1609)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:1383)
      at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:370)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.allocateBlock(FSNamesystem.java:1681)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1476)
      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:389)
      at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:597)
      at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:365)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1496)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1492)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:396)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1135)
      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1490)

      at org.apache.hadoop.ipc.Client.call(Client.java:1084)
      at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:193)
      at $Proxy6.addBlock(Unknown Source)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:597)
      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:100)
      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:65)
      at $Proxy6.addBlock(Unknown Source)
      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1098)
      ... 2 more
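      The description asks for the client to catch this exception and surface a short message instead. Purely as an illustration (the class name and flow below are assumptions, not part of any attached patch), that request amounts to something like:

      import java.io.IOException;

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;

      public class QuotaFriendlyPut {
        public static void main(String[] args) throws IOException {
          FileSystem fs = FileSystem.get(new Configuration());
          try {
            // Rough equivalent of "hdfs dfs -put <local> <dir>" for this sketch.
            fs.copyFromLocalFile(new Path(args[0]), new Path(args[1]));
          } catch (DSQuotaExceededException e) {
            // The quota exception already carries a readable reason; print only that
            // instead of the full remote stack trace.
            System.err.println("put: " + e.getMessage());
          }
        }
      }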

      Attachments

      1. HDFS-2360.patch (2 kB) - Harsh J
      2. HDFS-2360.patch (1 kB) - Harsh J

        Issue Links

          Activity

          Harsh J added a comment -

          The hard part here is that these are logs. We'd have to selectively log less for these kinds of exceptions, and that can get ugly given how the data streamer handles exceptions today.

          Harsh J added a comment -

          The last line of the command output (excluding the WARN log and its stack trace) does today print the base error message, which should catch the eye clearly:

          put: The DiskSpace quota of /testDir is exceeded: quota = 1024 B = 1 KB but diskspace consumed = 402653184 B = 384 MB
          

          Resolving this, as that message should be clear enough. To get rid of the WARN, the client logger can be nullified, but the catch layer today is rather generic, so specifically turning it off for this case without impacting other use cases (and troubles) would be difficult, I think.

          As always though, feel free to reopen with any counter-point.
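          For anyone on an affected release, "nullifying" the client logger can be done through log4j. A minimal sketch, assuming the log4j 1.x API that Hadoop ships with and the logger name shown in the WARN above (illustrative only):

          import org.apache.log4j.Level;
          import org.apache.log4j.Logger;

          class QuietDfsClientLogger {
            static void quiet() {
              // Raise the DFSClient logger threshold so the DataStreamer WARN (with its
              // stack trace) is suppressed; the shell still prints the final "put: ..." line.
              Logger.getLogger("org.apache.hadoop.hdfs.DFSClient").setLevel(Level.ERROR);
            }
          }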

          Allen Wittenauer added a comment -

          OK, then let me re-open it.

          Having oodles of useless stack trace here is incredibly user-unfriendly. Users do miss this message very very often because, believe it or not, they aren't Java programmers who are used to reading these things.

          Harsh J added a comment -

          Allen Wittenauer - I agree with that sentiment, but the previous state was the lack of a message. Now we do have the message, as my example above indicates. Is that still unclear, or is the reopen just to also get rid of the stack-trace WARN on clients?

          Allen Wittenauer added a comment -

          I'm re-opening to get rid of the stack trace as well. I see that someone else has also duped that request against this issue.

          Harsh J added a comment -

          OK, before:

          [root@host ~]# hdfs dfs -put /etc/passwd /testDir
          15/03/16 09:19:17 WARN hdfs.DFSClient: DataStreamer Exception
          org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /testDir is exceeded: quota = 1024 B = 1 KB but diskspace consumed = 402653184 B = 384 MB
          	at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyDiskspaceQuota(DirectoryWithQuotaFeature.java:145)
          	at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:155)
          	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:1924)
          	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:1759)
          	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:1734)
          	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:388)
          	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveAllocatedBlock(FSNamesystem.java:3702)
          	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3285)
          	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:644)
          	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:212)
          	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:483)
          	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
          	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
          	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
          	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
          	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
          	at java.security.AccessController.doPrivileged(Native Method)
          	at javax.security.auth.Subject.doAs(Subject.java:415)
          	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
          	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
          
          	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
          	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
          	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
          	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
          	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
          	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
          	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1547)
          	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
          	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:600)
          Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.DSQuotaExceededException): The DiskSpace quota of /testDir is exceeded: quota = 1024 B = 1 KB but diskspace consumed = 402653184 B = 384 MB
          	at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyDiskspaceQuota(DirectoryWithQuotaFeature.java:145)
          	at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:155)
          	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:1924)
          	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:1759)
          	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:1734)
          	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:388)
          	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveAllocatedBlock(FSNamesystem.java:3702)
          	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3285)
          	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:644)
          	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:212)
          	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:483)
          	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
          	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
          	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
          	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
          	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
          	at java.security.AccessController.doPrivileged(Native Method)
          	at javax.security.auth.Subject.doAs(Subject.java:415)
          	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
          	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
          
          	at org.apache.hadoop.ipc.Client.call(Client.java:1468)
          	at org.apache.hadoop.ipc.Client.call(Client.java:1399)
          	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
          	at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
          	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:606)
          	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
          	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
          	at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
          	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1544)
          	... 2 more
          put: The DiskSpace quota of /testDir is exceeded: quota = 1024 B = 1 KB but diskspace consumed = 402653184 B = 384 MB
          

          After:

          [root@host ~]# hadoop fs -put /etc/passwd /testDir
          put: The DiskSpace quota of /testDir is exceeded: quota = 1024 B = 1 KB but diskspace consumed = 402653184 B = 384 MB
          

          Do the changes look good to you, Allen Wittenauer? These are just logger changes, however, so I cannot add a test case, but the test above was done with a branch-2 backport (I had to use -F3 with the patch command to get it to apply for some reason, but it was otherwise straightforward). I've left in a DEBUG log for those who may still need the stack trace.
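          As a rough sketch of what logger-only changes with a retained DEBUG could look like (illustrative only, not the literal patch; class and method names here are assumptions):

          import java.io.IOException;

          import org.apache.commons.logging.Log;
          import org.apache.commons.logging.LogFactory;
          import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;

          class StreamerExceptionLogging {
            private static final Log LOG = LogFactory.getLog(StreamerExceptionLogging.class);

            // Quota failures get a one-line WARN; the full stack trace is emitted only
            // at DEBUG. All other exceptions keep the old WARN-with-trace behaviour.
            static void logStreamerException(IOException e) {
              if (e instanceof DSQuotaExceededException) {
                LOG.warn("DataStreamer Exception: " + e.getMessage());
                if (LOG.isDebugEnabled()) {
                  LOG.debug("DataStreamer Exception", e);
                }
              } else {
                LOG.warn("DataStreamer Exception", e);
              }
            }
          }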

          Arpit Agarwal added a comment -

          Hi Harsh J, the approach looks good. Can you check for QuotaExceededException instead, so that we also cover QuotaByStorageTypeExceededException and other derivatives? I checked, and they all have useful error messages.

          Thanks.
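          Illustratively, the broadened test amounts to checking against the parent class (sketch only, not the attached patch):

          import java.io.IOException;

          import org.apache.hadoop.hdfs.protocol.QuotaExceededException;

          class QuotaErrorCheck {
            // QuotaExceededException is the common parent of DSQuotaExceededException,
            // NSQuotaExceededException, QuotaByStorageTypeExceededException and friends.
            static boolean isQuotaError(IOException e) {
              return e instanceof QuotaExceededException;
            }
          }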

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12704854/HDFS-2360.patch
          against trunk revision bf3275d.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend

          The following test timeouts occurred in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9901//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9901//console

          This message is automatically generated.

          Harsh J added a comment -

          Thanks Arpit Agarwal, good idea, addressed in this attachment.

          Harsh J added a comment -

          I inspected the failed tests, but they are unrelated. The fix also only changes the logger levels conditionally; there are no changes to what is being thrown, etc.

          Arpit Agarwal added a comment -

          +1 for the updated patch pending Jenkins.

          Thanks for addressing that Harsh J.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12704900/HDFS-2360.patch
          against trunk revision 2681ed9.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The following test timeouts occurred in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.TestFileCreation

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9906//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9906//console

          This message is automatically generated.

          Harsh J added a comment -

          Thanks again Arpit (and Allen).

          Failing test was unrelated again, so I went ahead and committed this to branch-2 and trunk.

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #7340 (See https://builds.apache.org/job/Hadoop-trunk-Commit/7340/)
          HDFS-2360. Ugly stacktrce when quota exceeds. (harsh) (harsh: rev 046521cd6511b7fc6d9478cb2bed90d8e75fca20)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #869 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/869/)
          HDFS-2360. Ugly stacktrce when quota exceeds. (harsh) (harsh: rev 046521cd6511b7fc6d9478cb2bed90d8e75fca20)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #135 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/135/)
          HDFS-2360. Ugly stacktrce when quota exceeds. (harsh) (harsh: rev 046521cd6511b7fc6d9478cb2bed90d8e75fca20)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #126 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/126/)
          HDFS-2360. Ugly stacktrce when quota exceeds. (harsh) (harsh: rev 046521cd6511b7fc6d9478cb2bed90d8e75fca20)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #2067 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2067/)
          HDFS-2360. Ugly stacktrce when quota exceeds. (harsh) (harsh: rev 046521cd6511b7fc6d9478cb2bed90d8e75fca20)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #135 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/135/)
          HDFS-2360. Ugly stacktrce when quota exceeds. (harsh) (harsh: rev 046521cd6511b7fc6d9478cb2bed90d8e75fca20)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #2085 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2085/)
          HDFS-2360. Ugly stacktrce when quota exceeds. (harsh) (harsh: rev 046521cd6511b7fc6d9478cb2bed90d8e75fca20)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java

            People

            • Assignee: Harsh J
            • Reporter: Rajit Saha
            • Votes: 0
            • Watchers: 5

              Dates

              • Created:
                Updated:
                Resolved:

                Development