Description
We noticed recently in our environment that, when writing data to HDFS via WebHDFS, a quota exception is returned to the client as:
java.io.IOException: Error writing request body to server
	at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3536) ~[?:1.8.0_172]
	at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3519) ~[?:1.8.0_172]
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[?:1.8.0_172]
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[?:1.8.0_172]
	at java.io.FilterOutputStream.flush(FilterOutputStream.java:140) ~[?:1.8.0_172]
	at java.io.DataOutputStream.flush(DataOutputStream.java:123) ~[?:1.8.0_172]
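For context, this is roughly the kind of client-side write that surfaces the opaque error; a minimal sketch, with hostname, port, path, and sizes as placeholders rather than values from our environment:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WebHdfsQuotaRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // A webhdfs:// URI routes the write through the WebHDFS REST endpoints
    // rather than the native RPC client (endpoint below is illustrative).
    FileSystem fs = FileSystem.get(
        URI.create("webhdfs://namenode.example.com:9870"), conf);
    byte[] chunk = new byte[1 << 20];
    try (FSDataOutputStream out = fs.create(new Path("/foo/path/here/big-file"))) {
      for (int i = 0; i < 10_000; i++) {
        // Once the directory's space quota is exhausted, the only thing the
        // caller sees is the generic "Error writing request body to server".
        out.write(chunk);
      }
    }
  }
}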
It is entirely opaque to the user that this exception occurred because they exceeded their quota. Yet in the DataNode logs:
2019-04-24 02:13:09,639 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /foo/path/here is exceeded: quota = XXXXXXXXXXXX B = X TB but diskspace consumed = XXXXXXXXXXXXXXXX B = X TB
	at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyStoragespaceQuota(DirectoryWithQuotaFeature.java:211)
	at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:239)
This was on a 2.7.x cluster, but I verified that the same logic exists on trunk. I believe we need to fix some of the logic within the ExceptionHandler to add special handling for the quota exception.
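To illustrate the idea (not the committed fix), the special handling could look something like the sketch below: recognize the quota exception and map it to an HTTP status that tells the REST client what actually failed. The class, method, and status choice here are all assumptions for discussion; DSQuotaExceededException, NSQuotaExceededException, and their parent QuotaExceededException are the existing classes in org.apache.hadoop.hdfs.protocol.

import java.net.HttpURLConnection;
import org.apache.hadoop.hdfs.protocol.QuotaExceededException;

final class QuotaErrorMapping {
  /**
   * Map a failure from the datanode write path to an HTTP status so the
   * WebHDFS client sees the real cause instead of a generic stream error.
   * DSQuotaExceededException and NSQuotaExceededException both extend
   * QuotaExceededException, so one instanceof check covers both.
   */
  static int statusFor(Throwable t) {
    if (t instanceof QuotaExceededException) {
      // 403 is suggested here because quota enforcement is a policy decision,
      // similar to an access-control failure; the exact status is up for debate.
      return HttpURLConnection.HTTP_FORBIDDEN;
    }
    return HttpURLConnection.HTTP_INTERNAL_ERROR;
  }

  private QuotaErrorMapping() {}
}

The important part is that the exception type reaches the ExceptionHandler at all, rather than being swallowed before the HTTP response is written, so the client gets a JSON error body naming the quota exception instead of a broken stream.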
Issue Links
- depends upon
  - HDFS-11195 Return error when appending files by webhdfs rest api fails (Resolved)