Details
Description
It would be good to expose truncate in libhdfs.
Attachments
- HDFS-7838.001.patch (3 kB, Yi Liu)
- HDFS-7838.002.patch (3 kB, Yi Liu)
- HDFS-7838.003.patch (3 kB, Yi Liu)
- HDFS-7838.004.patch (3 kB, Yi Liu)
Issue Links
- is related to:
  - HDFS-10991 Export hdfsTruncateFile symbol in libhdfs (Resolved)
  - HDFS-7902 Expose truncate API for libwebhdfs (Open)
Activity
Thanks for looking at this, Yi.
1080 errno = ret;
We shouldn't set errno on success, only on failure.
Also, it seems like we can accidentally return success (1) here even on failure if this returns 1:
1067     if (jthr) {
1068         ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
1069             "hdfsTruncateFile(%s): FileSystem#truncate", path);
1070         goto done;
1071     }
Keep in mind that 1 is a valid errno code. On Linux:
#ifndef _ASM_GENERIC_ERRNO_BASE_H
#define _ASM_GENERIC_ERRNO_BASE_H

#define EPERM    1   /* Operation not permitted */
#define ENOENT   2   /* No such file or directory */
#define ESRCH    3   /* No such process */
...
This also violates the comment in the header file, which says the function returns -1 on failure. It seems that the function currently returns a positive number on failure.
78 typedef int64_t tNewLength; /// new length to be truncated to
We don't need another type for this... just use tOffset. Or better yet, just use int64_t, since that's what it is always going to be.
Also, can you add a stub hdfsTruncateFile function to libwebhdfs that returns ENOTSUP, and file a jira to add truncate support to libwebhdfs? or just implement it in this patch, your choice.
Thanks, Colin, for the review. I've updated the patch to address your comments.
>> Also, can you add a stub hdfsTruncateFile function to libwebhdfs that returns ENOTSUP, and file a jira to add truncate support to libwebhdfs? or just implement it in this patch, your choice.
I filed HDFS-7902 for it, thanks.
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12703223/HDFS-7838.002.patch
against trunk revision 608ebd5.
+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 javadoc. There were no new javadoc warning messages.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.balancer.TestBalancer
org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
org.apache.hadoop.hdfs.web.TestWebHDFS
Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9787//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9787//console
This message is automatically generated.
FileSystem.java says:
 * @return <code>true</code> if the file has been truncated to the desired
 *         <code>newLength</code> and is immediately available to be reused for
 *         write operations such as <code>append</code>, or
 *         <code>false</code> if a background process of adjusting the length of
 *         the last block has been started, and clients should wait for it to
 *         complete before proceeding with further file updates.
 */
public boolean truncate(Path f, long newLength) throws IOException {
hdfs.h says:
403  * @return 0 if the file has been truncated to the desired newlength
404  *         and is immediately available to be reused for write operations
405  *         such as append.
406  *         1 if a background process of adjusting the length of the last
407  *         block has been started, and clients should wait for it to
408  *         complete before proceeding with further file updates.
409  *         -1 on error.
code says:
1063     jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,
1064         "truncate", JMETHOD2(JPARAM(HADOOP_PATH), "J", "Z"),
1065         jPath, newlength);
1066     destroyLocalReference(env, jPath);
1067     if (jthr) {
1068         errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
1069             "hdfsTruncateFile(%s): FileSystem#truncate", path);
1070         return -1;
1071     }
1072     if (!jVal.z) {
1073         return 1;
1074     }
1075     return 0;
1076 }
It seems confusing to translate false => 1 and true => 0. I realize that returning 0 on success is an old C convention, but both returns of truncate are "successful". It seems a lot more intuitive to have true => 1, false => 0, error => negative.
Also, this is a little pedantic, but please check against JNI_TRUE rather than doing !jVal.z or similar.
>> I file HDFS-7902 for [libwebhdfs], thanks.
Thanks for that. In the meantime, can you add a stub function to libwebhdfs that does nothing but set errno to ENOTSUP and return -1? Thanks.
+1 once those are addressed.
>> It seems confusing to translate false => 1 and true => 0. I realize that returning 0 on success is an old C convention, but both returns of truncate are "successful". It seems a lot more intuitive to have true => 1, false => 0, error => negative.
>> Also, this is a little pedantic, but please check against JNI_TRUE rather than doing !jVal.z or similar.
Thanks, Colin, for the review. I've updated the patch to address the comments.
>> I file HDFS-7902 for [libwebhdfs], thanks.
>> Thanks for that. In the meantime, can you add a stub function to libwebhdfs that does nothing but set errno to ENOTSUP and return -1? Thanks.
I added truncate support to libwebhdfs instead of setting errno to ENOTSUP and returning -1. Is there any reason not to support it?
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12703648/HDFS-7838.003.patch
against trunk revision 7711049.
+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 javadoc. There were no new javadoc warning messages.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
The following test timeouts occurred in hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager
Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9817//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9817//console
This message is automatically generated.
>> +1 once those are addressed.
Hi cmccabe, will you take another look at the latest patch, or may I commit it? Thanks.
The libhdfs part looks correct, but you didn't add a stub function to libwebhdfs that returns ENOTSUP, as I asked.
I understand that you are planning to add full support in libwebhdfs in a follow-on jira, but it would be better to introduce the function in this JIRA, so that people do not get linker failures when trying to link a binary that uses hdfsTruncate against libwebhdfs. libwebhdfs must implement every function in hdfs.h.
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12704742/HDFS-7838.004.patch
against trunk revision 3ff1ba2.
+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 javadoc. There were no new javadoc warning messages.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager
org.apache.hadoop.hdfs.server.balancer.TestBalancer
The following test timeouts occurred in hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.namenode.TestNameNodeXAttr
Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9894//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9894//console
This message is automatically generated.
FAILURE: Integrated in Hadoop-trunk-Commit #7344 (See https://builds.apache.org/job/Hadoop-trunk-Commit/7344/)
HDFS-7838. Expose truncate API for libhdfs. (yliu) (yliu: rev 48c2db34eff376c0f3a72587a5540b1e3dffafd2)
- hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
- hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
- hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
- hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
FAILURE: Integrated in Hadoop-Yarn-trunk #869 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/869/)
HDFS-7838. Expose truncate API for libhdfs. (yliu) (yliu: rev 48c2db34eff376c0f3a72587a5540b1e3dffafd2)
- hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
- hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
- hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
- hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #135 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/135/)
HDFS-7838. Expose truncate API for libhdfs. (yliu) (yliu: rev 48c2db34eff376c0f3a72587a5540b1e3dffafd2)
- hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
- hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
- hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
- hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #126 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/126/)
HDFS-7838. Expose truncate API for libhdfs. (yliu) (yliu: rev 48c2db34eff376c0f3a72587a5540b1e3dffafd2)
- hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
- hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
- hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
- hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
FAILURE: Integrated in Hadoop-Hdfs-trunk #2067 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2067/)
HDFS-7838. Expose truncate API for libhdfs. (yliu) (yliu: rev 48c2db34eff376c0f3a72587a5540b1e3dffafd2)
- hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
- hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
- hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
- hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #135 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/135/)
HDFS-7838. Expose truncate API for libhdfs. (yliu) (yliu: rev 48c2db34eff376c0f3a72587a5540b1e3dffafd2)
- hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
- hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
- hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
- hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
FAILURE: Integrated in Hadoop-Mapreduce-trunk #2085 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2085/)
HDFS-7838. Expose truncate API for libhdfs. (yliu) (yliu: rev 48c2db34eff376c0f3a72587a5540b1e3dffafd2)
- hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
- hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
- hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
- hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12702663/HDFS-7838.001.patch
against trunk revision 53947f3.
+1 @author. The patch does not contain any @author tags.
-1 tests included. The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 javadoc. There were no new javadoc warning messages.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
The following test timeouts occurred in hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9738//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9738//console
This message is automatically generated.