Hadoop Common / HADOOP-8319

FileContext does not support setWriteChecksum

    Details

    • Type: Bug
    • Status: Patch Available
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.0.0-alpha
    • Fix Version/s: None
    • Component/s: None
    • Labels:
      None

      Description

      FileContext does not support setWriteChecksum, so users who try to use
      this functionality fail.
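
      For illustration, a minimal sketch of the gap (the commented-out FileContext
      call is the API this issue asks for and does not exist yet; the FileSystem
      equivalent already does):

          import java.util.EnumSet;

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.CreateFlag;
          import org.apache.hadoop.fs.FileContext;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;

          public class WriteChecksumGap {
            public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();

              // FileSystem already exposes the switch: after this call the local
              // filesystem stops producing .crc sidecar files for new writes.
              FileSystem fs = FileSystem.getLocal(conf);
              fs.setWriteChecksum(false);
              fs.create(new Path("/tmp/no-crc-via-fs")).close();

              // FileContext has no equivalent; the commented line is the missing API.
              FileContext fc = FileContext.getLocalFSFileContext(conf);
              // fc.setWriteChecksum(false);   // does not compile against trunk today
              fc.create(new Path("/tmp/crc-still-written-via-fc"),
                  EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE)).close();
            }
          }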

      1. HADOOP-8319.patch
        25 kB
        John George
      2. HADOOP-8319.patch
        21 kB
        John George
      3. HADOOP-8319.patch
        13 kB
        John George
      4. HADOOP-8319.patch
        22 kB
        John George
      5. HADOOP-8319.patch
        22 kB
        John George
      6. HADOOP-8319.patch
        19 kB
        John George

        Activity

        Harsh J added a comment -

        Hi Daryn - Is this patch ready to go?

        John George added a comment -

        Sorry I missed this comment.

        The side-effect is the very solution to the bug. If I tell my context: "disable writing checksums because I really don't care about integrity", then I implicitly don't care where the data is being written by the context. With the patch as-is, the user can't reliably control the writing or verification of checksums w/o explicitly resolving paths and creating a new context for the resolved path – which kind of defeats the purpose of the context.

        The user can set/reset the values specific to a path. But like you said, the other option is to allow the user to set/reset only per file context (as opposed to per AFS that each path belongs to). That might simplify the implementation and make it less complex for the user as well.

        Do you have another solution in mind to control the flags?

        The other solution is to cache the state per AFS, which is what the latest patch I have uploaded does. But I am leaning towards your proposal of per FC, especially since setWD and getWD are implemented per FC.

        Let me know your thoughts either way. Thanks for all your inputs Daryn.
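
        (A standalone toy sketch of the per-FC alternative discussed above. The class and
        method names are made up rather than real Hadoop types, since AbstractFileSystem
        has no setWriteChecksum today: the flag lives in the context and is pushed onto
        whatever filesystem a path resolves to just before create.)

            import java.util.HashMap;
            import java.util.Map;

            // Toy model only: FcFlagSketch and Fs stand in for FileContext and AbstractFileSystem.
            public class FcFlagSketch {

              static class Fs {
                boolean writeChecksum = true;
                void create(String path) {
                  System.out.println(path + (writeChecksum ? "  -> writes .crc" : "  -> no .crc"));
                }
              }

              private final Map<String, Fs> byScheme = new HashMap<>();
              private boolean writeChecksum = true;   // per-context state, analogous to the working dir

              // No Path argument, no per-Fs flag caching: the context itself remembers the choice.
              void setWriteChecksum(boolean flag) { this.writeChecksum = flag; }

              void create(String path) {
                String scheme = path.substring(0, path.indexOf(':'));
                Fs fs = byScheme.computeIfAbsent(scheme, s -> new Fs());  // crude "path resolution"
                fs.writeChecksum = this.writeChecksum;                    // pushed just before create
                fs.create(path);
              }

              public static void main(String[] args) {
                FcFlagSketch fc = new FcFlagSketch();
                fc.setWriteChecksum(false);
                fc.create("file:/tmp/a");   // the local fs honours the context flag
                fc.create("hdfs:/user/b");  // so does any other fs a path resolves to
              }
            }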

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12526037/HADOOP-8319.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 3 new or modified test files.

        -1 javadoc. The javadoc tool appears to have generated -6 warning messages.

        -1 javac. The patch appears to cause tar ant target to fail.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        -1 findbugs. The patch appears to cause Findbugs (version 1.3.9) to fail.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/961//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/961//console

        This message is automatically generated.

        John George added a comment -

        Attaching a patch with the ability to cache AFS states so that state is not lost, as pointed out by Daryn.
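
        (Another toy sketch, not the attached patch: the point of the caching is that a
        per-AFS flag only survives if the context keeps a handle on the AFS instance it
        set the flag on.)

            import java.net.URI;
            import java.util.HashMap;
            import java.util.Map;

            // Toy sketch of the per-AFS caching idea: setWriteChecksum(flag, path) resolves
            // the path to a filesystem, sets the flag on it, and the context must hold on to
            // that same instance or the state is gone before the next create().
            public class AfsCacheSketch {

              static class Afs {
                boolean writeChecksum = true;
              }

              private final Map<URI, Afs> cache = new HashMap<>();

              // Without this cache, the Afs built during setWriteChecksum would be thrown
              // away and create() would see a fresh instance with the default value.
              Afs resolve(URI fsUri) {
                return cache.computeIfAbsent(fsUri, u -> new Afs());
              }

              void setWriteChecksum(boolean flag, URI pathFs) {
                resolve(pathFs).writeChecksum = flag;   // the state sticks to the cached instance
              }

              boolean createWritesChecksum(URI pathFs) {
                return resolve(pathFs).writeChecksum;   // same instance, same state
              }

              public static void main(String[] args) {
                AfsCacheSketch fc = new AfsCacheSketch();
                URI local = URI.create("file:///");
                fc.setWriteChecksum(false, local);
                System.out.println(fc.createWritesChecksum(local));  // false: the flag survived
              }
            }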

        Daryn Sharp added a comment -

        The reason setWriteChecksum requires a path argument is that FileContext, just like ViewFS, works on paths and resolves each path to an FS. If we remove the path, setWriteChecksum will have the side effect of applying to every FS that the FileContext resolves paths to.

        The side-effect is the very solution to the bug. If I tell my context: "disable writing checksums because I really don't care about integrity", then I implicitly don't care where the data is being written by the context. With the patch as-is, the user can't reliably control the writing or verification of checksums w/o explicitly resolving paths and creating a new context for the resolved path – which kind of defeats the purpose of the context.

        Do you have another solution in mind to control the flags?

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12525491/HADOOP-8319.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 3 new or modified test files.

        -1 javadoc. The javadoc tool appears to have generated -6 warning messages.

        -1 javac. The patch appears to cause tar ant target to fail.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        -1 findbugs. The patch appears to cause Findbugs (version 1.3.9) to fail.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/930//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/930//console

        This message is automatically generated.

        John George added a comment -

        Attaching a patch with all the test cases, etc.

        John George added a comment -

        First, a general question: why change the visibility of the LocalFs ctor to public?

        It is needed by one of the tests (which is not in the same package), and that test was unfortunately missing from the latest patch I uploaded. I will upload the right patch soon.

        This patch doesn't work when a path symlinks off to a different afs. fc.setWriteChecksum(boolean, Path) will instantiate the resolved afs, set the flag, the afs instance goes "poof", and the subsequent create will get another instance of the afs with the default value. One solution might be to cache all the afs used by the context so the value persists.

        Good catch Daryn. You are right that the patch will not work on a path when it leads to a different afs than the defaultFS already cached in FC. I guess a discussion to resolve this is happening in HADOOP-6356.

        However, I'm not comfortable with the misleading semantics of fc.setWriteChecksum(boolean, Path). The path argument implies the flag applies only to the given path, but in this patch it applies to the entire afs of the given path. I'd be more comfortable with mimicking fs and just having fc.setWriteChecksum(boolean) set an instance var in the fc. fc.create calls the resolved afs.setWriteChecksum, and then afs.create.

        The reason setWriteChecksum requires a path argument is that FileContext, just like ViewFS, works on paths and resolves each path to an FS. If we remove the path, setWriteChecksum will have the side effect of applying to every FS that the FileContext resolves paths to.

        What I think ultimately needs to be done is for fs/afs create to accept a flag for whether checksums are written. This provides more granularity, avoids threading issues where one thread changes the value for all threads, prevents viewfs from applying the setting to all its mounts, and would solve the unexpected behavior in FileSystem where the cache causes the value to change in every instance/reference to the fs.

        Passing the flag as part of create sounds like a good idea to me. If you feel strongly for this, we could file a JIRA to address this at a later point in time.

        Daryn Sharp added a comment -

        First, a general question: why change the visibility of the LocalFs ctor to public?

        This patch doesn't work when a path symlinks off to a different afs. fc.setWriteChecksum(boolean, Path) will instantiate the resolved afs, set the flag, the afs instance goes "poof", and the subsequent create will get another instance of the afs with the default value. One solution might be to cache all the afs used by the context so the value persists.

        However, I'm not comfortable with the misleading semantics of fc.setWriteChecksum(boolean, Path). The path argument implies the flag applies only to the given path, but in this patch it applies to the entire afs of the given path. I'd be more comfortable with mimicking fs and just having fc.setWriteChecksum(boolean) set an instance var in the fc. fc.create calls the resolved afs.setWriteChecksum, and then afs.create.

        What I think ultimately needs to be done is for fs/afs create to accept a flag for whether checksums are written. This provides more granularity, avoids threading issues where one thread changes the value for all threads, prevents viewfs from applying the setting to all its mounts, and would solve the unexpected behavior in FileSystem where the cache causes the value to change in every instance/reference to the fs.
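
        (A toy sketch of that last suggestion: carrying the choice in each create() call
        keeps it per call rather than per fs instance. The flag parameter is hypothetical,
        not an existing Hadoop option.)

            // Toy sketch: a write-checksum choice passed to each create() avoids shared
            // mutable fs state, so concurrent callers cannot clobber each other and a
            // viewfs-style wrapper never has to fan the setting out to all of its mounts.
            public class CreateFlagSketch {

              enum WriteChecksum { YES, NO }   // hypothetical per-call option

              static void create(String path, WriteChecksum wc) {
                boolean withCrc = (wc == WriteChecksum.YES);
                System.out.println(path + (withCrc ? "  (+ .crc)" : "  (no .crc)"));
              }

              public static void main(String[] args) throws InterruptedException {
                // Two threads make opposite choices without touching any shared flag.
                Thread a = new Thread(() -> create("/tmp/a", WriteChecksum.NO));
                Thread b = new Thread(() -> create("/tmp/b", WriteChecksum.YES));
                a.start(); b.start();
                a.join(); b.join();
              }
            }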

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12525450/HADOOP-8319.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 1 new or modified test files.

        -1 javadoc. The javadoc tool appears to have generated -6 warning messages.

        -1 javac. The patch appears to cause tar ant target to fail.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        -1 findbugs. The patch appears to cause Findbugs (version 1.3.9) to fail.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/927//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/927//console

        This message is automatically generated.

        Daryn Sharp added a comment -

        The patch contains a change to o.a.h.fs.Hdfs in hadoop-common-project. You should have a separate hdfs jira for it.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12525299/HADOOP-8319.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 1 new or modified test files.

        -1 javadoc. The javadoc tool appears to have generated -6 warning messages.

        -1 javac. The patch appears to cause tar ant target to fail.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        -1 findbugs. The patch appears to cause Findbugs (version 1.3.9) to fail.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/919//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/919//console

        This message is automatically generated.

        John George added a comment -

        The following is what I get when I run from the root directory on my trunk with this patch. This is the same when I run without my patch.

        -1 overall.

            +1 @author.  The patch does not contain any @author tags.

            +1 tests included.  The patch appears to include 3 new or modified tests.

            -1 javadoc.  The javadoc tool appears to have generated 18 warning messages.

            +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

            +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

            -1 findbugs.  The patch appears to introduce 20 new Findbugs (version 1.3.9) warnings.

            +1 release audit.  The applied patch does not increase the total number of release audit warnings.
        
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12525174/HADOOP-8319.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 4 new or modified test files.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        -1 javac. The patch appears to cause tar ant target to fail.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        -1 findbugs. The patch appears to cause Findbugs (version 1.3.9) to fail.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/910//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/910//console

        This message is automatically generated.

        John George added a comment -

        Retrying the same patch to see whether HADOOP-8308 will let it run this time.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12525093/HADOOP-8319.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 4 new or modified test files.

        -1 patch. The patch command could not apply the patch.

        Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/908//console

        This message is automatically generated.

        John George added a comment -

        Attaching a better patch.
        It does everything the previous patch did except for the changes to renameInternal. I thought that change was necessary, but I was probably testing incorrectly, hence taking it out.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12524931/HADOOP-8319.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 3 new or modified test files.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        -1 javac. The patch appears to cause tar ant target to fail.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        -1 findbugs. The patch appears to cause Findbugs (version 1.3.9) to fail.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        +1 core tests. The patch passed unit tests in .

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/899//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/899//console

        This message is automatically generated.

        John George added a comment -

        Attaching a patch that does the following:

        • Implement set{Write|Verify}Checksum for FileContext and viewFS.
        • Add tests for set{Write|Verify}Checksum for FileContext and viewFS.
        • Make a LocalFs constructor public so that it can be extended by the test.
        • Remove one of the renameInternal implementations from FilterFs since it is not being overridden by ChecksumFs.
        • Modify ChecksumFs to create checksum files only when the flag is set to true.
        • Add a method to resolve a path as far as possible (for cases where the full path does not exist). This supports a user calling set{Write|Verify}Checksum on a file that does not exist yet, which is a valid use case since the user should be able to enable checksum writing before a file is created, or enable verification while a file is being copied.

        Appreciate reviews and open to suggestions and comments. I will also be running all the tests in hadoop common and posting the results here.
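
        (Illustrative only, not the attached patch: the ChecksumFs-side change described
        above amounts to guarding the checksum sidecar on the flag, roughly like this.)

            import java.io.FileOutputStream;
            import java.io.IOException;
            import java.io.OutputStream;

            // Toy sketch of the behaviour described above: the data file is always written,
            // the checksum sidecar only when the write-checksum flag is on. Real ChecksumFs
            // writes CRC32 chunks to a dot-prefixed ".f.crc" file; a stub byte stands in here.
            public class ChecksumCreateSketch {

              static void create(String path, byte[] data, boolean writeChecksum) throws IOException {
                try (OutputStream out = new FileOutputStream(path)) {
                  out.write(data);
                }
                if (writeChecksum) {
                  try (OutputStream crc = new FileOutputStream(path + ".crc")) {
                    crc.write(0);   // placeholder for the real checksum bytes
                  }
                }
              }

              public static void main(String[] args) throws IOException {
                create("/tmp/with-crc", "hello".getBytes(), true);     // data file plus a .crc
                create("/tmp/without-crc", "hello".getBytes(), false); // data file only
              }
            }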


          People

          • Assignee: John George
          • Reporter: John George
          • Votes: 0
          • Watchers: 3
