HDFS-5100: TestNamenodeRetryCache fails on Windows due to incorrect cleanup


Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Fix Version/s: 2.1.1-beta, 3.0.0-alpha1
    • Affects Version/s: 2.1.1-beta
    • Component/s: test
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      The test cases fail on Windows with exceptions like the following.

      java.io.IOException: Could not fully delete C:\hdc\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1
      	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:759)
      	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:644)
      	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:334)
      	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
      	at org.apache.hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits.setupCluster(TestInitializeSharedEdits.java:68)
      	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      ...
      

      The root cause is that cleanup() only tries to delete the root directory instead of shutting down the MiniDFSCluster. Every test case in this unit test creates a new MiniDFSCluster during setup(). Without shutting down the previous cluster, creation of the new cluster fails with the above exception, because the old cluster still holds open file handles and Windows refuses to delete files that are in use. A shutdown-based teardown (sketched below) avoids this.
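
      A minimal sketch of the shutdown-based teardown the description implies, assuming JUnit 4 and a per-test MiniDFSCluster field; the class and field names below are illustrative and not taken from the actual TestNamenodeRetryCache source.

      import java.io.IOException;

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hdfs.MiniDFSCluster;
      import org.junit.After;
      import org.junit.Before;

      // Illustrative sketch only: shut the cluster down in cleanup() instead of
      // deleting its directory tree while the cluster is still running.
      public class RetryCacheCleanupSketch {
        private MiniDFSCluster cluster;

        @Before
        public void setup() throws IOException {
          Configuration conf = new Configuration();
          cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
          cluster.waitActive();
        }

        @After
        public void cleanup() {
          // Shutting down the cluster closes the NameNode/DataNode file handles,
          // so the next test's setup() can recreate the name directories on Windows.
          if (cluster != null) {
            cluster.shutdown();
            cluster = null;
          }
        }
      }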

    Attachments

      1. HDFS-5100-trunk.patch (0.7 kB, Chuan Liu)
      2. HDFS-5100-trunk.patch (0.7 kB, Chris Nauroth)


    People

      Assignee: Chuan Liu (chuanliu)
      Reporter: Chuan Liu (chuanliu)
      Votes: 0
      Watchers: 5
