
HDFS-9429: Tests in TestDFSAdminWithHA intermittently fail with EOFException

    Details

    • Type: Test
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: test
    • Labels: None

      Description

      I have seen this fail a handful of times for testMetaSave, but from my understanding the failure comes from setUpHaCluster, so in theory it could happen for any test case in the class.
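
      For context, the sketch below approximates the setup path involved; it is not the test's verbatim code, and the class name is made up for illustration. Every test in TestDFSAdminWithHA builds a MiniQJMHACluster, and the NameNode format performed during that build, specifically the QuorumJournalManager#hasSomeData pre-check against the JournalNodes, is where the EOFException in the stack traces below surfaces.

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster;

      // Illustrative only: approximates the path TestDFSAdminWithHA#setUpHaCluster goes through.
      public class HaClusterSetupSketch {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          // Builder.build() starts a MiniJournalCluster with 3 JournalNodes and then
          // formats both NameNodes; the format pre-check over the JN quorum is the
          // call that intermittently sees a connection closed by one JournalNode.
          MiniQJMHACluster cluster = new MiniQJMHACluster.Builder(conf).build();
          cluster.shutdown();
        }
      }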

      Attachments

      1. HDFS-9429.001.patch
        3 kB
        Xiao Chen
      2. HDFS-9429.002.patch
        12 kB
        Xiao Chen
      3. HDFS-9429.003.patch
        12 kB
        Xiao Chen
      4. HDFS-9429.reproduce
        7 kB
        Xiao Chen

          Activity

          Xiao Chen added a comment -

          A sample failure is shown below:
          Error Message

          Unable to check if JNs are ready for formatting. 1 exceptions thrown:
          127.0.0.1:42901: End of File Exception between local host is: "172.26.21.176"; destination host is: "localhost":42901; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
          

          Stacktrace

          org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
          127.0.0.1:42901: End of File Exception between local host is: "172.26.21.176"; destination host is: "localhost":42901; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
          	at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
          	at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
          	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
          	at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:899)
          	at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171)
          	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:986)
          	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:342)
          	at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:173)
          	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:969)
          	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:807)
          	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:467)
          	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:426)
          	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:104)
          	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:40)
          	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster$Builder.build(MiniQJMHACluster.java:69)
          	at org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.setUpHaCluster(TestDFSAdminWithHA.java:84)
          	at org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testMetaSave(TestDFSAdminWithHA.java:197)
          

          Standard Output

          2015-11-09 19:26:41,365 INFO  qjournal.MiniJournalCluster (MiniJournalCluster.java:<init>(87)) - Starting MiniJournalCluster with 3 journal nodes
          2015-11-09 19:26:41,366 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(158)) - JournalNode metrics system started (again)
          2015-11-09 19:26:41,367 INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1687)) - Starting Web-server for journal at: http://localhost:0
          2015-11-09 19:26:41,368 INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.journal is not defined
          2015-11-09 19:26:41,368 INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(700)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
          2015-11-09 19:26:41,369 INFO  http.HttpServer2 (HttpServer2.java:addFilter(678)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context journal
          2015-11-09 19:26:41,369 INFO  http.HttpServer2 (HttpServer2.java:addFilter(685)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
          2015-11-09 19:26:41,369 INFO  http.HttpServer2 (HttpServer2.java:addFilter(685)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
          2015-11-09 19:26:41,369 INFO  http.HttpServer2 (HttpServer2.java:openListeners(888)) - Jetty bound to port 53757
          2015-11-09 19:26:41,380 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:53757
          2015-11-09 19:26:41,380 INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(53)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
          2015-11-09 19:26:41,381 INFO  ipc.Server (Server.java:run(605)) - Starting Socket Reader #1 for port 42901
          2015-11-09 19:26:41,383 INFO  ipc.Server (Server.java:run(827)) - IPC Server Responder: starting
          2015-11-09 19:26:41,383 INFO  ipc.Server (Server.java:run(674)) - IPC Server listener on 42901: starting
          2015-11-09 19:26:41,384 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(158)) - JournalNode metrics system started (again)
          2015-11-09 19:26:41,385 INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1687)) - Starting Web-server for journal at: http://localhost:0
          2015-11-09 19:26:41,386 INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.journal is not defined
          2015-11-09 19:26:41,386 INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(700)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
          2015-11-09 19:26:41,387 INFO  http.HttpServer2 (HttpServer2.java:addFilter(678)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context journal
          2015-11-09 19:26:41,387 INFO  http.HttpServer2 (HttpServer2.java:addFilter(685)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
          2015-11-09 19:26:41,387 INFO  http.HttpServer2 (HttpServer2.java:addFilter(685)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
          2015-11-09 19:26:41,387 INFO  http.HttpServer2 (HttpServer2.java:openListeners(888)) - Jetty bound to port 45615
          2015-11-09 19:26:41,398 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45615
          2015-11-09 19:26:41,398 INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(53)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
          2015-11-09 19:26:41,399 INFO  ipc.Server (Server.java:run(605)) - Starting Socket Reader #1 for port 60192
          2015-11-09 19:26:41,401 INFO  ipc.Server (Server.java:run(827)) - IPC Server Responder: starting
          2015-11-09 19:26:41,401 INFO  ipc.Server (Server.java:run(674)) - IPC Server listener on 60192: starting
          2015-11-09 19:26:41,402 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(158)) - JournalNode metrics system started (again)
          2015-11-09 19:26:41,404 INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1687)) - Starting Web-server for journal at: http://localhost:0
          2015-11-09 19:26:41,404 INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.journal is not defined
          2015-11-09 19:26:41,404 INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(700)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
          2015-11-09 19:26:41,405 INFO  http.HttpServer2 (HttpServer2.java:addFilter(678)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context journal
          2015-11-09 19:26:41,405 INFO  http.HttpServer2 (HttpServer2.java:addFilter(685)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
          2015-11-09 19:26:41,405 INFO  http.HttpServer2 (HttpServer2.java:addFilter(685)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
          2015-11-09 19:26:41,405 INFO  http.HttpServer2 (HttpServer2.java:openListeners(888)) - Jetty bound to port 43021
          2015-11-09 19:26:41,417 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43021
          2015-11-09 19:26:41,418 INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(53)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
          2015-11-09 19:26:41,418 INFO  ipc.Server (Server.java:run(605)) - Starting Socket Reader #1 for port 43930
          2015-11-09 19:26:41,420 INFO  ipc.Server (Server.java:run(827)) - IPC Server Responder: starting
          2015-11-09 19:26:41,420 INFO  ipc.Server (Server.java:run(674)) - IPC Server listener on 43930: starting
          2015-11-09 19:26:41,422 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:<init>(442)) - starting cluster: numNameNodes=2, numDataNodes=0
          Formatting using clusterid: testClusterID
          2015-11-09 19:26:41,424 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(800)) - No KeyProvider found.
          2015-11-09 19:26:41,425 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(812)) - fsLock is fair:true
          2015-11-09 19:26:41,425 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(237)) - dfs.block.invalidate.limit=1000
          2015-11-09 19:26:41,425 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(243)) - dfs.namenode.datanode.registration.ip-hostname-check=true
          2015-11-09 19:26:41,426 INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(71)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
          2015-11-09 19:26:41,426 INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(76)) - The block deletion will start around 2015 Nov 09 19:26:41
          2015-11-09 19:26:41,426 INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map BlocksMap
          2015-11-09 19:26:41,426 INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
          2015-11-09 19:26:41,426 INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 2.0% max memory 3.6 GB = 72.8 MB
          2015-11-09 19:26:41,427 INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^23 = 8388608 entries
          2015-11-09 19:26:41,437 INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(366)) - dfs.block.access.token.enable=false
          2015-11-09 19:26:41,438 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(351)) - defaultReplication         = 0
          2015-11-09 19:26:41,438 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(352)) - maxReplication             = 512
          2015-11-09 19:26:41,438 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(353)) - minReplication             = 1
          2015-11-09 19:26:41,438 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(354)) - maxReplicationStreams      = 2
          2015-11-09 19:26:41,438 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(355)) - shouldCheckForEnoughRacks  = false
          2015-11-09 19:26:41,439 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(356)) - replicationRecheckInterval = 3000
          2015-11-09 19:26:41,439 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(357)) - encryptDataTransfer        = false
          2015-11-09 19:26:41,439 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(358)) - maxNumBlocksToLog          = 1000
          2015-11-09 19:26:41,439 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(837)) - fsOwner             = jenkins (auth:SIMPLE)
          2015-11-09 19:26:41,439 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(838)) - supergroup          = supergroup
          2015-11-09 19:26:41,439 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(839)) - isPermissionEnabled = true
          2015-11-09 19:26:41,440 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(848)) - Determined nameservice ID: ns1
          2015-11-09 19:26:41,440 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(850)) - HA Enabled: true
          2015-11-09 19:26:41,440 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(887)) - Append Enabled: true
          2015-11-09 19:26:41,440 INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map INodeMap
          2015-11-09 19:26:41,441 INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
          2015-11-09 19:26:41,441 INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 1.0% max memory 3.6 GB = 36.4 MB
          2015-11-09 19:26:41,441 INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^22 = 4194304 entries
          2015-11-09 19:26:41,443 INFO  namenode.NameNode (FSDirectory.java:<init>(234)) - Caching file names occuring more than 10 times
          2015-11-09 19:26:41,443 INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map cachedBlocks
          2015-11-09 19:26:41,444 INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
          2015-11-09 19:26:41,444 INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 0.25% max memory 3.6 GB = 9.1 MB
          2015-11-09 19:26:41,444 INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^20 = 1048576 entries
          2015-11-09 19:26:41,445 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(5771)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
          2015-11-09 19:26:41,445 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(5772)) - dfs.namenode.safemode.min.datanodes = 0
          2015-11-09 19:26:41,445 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(5773)) - dfs.namenode.safemode.extension     = 0
          2015-11-09 19:26:41,445 INFO  metrics.TopMetrics (TopMetrics.java:logConf(65)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10
          2015-11-09 19:26:41,445 INFO  metrics.TopMetrics (TopMetrics.java:logConf(67)) - NNTop conf: dfs.namenode.top.num.users = 10
          2015-11-09 19:26:41,446 INFO  metrics.TopMetrics (TopMetrics.java:logConf(69)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
          2015-11-09 19:26:41,446 INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(991)) - Retry cache on namenode is enabled
          2015-11-09 19:26:41,446 INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(999)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
          2015-11-09 19:26:41,446 INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map NameNodeRetryCache
          2015-11-09 19:26:41,446 INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
          2015-11-09 19:26:41,447 INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 0.029999999329447746% max memory 3.6 GB = 1.1 MB
          2015-11-09 19:26:41,447 INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^17 = 131072 entries
          2015-11-09 19:26:41,448 INFO  namenode.NNConf (NNConf.java:<init>(62)) - ACLs enabled? false
          2015-11-09 19:26:41,448 INFO  namenode.NNConf (NNConf.java:<init>(66)) - XAttrs enabled? true
          2015-11-09 19:26:41,448 INFO  namenode.NNConf (NNConf.java:<init>(74)) - Maximum size of an xattr: 16384
          2015-11-09 19:26:41,466 WARN  namenode.NameNode (NameNode.java:format(992)) - Encountered exception during format: 
          org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
          127.0.0.1:42901: End of File Exception between local host is: "172.26.21.176"; destination host is: "localhost":42901; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
          	at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
          	at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
          	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
          	at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:899)
          	at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171)
          	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:986)
          	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:342)
          	at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:173)
          	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:969)
          	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:807)
          	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:467)
          	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:426)
          	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:104)
          	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:40)
          	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster$Builder.build(MiniQJMHACluster.java:69)
          	at org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.setUpHaCluster(TestDFSAdminWithHA.java:84)
          	at org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testMetaSave(TestDFSAdminWithHA.java:197)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:606)
          	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
          	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
          	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
          	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
          	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
          2015-11-09 19:26:41,467 INFO  server.JournalNode (JournalNode.java:getOrCreateJournal(89)) - Initializing journal in directory /data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/ns1
          2015-11-09 19:26:41,467 INFO  server.JournalNode (JournalNode.java:getOrCreateJournal(89)) - Initializing journal in directory /data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/ns1
          2015-11-09 19:26:41,467 WARN  common.Storage (Storage.java:analyzeStorage(477)) - Storage directory /data/jenkins/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/ns1 does not exist
          2015-11-09 19:26:41,468 WARN  common.Storage (Storage.java:analyzeStorage(477)) - Storage directory /data/jenkins/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/ns1 does not exist
          2015-11-09 19:26:41,468 ERROR hdfs.MiniDFSCluster (MiniDFSCluster.java:initMiniDFSCluster(812)) - IOE creating namenodes. Permissions dump:
          path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data': 
          	absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data
          	permissions: ----
          path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs': 
          	absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs
          	permissions: drwx
          path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data': 
          	absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data
          	permissions: drwx
          path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test': 
          	absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test
          	permissions: drwx
          path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target': 
          	absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target
          	permissions: drwx
          path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs': 
          	absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs
          	permissions: drwx
          path '/data/jenkins/workspace/hadoop-hdfs-project': 
          	absolute:/data/jenkins/workspace/hadoop-hdfs-project
          	permissions: drwx
          path '/data/jenkins/workspace': 
          	absolute:/data/jenkins/workspace
          	permissions: drwx
          path '/data/jenkins/workspace': 
          	absolute:/data/jenkins/workspace
          	permissions: drwx
          path '/data/jenkins': 
          	absolute:/data/jenkins
          	permissions: drwx
          path '/data': 
          	absolute:/data
          	permissions: dr-x
          path '/': 
          	absolute:/
          	permissions: dr-x
          
          2015-11-09 19:26:41,468 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1708)) - Shutting down the Mini HDFS Cluster
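
          One way a test setup could guard against this transient "JNs are not ready" failure (not necessarily what the attached patches do) is to retry the MiniQJMHACluster build. A minimal sketch, with an illustrative helper name and retry count assumed:

          import java.io.IOException;
          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster;

          // Illustrative sketch: retry the mini HA cluster build when the JN quorum
          // pre-check fails with a transient IOException.
          public class RetryingHaClusterSetup {
            static MiniQJMHACluster buildWithRetries(Configuration conf, int attempts)
                throws Exception {
              IOException last = null;
              for (int i = 0; i < attempts; i++) {
                try {
                  return new MiniQJMHACluster.Builder(conf).build();
                } catch (IOException e) {
                  // e.g. the QuorumException above (an IOException):
                  // "Unable to check if JNs are ready for formatting"
                  last = e;
                  Thread.sleep(1000); // give the JournalNodes a moment before retrying
                }
              }
              throw last; // attempts is assumed to be >= 1
            }
          }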
          
          Xiao Chen added a comment -

          Another occurrence that appears to have the same cause:
          Error Message

          Unable to check if JNs are ready for formatting. 1 exceptions thrown:
          127.0.0.1:47894: End of File Exception between local host is: "172.26.21.235"; destination host is: "localhost":47894; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
          

          Stacktrace

          org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
          127.0.0.1:47894: End of File Exception between local host is: "172.26.21.235"; destination host is: "localhost":47894; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
          	at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
          	at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
          	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
          	at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:900)
          	at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171)
          	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1021)
          	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:347)
          	at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:173)
          	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:974)
          	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:812)
          	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:472)
          	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:431)
          	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:104)
          	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:40)
          	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster$Builder.build(MiniQJMHACluster.java:69)
          	at org.apache.hadoop.hdfs.TestRollingUpgrade.testQuery(TestRollingUpgrade.java:453)
          

          Standard Output

          2015-11-16 15:21:19,177 INFO  qjournal.MiniJournalCluster (MiniJournalCluster.java:<init>(87)) - Starting MiniJournalCluster with 3 journal nodes
          2015-11-16 15:21:19,194 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(158)) - JournalNode metrics system started (again)
          2015-11-16 15:21:19,195 INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1697)) - Starting Web-server for journal at: http://localhost:0
          2015-11-16 15:21:19,196 INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(282)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
          2015-11-16 15:21:19,197 INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.journal is not defined
          2015-11-16 15:21:19,198 INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(752)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
          2015-11-16 15:21:19,198 INFO  http.HttpServer2 (HttpServer2.java:addFilter(730)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context journal
          2015-11-16 15:21:19,199 INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
          2015-11-16 15:21:19,199 INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
          2015-11-16 15:21:19,200 INFO  http.HttpServer2 (HttpServer2.java:openListeners(940)) - Jetty bound to port 54447
          2015-11-16 15:21:19,219 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:54447
          2015-11-16 15:21:19,220 INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(56)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
          2015-11-16 15:21:19,220 INFO  ipc.Server (Server.java:run(616)) - Starting Socket Reader #1 for port 49879
          2015-11-16 15:21:19,224 INFO  ipc.Server (Server.java:run(839)) - IPC Server Responder: starting
          2015-11-16 15:21:19,224 INFO  ipc.Server (Server.java:run(686)) - IPC Server listener on 49879: starting
          2015-11-16 15:21:19,239 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(158)) - JournalNode metrics system started (again)
          2015-11-16 15:21:19,241 INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1697)) - Starting Web-server for journal at: http://localhost:0
          2015-11-16 15:21:19,242 INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(282)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
          2015-11-16 15:21:19,243 INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.journal is not defined
          2015-11-16 15:21:19,243 INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(752)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
          2015-11-16 15:21:19,244 INFO  http.HttpServer2 (HttpServer2.java:addFilter(730)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context journal
          2015-11-16 15:21:19,244 INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
          2015-11-16 15:21:19,244 INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
          2015-11-16 15:21:19,245 INFO  http.HttpServer2 (HttpServer2.java:openListeners(940)) - Jetty bound to port 41815
          2015-11-16 15:21:19,262 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41815
          2015-11-16 15:21:19,262 INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(56)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
          2015-11-16 15:21:19,263 INFO  ipc.Server (Server.java:run(616)) - Starting Socket Reader #1 for port 47894
          2015-11-16 15:21:19,267 INFO  ipc.Server (Server.java:run(839)) - IPC Server Responder: starting
          2015-11-16 15:21:19,267 INFO  ipc.Server (Server.java:run(686)) - IPC Server listener on 47894: starting
          2015-11-16 15:21:19,288 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(158)) - JournalNode metrics system started (again)
          2015-11-16 15:21:19,290 INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1697)) - Starting Web-server for journal at: http://localhost:0
          2015-11-16 15:21:19,291 INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(282)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
          2015-11-16 15:21:19,291 INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.journal is not defined
          2015-11-16 15:21:19,292 INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(752)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
          2015-11-16 15:21:19,293 INFO  http.HttpServer2 (HttpServer2.java:addFilter(730)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context journal
          2015-11-16 15:21:19,293 INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
          2015-11-16 15:21:19,293 INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
          2015-11-16 15:21:19,294 INFO  http.HttpServer2 (HttpServer2.java:openListeners(940)) - Jetty bound to port 53577
          2015-11-16 15:21:19,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:53577
          2015-11-16 15:21:19,310 INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(56)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
          2015-11-16 15:21:19,311 INFO  ipc.Server (Server.java:run(616)) - Starting Socket Reader #1 for port 40415
          2015-11-16 15:21:19,314 INFO  ipc.Server (Server.java:run(839)) - IPC Server Responder: starting
          2015-11-16 15:21:19,314 INFO  ipc.Server (Server.java:run(686)) - IPC Server listener on 40415: starting
          2015-11-16 15:21:19,327 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:<init>(447)) - starting cluster: numNameNodes=2, numDataNodes=0
          Formatting using clusterid: testClusterID
          2015-11-16 15:21:19,332 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(800)) - No KeyProvider found.
          2015-11-16 15:21:19,333 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(812)) - fsLock is fair:true
          2015-11-16 15:21:19,333 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(237)) - dfs.block.invalidate.limit=1000
          2015-11-16 15:21:19,334 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(243)) - dfs.namenode.datanode.registration.ip-hostname-check=true
          2015-11-16 15:21:19,334 INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(71)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
          2015-11-16 15:21:19,334 INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(76)) - The block deletion will start around 2015 Nov 16 15:21:19
          2015-11-16 15:21:19,335 INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map BlocksMap
          2015-11-16 15:21:19,335 INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
          2015-11-16 15:21:19,335 INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 2.0% max memory 3.6 GB = 72.8 MB
          2015-11-16 15:21:19,335 INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^23 = 8388608 entries
          2015-11-16 15:21:19,368 INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(369)) - dfs.block.access.token.enable=false
          2015-11-16 15:21:19,368 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(354)) - defaultReplication         = 0
          2015-11-16 15:21:19,368 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(355)) - maxReplication             = 512
          2015-11-16 15:21:19,368 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(356)) - minReplication             = 1
          2015-11-16 15:21:19,369 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(357)) - maxReplicationStreams      = 2
          2015-11-16 15:21:19,369 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(358)) - shouldCheckForEnoughRacks  = false
          2015-11-16 15:21:19,369 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(359)) - replicationRecheckInterval = 3000
          2015-11-16 15:21:19,369 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(360)) - encryptDataTransfer        = false
          2015-11-16 15:21:19,369 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(361)) - maxNumBlocksToLog          = 1000
          2015-11-16 15:21:19,370 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(837)) - fsOwner             = jenkins (auth:SIMPLE)
          2015-11-16 15:21:19,370 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(838)) - supergroup          = supergroup
          2015-11-16 15:21:19,370 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(839)) - isPermissionEnabled = true
          2015-11-16 15:21:19,370 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(848)) - Determined nameservice ID: ns1
          2015-11-16 15:21:19,371 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(850)) - HA Enabled: true
          2015-11-16 15:21:19,371 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(887)) - Append Enabled: true
          2015-11-16 15:21:19,371 INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map INodeMap
          2015-11-16 15:21:19,371 INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
          2015-11-16 15:21:19,372 INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 1.0% max memory 3.6 GB = 36.4 MB
          2015-11-16 15:21:19,372 INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^22 = 4194304 entries
          2015-11-16 15:21:19,388 INFO  namenode.NameNode (FSDirectory.java:<init>(238)) - Caching file names occuring more than 10 times
          2015-11-16 15:21:19,389 INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map cachedBlocks
          2015-11-16 15:21:19,389 INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
          2015-11-16 15:21:19,389 INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 0.25% max memory 3.6 GB = 9.1 MB
          2015-11-16 15:21:19,389 INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^20 = 1048576 entries
          2015-11-16 15:21:19,394 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(5776)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
          2015-11-16 15:21:19,394 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(5777)) - dfs.namenode.safemode.min.datanodes = 0
          2015-11-16 15:21:19,394 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(5778)) - dfs.namenode.safemode.extension     = 0
          2015-11-16 15:21:19,394 INFO  metrics.TopMetrics (TopMetrics.java:logConf(65)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10
          2015-11-16 15:21:19,395 INFO  metrics.TopMetrics (TopMetrics.java:logConf(67)) - NNTop conf: dfs.namenode.top.num.users = 10
          2015-11-16 15:21:19,395 INFO  metrics.TopMetrics (TopMetrics.java:logConf(69)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
          2015-11-16 15:21:19,395 INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(991)) - Retry cache on namenode is enabled
          2015-11-16 15:21:19,395 INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(999)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
          2015-11-16 15:21:19,395 INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map NameNodeRetryCache
          2015-11-16 15:21:19,396 INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
          2015-11-16 15:21:19,396 INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 0.029999999329447746% max memory 3.6 GB = 1.1 MB
          2015-11-16 15:21:19,396 INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^17 = 131072 entries
          2015-11-16 15:21:19,398 INFO  namenode.NNConf (NNConf.java:<init>(62)) - ACLs enabled? false
          2015-11-16 15:21:19,398 INFO  namenode.NNConf (NNConf.java:<init>(66)) - XAttrs enabled? true
          2015-11-16 15:21:19,399 INFO  namenode.NNConf (NNConf.java:<init>(74)) - Maximum size of an xattr: 16384
          2015-11-16 15:21:19,821 WARN  namenode.NameNode (NameNode.java:format(1027)) - Encountered exception during format: 
          org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
          127.0.0.1:47894: End of File Exception between local host is: "172.26.21.235"; destination host is: "localhost":47894; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
          	at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
          	at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
          	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
          	at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:900)
          	at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171)
          	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1021)
          	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:347)
          	at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:173)
          	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:974)
          	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:812)
          	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:472)
          	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:431)
          	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:104)
          	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:40)
          	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster$Builder.build(MiniQJMHACluster.java:69)
          	at org.apache.hadoop.hdfs.TestRollingUpgrade.testQuery(TestRollingUpgrade.java:453)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:606)
          	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
          	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
          	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
          	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
          	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
          2015-11-16 15:21:19,822 INFO  server.JournalNode (JournalNode.java:getOrCreateJournal(92)) - Initializing journal in directory /data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/ns1
          2015-11-16 15:21:19,823 WARN  common.Storage (Storage.java:analyzeStorage(477)) - Storage directory /data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/ns1 does not exist
          2015-11-16 15:21:19,823 INFO  server.JournalNode (JournalNode.java:getOrCreateJournal(92)) - Initializing journal in directory /data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/ns1
          2015-11-16 15:21:19,823 ERROR hdfs.MiniDFSCluster (MiniDFSCluster.java:initMiniDFSCluster(817)) - IOE creating namenodes. Permissions dump:
          path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data': 
          	absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data
          	permissions: ----
          path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs': 
          	absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs
          	permissions: drwx
          path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data': 
          	absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data
          	permissions: drwx
          path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test': 
          	absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test
          	permissions: drwx
          path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target': 
          	absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target
          	permissions: drwx
          path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs': 
          	absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs
          	permissions: drwx
          path '/data/jenkins/workspace/hadoop-hdfs-project': 
          	absolute:/data/jenkins/workspace/hadoop-hdfs-project
          	permissions: drwx
          path '/data/jenkins/workspace': 
          	absolute:/data/jenkins/workspace
          	permissions: drwx
          path '/data/jenkins/workspace': 
          	absolute:/data/jenkins/workspace
          	permissions: drwx
          path '/data/jenkins': 
          	absolute:/data/jenkins
          	permissions: drwx
          path '/data': 
          	absolute:/data
          	permissions: dr-x
          path '/': 
          	absolute:/
          	permissions: dr-x
          
          2015-11-16 15:21:19,823 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1713)) - Shutting down the Mini HDFS Cluster
          2015-11-16 15:21:19,823 WARN  common.Storage (Storage.java:analyzeStorage(477)) - Storage directory /data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/ns1 does not exist
          
          Hide
          xiaochen Xiao Chen added a comment -

It seems to me that the root cause is that there is no wait-for-active mechanism for journal nodes in MiniJournalCluster.
          Attached patch 1 to add the waitActive method.
          Also, the failures are very hard to reproduce, since the RPC race has to be timed very exactly. I'm not sure how to reproduce them programmatically yet.
          Any feedback/comments are highly appreciated.
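
          To make the proposal concrete, here is a minimal sketch of what such a wait-for-active helper could look like. This is illustration only, not the attached patch: GenericTestUtils.waitFor is the existing test utility, while nodes and probeJournalNodeRpc() are placeholders for however MiniJournalCluster actually tracks its JournalNodes and for any cheap RPC probe against one of them.

          	import java.io.IOException;
          	import java.util.concurrent.TimeoutException;
          	import com.google.common.base.Supplier;
          	import org.apache.hadoop.test.GenericTestUtils;

          	/** Block until every JournalNode's IPC server answers a trivial call. */
          	public void waitActive() throws IOException {
          	  for (int i = 0; i < nodes.length; i++) {        // 'nodes': the JNs held by the mini cluster
          	    final int index = i;
          	    try {
          	      GenericTestUtils.waitFor(new Supplier<Boolean>() {
          	        @Override
          	        public Boolean get() {
          	          try {
          	            probeJournalNodeRpc(index);           // placeholder: any cheap RPC against JN #index
          	            return true;                          // the IPC server answered, so this JN is up
          	          } catch (IOException ioe) {
          	            return false;                         // expected while the IPC server is still starting
          	          }
          	        }
          	      }, 50, 3000);                               // poll every 50 ms, give up after 3 s
          	    } catch (TimeoutException e) {
          	      throw new IOException("Timed out waiting for JournalNode " + index, e);
          	    } catch (InterruptedException e) {
          	      Thread.currentThread().interrupt();
          	      throw new IOException("Interrupted waiting for JournalNode " + index, e);
          	    }
          	  }
          	}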

          Hide
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 7s docker + precommit patch detected.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          +1 mvninstall 8m 27s trunk passed
          +1 compile 0m 47s trunk passed with JDK v1.8.0_66
          +1 compile 0m 44s trunk passed with JDK v1.7.0_85
          +1 checkstyle 0m 17s trunk passed
          +1 mvnsite 0m 57s trunk passed
          +1 mvneclipse 0m 14s trunk passed
          +1 findbugs 2m 9s trunk passed
          +1 javadoc 1m 16s trunk passed with JDK v1.8.0_66
          +1 javadoc 2m 2s trunk passed with JDK v1.7.0_85
          +1 mvninstall 0m 54s the patch passed
          +1 compile 0m 46s the patch passed with JDK v1.8.0_66
          +1 javac 0m 46s the patch passed
          +1 compile 0m 46s the patch passed with JDK v1.7.0_85
          +1 javac 0m 46s the patch passed
          +1 checkstyle 0m 17s the patch passed
          +1 mvnsite 0m 54s the patch passed
          +1 mvneclipse 0m 15s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 2m 16s the patch passed
          +1 javadoc 1m 9s the patch passed with JDK v1.8.0_66
          +1 javadoc 1m 58s the patch passed with JDK v1.7.0_85
          -1 unit 62m 31s hadoop-hdfs in the patch failed with JDK v1.8.0_66.
          -1 unit 59m 0s hadoop-hdfs in the patch failed with JDK v1.7.0_85.
          -1 asflicense 0m 22s Patch generated 58 ASF License warnings.
          151m 16s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110
            hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
            hadoop.hdfs.server.datanode.TestBlockScanner
            hadoop.hdfs.server.mover.TestMover
          JDK v1.7.0_85 Failed junit tests hadoop.hdfs.server.namenode.TestDecommissioningStatus
            hadoop.hdfs.server.datanode.TestBlockScanner



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:date2015-11-19
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12773345/HDFS-9429.001.patch
          JIRA Issue HDFS-9429
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 57b28dba2a5d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-3f4279a/precommit/personality/hadoop.sh
          git revision trunk / 747455a
          findbugs v3.0.0
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13567/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13567/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_85.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/13567/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13567/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_85.txt
          JDK v1.7.0_85 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/13567/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HDFS-Build/13567/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Max memory used 77MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/13567/console

          This message was automatically generated.

          Hide
          xiaochen Xiao Chen added a comment -

Attached a patch that reproduces the failure with the same stack trace but a different type of exception. As mentioned above, the timing needs to be very exact to reproduce the EOFException itself. I think this reproduce patch is sufficient to prove that a waitActive-ish method is needed.

          The reproduced failure is caused by the JN RPC server starting later than the RPC call shown in the stack trace. Un-commenting the journalCluster.waitActive(); call in MiniQJMHACluster#MiniQJMHACluster at line 101 makes the unit test pass, thanks to the introduced waitActive.
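
          To show where this fits, below is a hedged sketch of the call site under discussion; it paraphrases the MiniQJMHACluster construction flow rather than quoting it, and assumes the MiniJournalCluster.Builder methods shown (the snippet would sit inside a setup method that declares throws IOException).

          	import org.apache.hadoop.conf.Configuration;
          	import org.apache.hadoop.hdfs.qjournal.MiniJournalCluster;

          	Configuration conf = new Configuration();
          	MiniJournalCluster journalCluster = new MiniJournalCluster.Builder(conf)
          	    .format(true)
          	    .build();
          	journalCluster.waitActive();   // proposed: block until every JN IPC server is up
          	// ... only then build the MiniDFSCluster; its NameNode format() issues the
          	// hasSomeData() quorum call that currently races the JN IPC servers ...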

          Below is a sample failure stack trace using the attached patch.

          java.io.IOException: Timed out waiting for response from loggers
          	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:229)
          	at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:916)
          	at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:180)
          	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1067)
          	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:370)
          	at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:228)
          	at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1005)
          	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
          	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
          	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
          	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
          	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:111)
          	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:37)
          	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster$Builder.build(MiniQJMHACluster.java:65)
          	at org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.setUpHaCluster(TestDFSAdminWithHA.java:84)
          	at org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testMetaSave(TestDFSAdminWithHA.java:205)
          

          Please kindly review patch 1. Thanks.

          Hide
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          0 patch 0m 17s The patch file was not named according to hadoop's naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute for instructions.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          +1 mvninstall 8m 21s trunk passed
          +1 compile 0m 49s trunk passed with JDK v1.8.0_66
          +1 compile 0m 52s trunk passed with JDK v1.7.0_85
          +1 checkstyle 0m 19s trunk passed
          +1 mvnsite 1m 5s trunk passed
          +1 mvneclipse 0m 15s trunk passed
          +1 findbugs 2m 13s trunk passed
          +1 javadoc 1m 22s trunk passed with JDK v1.8.0_66
          +1 javadoc 2m 4s trunk passed with JDK v1.7.0_85
          +1 mvninstall 0m 59s the patch passed
          +1 compile 0m 56s the patch passed with JDK v1.8.0_66
          +1 javac 0m 56s the patch passed
          +1 compile 0m 51s the patch passed with JDK v1.7.0_85
          +1 javac 0m 51s the patch passed
          +1 checkstyle 0m 20s the patch passed
          +1 mvnsite 1m 6s the patch passed
          +1 mvneclipse 0m 17s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 2m 31s the patch passed
          +1 javadoc 1m 20s the patch passed with JDK v1.8.0_66
          +1 javadoc 2m 13s the patch passed with JDK v1.7.0_85
          -1 unit 72m 13s hadoop-hdfs in the patch failed with JDK v1.8.0_66.
          -1 unit 59m 44s hadoop-hdfs in the patch failed with JDK v1.7.0_85.
          -1 asflicense 0m 22s Patch generated 58 ASF License warnings.
          163m 33s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.hdfs.TestDatanodeDeath
            hadoop.hdfs.qjournal.TestNNWithQJM
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140
            hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes
            hadoop.hdfs.server.namenode.ha.TestHAAppend
            hadoop.hdfs.TestReplication
            hadoop.hdfs.TestRollingUpgrade
            hadoop.hdfs.qjournal.TestSecureNNWithQJM
            hadoop.hdfs.TestRollingUpgradeDowngrade
            hadoop.hdfs.server.namenode.TestBackupNode
            hadoop.security.TestPermission
            hadoop.hdfs.TestMissingBlocksAlert
            hadoop.hdfs.tools.TestDFSAdminWithHA
            hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
            hadoop.hdfs.TestRollingUpgradeRollback
            hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
            hadoop.hdfs.TestDFSStorageStateRecovery
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
            hadoop.hdfs.TestDFSInotifyEventInputStream
            hadoop.hdfs.server.datanode.TestBlockScanner
            hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM
          JDK v1.8.0_66 Timed out junit tests org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
          JDK v1.7.0_85 Failed junit tests hadoop.hdfs.qjournal.TestNNWithQJM
            hadoop.hdfs.TestRollingUpgrade
            hadoop.hdfs.qjournal.TestSecureNNWithQJM
            hadoop.hdfs.TestRollingUpgradeDowngrade
            hadoop.security.TestPermission
            hadoop.hdfs.tools.TestDFSAdminWithHA
            hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
            hadoop.hdfs.TestRollingUpgradeRollback
            hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
            hadoop.hdfs.TestDFSInotifyEventInputStream
            hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM
          JDK v1.7.0_85 Timed out junit tests org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12773760/HDFS-9429.reproduce
          JIRA Issue HDFS-9429
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 0e2d689e9be1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / aac260f
          findbugs v3.0.0
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13606/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13606/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_85.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/13606/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13606/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_85.txt
          JDK v1.7.0_85 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/13606/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HDFS-Build/13606/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Max memory used 76MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/13606/console

          This message was automatically generated.

          Hide
          jojochuang Wei-Chiu Chuang added a comment -

Hi Xiao Chen, thanks for finding this issue and making the patch. I think you are on the right track with waiting for the QJM to start. Just one small issue: it is not a good idea to simply return false and swallow an exception; if there's an IOException, it's better to log it.

          Hide
          zhz Zhe Zhang added a comment -

Thanks Xiao for the work. MiniJournalCluster#waitActive LGTM. Regarding Wei-Chiu Chuang's question: I think the exception is expected if the IPC server is not up yet, so the logic of catching such an exception and returning false to waitFor (meaning it should keep waiting) looks OK.

          The only question I have is whether we should call waitActive in all places using MiniJournalCluster.
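
          One way to reconcile both review comments, sketched under the assumption that the check is the Supplier<Boolean> handed to GenericTestUtils.waitFor (LOG, index and probeJournalNodeRpc() are placeholders, not names from the patch): treat the IOException as the expected "not ready yet" signal, but log it at debug level instead of dropping it entirely.

          	new Supplier<Boolean>() {
          	  @Override
          	  public Boolean get() {
          	    try {
          	      probeJournalNodeRpc(index);   // placeholder for the JN readiness probe
          	      return true;
          	    } catch (IOException ioe) {
          	      // Expected while the JN IPC server is still starting; returning false
          	      // tells waitFor to keep polling rather than fail immediately.
          	      LOG.debug("JournalNode " + index + " not ready yet, retrying", ioe);
          	      return false;
          	    }
          	  }
          	}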

          Hide
          xiaochen Xiao Chen added a comment -

          Thanks Wei-Chiu and Zhe for the review.
Yes, the IOE in waitActive is expected, and I fear printing it would cause noise. Thanks Zhe for explaining; I added a comment to that effect in the code.

          The only question I have is whether we should call waitActive in all places using MiniJournalCluster.

I think the answer is yes. Although the test failures are only due to MiniQJMHACluster, it's safer to call waitActive before proceeding, so I added the call to every place that uses MiniJournalCluster.
          Arguably we could call waitActive within MiniJournalCluster#Builder#build, but that would mix the builder pattern with additional logic, so I didn't do this. I guess this is also why MiniDFSCluster#Builder#build doesn't do it that way.

Patch 2 is attached to reflect the above.
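
          As a usage sketch (assuming the Builder methods and shutdown() shown; not taken from the patch), each test that constructs a MiniJournalCluster would add the explicit call right after build():

          	MiniJournalCluster jCluster = new MiniJournalCluster.Builder(conf)
          	    .numJournalNodes(3)
          	    .build();
          	jCluster.waitActive();   // explicit, since Builder#build() intentionally stays free of extra logic
          	try {
          	  // ... exercise the journal nodes ...
          	} finally {
          	  jCluster.shutdown();
          	}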

          Hide
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 11 new or modified test files.
          +1 mvninstall 8m 48s trunk passed
          +1 compile 0m 49s trunk passed with JDK v1.8.0_66
          +1 compile 0m 49s trunk passed with JDK v1.7.0_85
          +1 checkstyle 0m 19s trunk passed
          +1 mvnsite 1m 0s trunk passed
          +1 mvneclipse 0m 15s trunk passed
          +1 findbugs 2m 14s trunk passed
          +1 javadoc 1m 17s trunk passed with JDK v1.8.0_66
          +1 javadoc 2m 5s trunk passed with JDK v1.7.0_85
          +1 mvninstall 0m 57s the patch passed
          +1 compile 0m 48s the patch passed with JDK v1.8.0_66
          +1 javac 0m 48s the patch passed
          +1 compile 0m 49s the patch passed with JDK v1.7.0_85
          +1 javac 0m 49s the patch passed
          +1 checkstyle 0m 19s the patch passed
          +1 mvnsite 1m 1s the patch passed
          +1 mvneclipse 0m 15s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 2m 27s the patch passed
          +1 javadoc 1m 17s the patch passed with JDK v1.8.0_66
          +1 javadoc 2m 5s the patch passed with JDK v1.7.0_85
          -1 unit 62m 4s hadoop-hdfs in the patch failed with JDK v1.8.0_66.
          -1 unit 59m 55s hadoop-hdfs in the patch failed with JDK v1.7.0_85.
          -1 asflicense 0m 22s Patch generated 56 ASF License warnings.
          152m 55s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.security.TestPermission
            hadoop.hdfs.server.datanode.TestBlockScanner
            hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes
            hadoop.hdfs.shortcircuit.TestShortCircuitCache
          JDK v1.7.0_85 Failed junit tests hadoop.security.TestPermission
            hadoop.hdfs.TestDistributedFileSystem
            hadoop.hdfs.server.namenode.ha.TestRequestHedgingProxyProvider
            hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12774290/HDFS-9429.002.patch
          JIRA Issue HDFS-9429
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux d5ce59be524e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 177975e
          findbugs v3.0.0
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13658/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13658/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_85.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/13658/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13658/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_85.txt
          JDK v1.7.0_85 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/13658/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HDFS-Build/13658/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Max memory used 75MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/13658/console

          This message was automatically generated.

          Hide
          xiaochen Xiao Chen added a comment -

          The test failures are unrelated: TestPermission is being addressed in another JIRA, and the others passed locally.

          Hide
          cmccabe Colin P. McCabe added a comment -

          This looks good. Just one comment, though: can we decrease the 100 ms polling timeout in MiniJournalCluster#waitActive to 50 ms?

          Hide
          xiaochen Xiao Chen added a comment -

          Thanks Colin for the comment! I'd love to make improvements, but could you explain your concern here? Is this to make waitActive finish sooner and reduce the overall wait time?

          Hide
          xiaochen Xiao Chen added a comment -

          Attached patch 3 to do what Colin suggested.
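          For context, a rough sketch of the kind of polling loop being tuned here, assuming Hadoop's GenericTestUtils.waitFor helper (which re-checks a condition at a fixed interval until an overall timeout) and a hypothetical isJournalNodeReady() probe standing in for whatever RPC check waitActive actually performs; patch 3 simply lowers the check interval from 100 ms to 50 ms:

          import com.google.common.base.Supplier;
          import org.apache.hadoop.test.GenericTestUtils;

          // Inside a test or helper method that declares "throws Exception":
          // re-check readiness every 50 ms, giving up after 10 seconds.
          GenericTestUtils.waitFor(new Supplier<Boolean>() {
            @Override
            public Boolean get() {
              // Hypothetical probe, e.g. an RPC call that succeeds once the JN is up.
              return isJournalNodeReady();
            }
          }, 50 /* check interval, ms */, 10000 /* overall timeout, ms */);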

          Hide
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 11 new or modified test files.
          +1 mvninstall 8m 14s trunk passed
          +1 compile 0m 45s trunk passed with JDK v1.8.0_66
          +1 compile 0m 45s trunk passed with JDK v1.7.0_85
          +1 checkstyle 0m 17s trunk passed
          +1 mvnsite 0m 56s trunk passed
          +1 mvneclipse 0m 15s trunk passed
          +1 findbugs 2m 6s trunk passed
          +1 javadoc 1m 11s trunk passed with JDK v1.8.0_66
          +1 javadoc 1m 53s trunk passed with JDK v1.7.0_85
          +1 mvninstall 0m 52s the patch passed
          +1 compile 0m 45s the patch passed with JDK v1.8.0_66
          +1 javac 0m 45s the patch passed
          +1 compile 0m 47s the patch passed with JDK v1.7.0_85
          +1 javac 0m 47s the patch passed
          +1 checkstyle 0m 17s the patch passed
          +1 mvnsite 0m 59s the patch passed
          +1 mvneclipse 0m 15s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 2m 18s the patch passed
          +1 javadoc 1m 12s the patch passed with JDK v1.8.0_66
          +1 javadoc 1m 56s the patch passed with JDK v1.7.0_85
          -1 unit 57m 48s hadoop-hdfs in the patch failed with JDK v1.8.0_66.
          -1 unit 53m 55s hadoop-hdfs in the patch failed with JDK v1.7.0_85.
          -1 asflicense 0m 19s Patch generated 58 ASF License warnings.
          140m 37s



          Reason Tests
          JDK v1.8.0_66 Failed junit tests hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots
          JDK v1.7.0_85 Failed junit tests hadoop.hdfs.server.namenode.ha.TestEditLogTailer



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12774916/HDFS-9429.003.patch
          JIRA Issue HDFS-9429
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux ef2f02a59f7c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 9b8e50b
          findbugs v3.0.0
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13693/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/13693/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_85.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/13693/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13693/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_85.txt
          JDK v1.7.0_85 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/13693/testReport/
          asflicense https://builds.apache.org/job/PreCommit-HDFS-Build/13693/artifact/patchprocess/patch-asflicense-problems.txt
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Max memory used 75MB
          Powered by Apache Yetus http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/13693/console

          This message was automatically generated.

          Hide
          xiaochen Xiao Chen added a comment -

          The test failures are unrelated. The TestUpdatePipelineWithSnapshots failure in MiniDFSCluster#waitClusterUp is interesting, but this patch didn't touch that class at all.

          Hide
          cmccabe Colin P. McCabe added a comment -

          +1. Thanks, Xiao Chen.

          Hide
          cmccabe Colin P. McCabe added a comment -

          Committed to 2.8. Thanks, Xiao Chen.

          Hide
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #8909 (See https://builds.apache.org/job/Hadoop-trunk-Commit/8909/)
          HDFS-9429. Tests in TestDFSAdminWithHA intermittently fail with (cmccabe: rev 53e3bf7e704c332fb119f55cb92520a51b644bfc)

          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQJMWithFaults.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/MiniJournalCluster.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestNNWithQJM.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestMiniJournalCluster.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgradeRollback.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestEpochsAreUnique.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestSecureNNWithQJM.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/MiniQJMHACluster.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgrade.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNodeMXBean.java
          Hide
          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #657 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/657/)
          HDFS-9429. Tests in TestDFSAdminWithHA intermittently fail with (cmccabe: rev 53e3bf7e704c332fb119f55cb92520a51b644bfc)

          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgradeRollback.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQJMWithFaults.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestSecureNNWithQJM.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestNNWithQJM.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestEpochsAreUnique.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/MiniQJMHACluster.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/MiniJournalCluster.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNodeMXBean.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestMiniJournalCluster.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgrade.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java
          Hide
          xiaochen Xiao Chen added a comment -

          Thanks Colin for the review and commit! Also thanks Wei-Chiu and Zhe for the review.

          Hide
          zhz Zhe Zhang added a comment -

          Colin P. McCabe Seems you forgot the branch-2.8 commit?

          Hide
          cmccabe Colin P. McCabe added a comment -

          Thanks for the reminder, Zhe Zhang. It should be on branch-2.8 now.


            People

            • Assignee:
              xiaochen Xiao Chen
              Reporter:
              xiaochen Xiao Chen
            • Votes:
              0
              Watchers:
              7
