Hadoop Common / HADOOP-9891

CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException

    Details

      Description

      The instructions on how to start up a mini CLI cluster in CLIMiniCluster.md don't work; it looks like MiniYarnCluster isn't on the classpath.

      Attachments

        HADOOP-9891.patch (1 kB, Darrell Taylor)

        Issue Links

          Activity

          Steve Loughran (stevel@apache.org) added a comment:

          (this is on a clean linux box, no env variables for Hadoop set up other than JAVA_HOME)

          hadoop-2.1.1-SNAPSHOT$ bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.1.1-SNAPSHOT-tests.jar minicluster -rmport 8096 -jhsport 8097
          

          the JAR file exists

          ls -l share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.1.1-SNAPSHOT-tests.jar
          -rw-rw-r-- 1 stevel stevel 1429647 Aug 20 21:49 share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.1.1-SNAPSHOT-tests.jar
          
          

          but the cluster doesn't come out to play

          13/08/20 22:03:22 INFO mapreduce.MiniHadoopClusterManager: Updated 0 configuration settings from command line.
          13/08/20 22:03:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
          Formatting using clusterid: testClusterID
          13/08/20 22:03:22 INFO namenode.HostFileManager: read includes:
          HostSet(
          )
          13/08/20 22:03:22 INFO namenode.HostFileManager: read excludes:
          HostSet(
          )
          13/08/20 22:03:22 WARN conf.Configuration: hadoop.configured.node.mapping is deprecated. Instead, use net.topology.configured.node.mapping
          13/08/20 22:03:22 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
          13/08/20 22:03:22 INFO util.GSet: Computing capacity for map BlocksMap
          13/08/20 22:03:22 INFO util.GSet: VM type       = 32-bit
          13/08/20 22:03:22 INFO util.GSet: 2.0% max memory = 494.9 MB
          13/08/20 22:03:22 INFO util.GSet: capacity      = 2^21 = 2097152 entries
          13/08/20 22:03:22 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
          13/08/20 22:03:22 INFO blockmanagement.BlockManager: defaultReplication         = 1
          13/08/20 22:03:22 INFO blockmanagement.BlockManager: maxReplication             = 512
          13/08/20 22:03:22 INFO blockmanagement.BlockManager: minReplication             = 1
          13/08/20 22:03:22 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
          13/08/20 22:03:22 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
          13/08/20 22:03:22 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
          13/08/20 22:03:22 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
          13/08/20 22:03:23 INFO namenode.FSNamesystem: fsOwner             = stevel (auth:SIMPLE)
          13/08/20 22:03:23 INFO namenode.FSNamesystem: supergroup          = supergroup
          13/08/20 22:03:23 INFO namenode.FSNamesystem: isPermissionEnabled = true
          13/08/20 22:03:23 INFO namenode.FSNamesystem: HA Enabled: false
          13/08/20 22:03:23 INFO namenode.FSNamesystem: Append Enabled: true
          13/08/20 22:03:23 INFO util.GSet: Computing capacity for map INodeMap
          13/08/20 22:03:23 INFO util.GSet: VM type       = 32-bit
          13/08/20 22:03:23 INFO util.GSet: 1.0% max memory = 494.9 MB
          13/08/20 22:03:23 INFO util.GSet: capacity      = 2^20 = 1048576 entries
          13/08/20 22:03:23 INFO namenode.NameNode: Caching file names occuring more than 10 times
          13/08/20 22:03:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
          13/08/20 22:03:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
          13/08/20 22:03:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 0
          13/08/20 22:03:23 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
          13/08/20 22:03:23 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
          13/08/20 22:03:23 INFO util.GSet: Computing capacity for map Namenode Retry Cache
          13/08/20 22:03:23 INFO util.GSet: VM type       = 32-bit
          13/08/20 22:03:23 INFO util.GSet: 0.029999999329447746% max memory = 494.9 MB
          13/08/20 22:03:23 INFO util.GSet: capacity      = 2^15 = 32768 entries
          13/08/20 22:03:23 INFO common.Storage: Storage directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name1 has been successfully formatted.
          13/08/20 22:03:23 INFO common.Storage: Storage directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name2 has been successfully formatted.
          13/08/20 22:03:23 INFO namenode.FSImage: Saving image file /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name2/current/fsimage.ckpt_0000000000000000000 using no compression
          13/08/20 22:03:23 INFO namenode.FSImage: Saving image file /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name1/current/fsimage.ckpt_0000000000000000000 using no compression
          13/08/20 22:03:23 INFO namenode.FSImage: Image file /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name1/current/fsimage.ckpt_0000000000000000000 of size 198 bytes saved in 0 seconds.
          13/08/20 22:03:23 INFO namenode.FSImage: Image file /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name2/current/fsimage.ckpt_0000000000000000000 of size 198 bytes saved in 0 seconds.
          13/08/20 22:03:23 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
          13/08/20 22:03:23 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
          13/08/20 22:03:23 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
          13/08/20 22:03:23 INFO impl.MetricsSystemImpl: NameNode metrics system started
          13/08/20 22:03:23 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
          13/08/20 22:03:23 INFO http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
          13/08/20 22:03:23 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
          13/08/20 22:03:23 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
          13/08/20 22:03:23 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
          13/08/20 22:03:23 INFO http.HttpServer: dfs.webhdfs.enabled = false
          13/08/20 22:03:23 INFO http.HttpServer: Jetty bound to port 49811
          13/08/20 22:03:23 INFO mortbay.log: jetty-6.1.26
          13/08/20 22:03:23 INFO mortbay.log: Started SelectChannelConnector@localhost:49811
          13/08/20 22:03:23 INFO namenode.NameNode: Web-server up at: localhost:49811
          13/08/20 22:03:23 INFO namenode.HostFileManager: read includes:
          HostSet(
          )
          13/08/20 22:03:23 INFO namenode.HostFileManager: read excludes:
          HostSet(
          )
          13/08/20 22:03:23 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
          13/08/20 22:03:23 INFO util.GSet: Computing capacity for map BlocksMap
          13/08/20 22:03:23 INFO util.GSet: VM type       = 32-bit
          13/08/20 22:03:23 INFO util.GSet: 2.0% max memory = 494.9 MB
          13/08/20 22:03:23 INFO util.GSet: capacity      = 2^21 = 2097152 entries
          13/08/20 22:03:23 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
          13/08/20 22:03:23 INFO blockmanagement.BlockManager: defaultReplication         = 1
          13/08/20 22:03:23 INFO blockmanagement.BlockManager: maxReplication             = 512
          13/08/20 22:03:23 INFO blockmanagement.BlockManager: minReplication             = 1
          13/08/20 22:03:23 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
          13/08/20 22:03:23 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
          13/08/20 22:03:23 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
          13/08/20 22:03:23 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
          13/08/20 22:03:23 INFO namenode.FSNamesystem: fsOwner             = stevel (auth:SIMPLE)
          13/08/20 22:03:23 INFO namenode.FSNamesystem: supergroup          = supergroup
          13/08/20 22:03:23 INFO namenode.FSNamesystem: isPermissionEnabled = true
          13/08/20 22:03:23 INFO namenode.FSNamesystem: HA Enabled: false
          13/08/20 22:03:23 INFO namenode.FSNamesystem: Append Enabled: true
          13/08/20 22:03:23 INFO util.GSet: Computing capacity for map INodeMap
          13/08/20 22:03:23 INFO util.GSet: VM type       = 32-bit
          13/08/20 22:03:23 INFO util.GSet: 1.0% max memory = 494.9 MB
          13/08/20 22:03:23 INFO util.GSet: capacity      = 2^20 = 1048576 entries
          13/08/20 22:03:23 INFO namenode.NameNode: Caching file names occuring more than 10 times
          13/08/20 22:03:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
          13/08/20 22:03:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
          13/08/20 22:03:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 0
          13/08/20 22:03:23 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
          13/08/20 22:03:23 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
          13/08/20 22:03:23 INFO util.GSet: Computing capacity for map Namenode Retry Cache
          13/08/20 22:03:23 INFO util.GSet: VM type       = 32-bit
          13/08/20 22:03:23 INFO util.GSet: 0.029999999329447746% max memory = 494.9 MB
          13/08/20 22:03:23 INFO util.GSet: capacity      = 2^15 = 32768 entries
          13/08/20 22:03:23 INFO common.Storage: Lock on /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name1/in_use.lock acquired by nodename 13794@ubuntu
          13/08/20 22:03:23 INFO common.Storage: Lock on /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name2/in_use.lock acquired by nodename 13794@ubuntu
          13/08/20 22:03:23 INFO namenode.FileJournalManager: Recovering unfinalized segments in /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name1/current
          13/08/20 22:03:23 INFO namenode.FileJournalManager: Recovering unfinalized segments in /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name2/current
          13/08/20 22:03:23 INFO namenode.FSImage: No edit log streams selected.
          13/08/20 22:03:23 INFO namenode.FSImage: Loading image file /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name1/current/fsimage_0000000000000000000 using no compression
          13/08/20 22:03:23 INFO namenode.FSImage: Number of files = 1
          13/08/20 22:03:23 INFO namenode.FSImage: Number of files under construction = 0
          13/08/20 22:03:23 INFO namenode.FSImage: Image file /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name1/current/fsimage_0000000000000000000 of size 198 bytes loaded in 0 seconds.
          13/08/20 22:03:23 INFO namenode.FSImage: Loaded image for txid 0 from /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/name1/current/fsimage_0000000000000000000
          13/08/20 22:03:23 INFO namenode.FSEditLog: Starting log segment at 1
          13/08/20 22:03:23 INFO namenode.NameCache: initialized with 0 entries 0 lookups
          13/08/20 22:03:23 INFO namenode.FSNamesystem: Finished loading FSImage in 99 msecs
          13/08/20 22:03:23 INFO ipc.Server: Starting Socket Reader #1 for port 58332
          13/08/20 22:03:24 INFO namenode.FSNamesystem: Registered FSNamesystemState MBean
          13/08/20 22:03:24 INFO namenode.FSNamesystem: Number of blocks under construction: 0
          13/08/20 22:03:24 INFO namenode.FSNamesystem: Number of blocks under construction: 0
          13/08/20 22:03:24 INFO namenode.FSNamesystem: initializing replication queues
          13/08/20 22:03:24 INFO blockmanagement.BlockManager: Total number of blocks            = 0
          13/08/20 22:03:24 INFO blockmanagement.BlockManager: Number of invalid blocks          = 0
          13/08/20 22:03:24 INFO blockmanagement.BlockManager: Number of under-replicated blocks = 0
          13/08/20 22:03:24 INFO blockmanagement.BlockManager: Number of  over-replicated blocks = 0
          13/08/20 22:03:24 INFO blockmanagement.BlockManager: Number of blocks being written    = 0
          13/08/20 22:03:24 INFO hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 13 msec
          13/08/20 22:03:24 INFO hdfs.StateChange: STATE* Leaving safe mode after 0 secs
          13/08/20 22:03:24 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
          13/08/20 22:03:24 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
          13/08/20 22:03:24 INFO ipc.Server: IPC Server Responder: starting
          13/08/20 22:03:24 INFO ipc.Server: IPC Server listener on 58332: starting
          13/08/20 22:03:24 INFO namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:58332
          13/08/20 22:03:24 INFO namenode.FSNamesystem: Starting services required for active state
          13/08/20 22:03:24 INFO hdfs.MiniDFSCluster: Starting DataNode 0 with dfs.datanode.data.dir: file:/home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1,file:/home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2
          13/08/20 22:03:24 INFO impl.MetricsSystemImpl: DataNode metrics system started (again)
          13/08/20 22:03:24 INFO datanode.DataNode: Configured hostname is 127.0.0.1
          13/08/20 22:03:24 INFO datanode.DataNode: Opened streaming server at /127.0.0.1:47429
          13/08/20 22:03:24 INFO datanode.DataNode: Balancing bandwith is 1048576 bytes/s
          13/08/20 22:03:24 INFO http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
          13/08/20 22:03:24 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
          13/08/20 22:03:24 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
          13/08/20 22:03:24 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
          13/08/20 22:03:24 INFO datanode.DataNode: Opened info server at localhost:0
          13/08/20 22:03:24 INFO datanode.DataNode: dfs.webhdfs.enabled = false
          13/08/20 22:03:24 INFO http.HttpServer: Jetty bound to port 59754
          13/08/20 22:03:24 INFO mortbay.log: jetty-6.1.26
          13/08/20 22:03:24 INFO mortbay.log: Started SelectChannelConnector@localhost:59754
          13/08/20 22:03:24 INFO datanode.DataNode: Opened IPC server at /127.0.0.1:34353
          13/08/20 22:03:24 INFO ipc.Server: Starting Socket Reader #1 for port 34353
          13/08/20 22:03:24 INFO datanode.DataNode: Refresh request received for nameservices: null
          13/08/20 22:03:24 INFO datanode.DataNode: Starting BPOfferServices for nameservices: <default>
          13/08/20 22:03:24 INFO datanode.DataNode: Block pool <registering> (storage id unknown) service to localhost/127.0.0.1:58332 starting to offer service
          13/08/20 22:03:24 INFO ipc.Server: IPC Server Responder: starting
          13/08/20 22:03:24 INFO ipc.Server: IPC Server listener on 34353: starting
          13/08/20 22:03:24 INFO common.Storage: Lock on /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/in_use.lock acquired by nodename 13794@ubuntu
          13/08/20 22:03:24 INFO common.Storage: Storage directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1 is not formatted
          13/08/20 22:03:24 INFO common.Storage: Formatting ...
          13/08/20 22:03:24 INFO hdfs.MiniDFSCluster: Waiting for cluster to become active
          13/08/20 22:03:24 INFO common.Storage: Lock on /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/in_use.lock acquired by nodename 13794@ubuntu
          13/08/20 22:03:24 INFO common.Storage: Storage directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2 is not formatted
          13/08/20 22:03:24 INFO common.Storage: Formatting ...
          13/08/20 22:03:24 INFO common.Storage: Locking is disabled
          13/08/20 22:03:24 INFO common.Storage: Storage directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current/BP-604112716-192.168.1.132-1377032603159 is not formatted.
          13/08/20 22:03:24 INFO common.Storage: Formatting ...
          13/08/20 22:03:24 INFO common.Storage: Formatting block pool BP-604112716-192.168.1.132-1377032603159 directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current/BP-604112716-192.168.1.132-1377032603159/current
          13/08/20 22:03:24 INFO common.Storage: Locking is disabled
          13/08/20 22:03:24 INFO common.Storage: Storage directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current/BP-604112716-192.168.1.132-1377032603159 is not formatted.
          13/08/20 22:03:24 INFO common.Storage: Formatting ...
          13/08/20 22:03:24 INFO common.Storage: Formatting block pool BP-604112716-192.168.1.132-1377032603159 directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current/BP-604112716-192.168.1.132-1377032603159/current
          13/08/20 22:03:24 INFO datanode.DataNode: Setting up storage: nsid=355659070;bpid=BP-604112716-192.168.1.132-1377032603159;lv=-47;nsInfo=lv=-47;cid=testClusterID;nsid=355659070;c=0;bpid=BP-604112716-192.168.1.132-1377032603159
          13/08/20 22:03:24 INFO impl.FsDatasetImpl: Added volume - /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current
          13/08/20 22:03:24 INFO impl.FsDatasetImpl: Added volume - /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current
          13/08/20 22:03:24 INFO impl.FsDatasetImpl: Registered FSDatasetState MBean
          13/08/20 22:03:24 INFO datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1377035360956 with interval 21600000
          13/08/20 22:03:24 INFO impl.FsDatasetImpl: Adding block pool BP-604112716-192.168.1.132-1377032603159
          13/08/20 22:03:24 INFO impl.FsDatasetImpl: Scanning block pool BP-604112716-192.168.1.132-1377032603159 on volume /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current...
          13/08/20 22:03:24 INFO hdfs.MiniDFSCluster: Waiting for cluster to become active
          13/08/20 22:03:24 INFO impl.FsDatasetImpl: Scanning block pool BP-604112716-192.168.1.132-1377032603159 on volume /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current...
          13/08/20 22:03:24 INFO impl.FsDatasetImpl: Time taken to scan block pool BP-604112716-192.168.1.132-1377032603159 on /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current: 16ms
          13/08/20 22:03:24 INFO impl.FsDatasetImpl: Time taken to scan block pool BP-604112716-192.168.1.132-1377032603159 on /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current: 22ms
          13/08/20 22:03:24 INFO impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-604112716-192.168.1.132-1377032603159: 22ms
          13/08/20 22:03:24 INFO impl.FsDatasetImpl: Adding replicas to map for block pool BP-604112716-192.168.1.132-1377032603159 on volume /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current...
          13/08/20 22:03:24 INFO impl.FsDatasetImpl: Time to add replicas to map for block pool BP-604112716-192.168.1.132-1377032603159 on volume /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current: 0ms
          13/08/20 22:03:24 INFO impl.FsDatasetImpl: Adding replicas to map for block pool BP-604112716-192.168.1.132-1377032603159 on volume /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current...
          13/08/20 22:03:24 INFO impl.FsDatasetImpl: Time to add replicas to map for block pool BP-604112716-192.168.1.132-1377032603159 on volume /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current: 1ms
          13/08/20 22:03:24 INFO impl.FsDatasetImpl: Total time to add all replicas to map: 1ms
          13/08/20 22:03:24 INFO datanode.DataNode: Block pool BP-604112716-192.168.1.132-1377032603159 (storage id DS-1166679418-192.168.1.132-47429-1377032604876) service to localhost/127.0.0.1:58332 beginning handshake with NN
          13/08/20 22:03:24 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1, storageID=DS-1166679418-192.168.1.132-47429-1377032604876, infoPort=59754, ipcPort=34353, storageInfo=lv=-47;cid=testClusterID;nsid=355659070;c=0) storage DS-1166679418-192.168.1.132-47429-1377032604876
          13/08/20 22:03:25 INFO net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:47429
          13/08/20 22:03:25 INFO datanode.DataNode: Block pool Block pool BP-604112716-192.168.1.132-1377032603159 (storage id DS-1166679418-192.168.1.132-47429-1377032604876) service to localhost/127.0.0.1:58332 successfully registered with NN
          13/08/20 22:03:25 INFO datanode.DataNode: For namenode localhost/127.0.0.1:58332 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
          13/08/20 22:03:25 INFO datanode.DataNode: Namenode Block pool BP-604112716-192.168.1.132-1377032603159 (storage id DS-1166679418-192.168.1.132-47429-1377032604876) service to localhost/127.0.0.1:58332 trying to claim ACTIVE state with txid=1
          13/08/20 22:03:25 INFO datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-604112716-192.168.1.132-1377032603159 (storage id DS-1166679418-192.168.1.132-47429-1377032604876) service to localhost/127.0.0.1:58332
          13/08/20 22:03:25 INFO blockmanagement.BlockManager: BLOCK* processReport: Received first block report from 127.0.0.1:47429 after starting up or becoming active. Its block contents are no longer considered stale
          13/08/20 22:03:25 INFO BlockStateChange: BLOCK* processReport: from DatanodeRegistration(127.0.0.1, storageID=DS-1166679418-192.168.1.132-47429-1377032604876, infoPort=59754, ipcPort=34353, storageInfo=lv=-47;cid=testClusterID;nsid=355659070;c=0), blocks: 0, processing time: 4 msecs
          13/08/20 22:03:25 INFO datanode.DataNode: BlockReport of 0 blocks took 1 msec to generate and 9 msecs for RPC and NN processing
          13/08/20 22:03:25 INFO datanode.DataNode: sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@381a53
          13/08/20 22:03:25 INFO util.GSet: Computing capacity for map BlockMap
          13/08/20 22:03:25 INFO util.GSet: VM type       = 32-bit
          13/08/20 22:03:25 INFO util.GSet: 0.5% max memory = 494.9 MB
          13/08/20 22:03:25 INFO util.GSet: capacity      = 2^19 = 524288 entries
          13/08/20 22:03:25 INFO datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-604112716-192.168.1.132-1377032603159
          13/08/20 22:03:25 INFO datanode.DataBlockScanner: Added bpid=BP-604112716-192.168.1.132-1377032603159 to blockPoolScannerMap, new size=1
          13/08/20 22:03:25 INFO hdfs.MiniDFSCluster: Cluster is active
          13/08/20 22:03:25 INFO mapreduce.MiniHadoopClusterManager: Started MiniDFSCluster -- namenode on port 58332
          java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/server/MiniYARNCluster
          	at org.apache.hadoop.mapreduce.MiniHadoopClusterManager.start(MiniHadoopClusterManager.java:170)
          	at org.apache.hadoop.mapreduce.MiniHadoopClusterManager.run(MiniHadoopClusterManager.java:129)
          	at org.apache.hadoop.mapreduce.MiniHadoopClusterManager.main(MiniHadoopClusterManager.java:314)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:616)
          	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
          	at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
          	at org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:115)
          	at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:123)
          	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
          	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          	at java.lang.reflect.Method.invoke(Method.java:616)
          	at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
          Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.yarn.server.MiniYARNCluster
          	at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
          	at java.security.AccessController.doPrivileged(Native Method)
          	at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
          	at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
          	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
          	at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
          	... 16 more
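
          The NoClassDefFoundError suggests MiniYARNCluster, which ships in the hadoop-yarn-server-tests test JAR rather than the jobclient tests JAR, never makes it onto the classpath. A possible workaround, assuming that test JAR sits under share/hadoop/yarn/test in the tarball (the path and version below are illustrative, not confirmed from this report), is to add it via HADOOP_CLASSPATH before launching:

```shell
# Hypothetical workaround: put the YARN mini-cluster test JAR on the classpath.
# Path and version are illustrative; adjust to the actual tarball layout.
YARN_TEST_JAR=share/hadoop/yarn/test/hadoop-yarn-server-tests-2.1.1-SNAPSHOT-tests.jar

# HADOOP_CLASSPATH is prepended to the launcher's classpath for this one command.
HADOOP_CLASSPATH="$YARN_TEST_JAR" \
  bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.1.1-SNAPSHOT-tests.jar \
  minicluster -rmport 8096 -jhsport 8097
```

          If that works, the real fix is a docs change: CLIMiniCluster.md should mention the extra JAR instead of implying the jobclient tests JAR is self-sufficient.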
          
blockmanagement.BlockManager: Number of under-replicated blocks = 0 13/08/20 22:03:24 INFO blockmanagement.BlockManager: Number of over-replicated blocks = 0 13/08/20 22:03:24 INFO blockmanagement.BlockManager: Number of blocks being written = 0 13/08/20 22:03:24 INFO hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 13 msec 13/08/20 22:03:24 INFO hdfs.StateChange: STATE* Leaving safe mode after 0 secs 13/08/20 22:03:24 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes 13/08/20 22:03:24 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks 13/08/20 22:03:24 INFO ipc.Server: IPC Server Responder: starting 13/08/20 22:03:24 INFO ipc.Server: IPC Server listener on 58332: starting 13/08/20 22:03:24 INFO namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:58332 13/08/20 22:03:24 INFO namenode.FSNamesystem: Starting services required for active state 13/08/20 22:03:24 INFO hdfs.MiniDFSCluster: Starting DataNode 0 with dfs.datanode.data.dir: file:/home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1,file:/home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2 13/08/20 22:03:24 INFO impl.MetricsSystemImpl: DataNode metrics system started (again) 13/08/20 22:03:24 INFO datanode.DataNode: Configured hostname is 127.0.0.1 13/08/20 22:03:24 INFO datanode.DataNode: Opened streaming server at /127.0.0.1:47429 13/08/20 22:03:24 INFO datanode.DataNode: Balancing bandwith is 1048576 bytes/s 13/08/20 22:03:24 INFO http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter) 13/08/20 22:03:24 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode 13/08/20 22:03:24 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 13/08/20 
22:03:24 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 13/08/20 22:03:24 INFO datanode.DataNode: Opened info server at localhost:0 13/08/20 22:03:24 INFO datanode.DataNode: dfs.webhdfs.enabled = false 13/08/20 22:03:24 INFO http.HttpServer: Jetty bound to port 59754 13/08/20 22:03:24 INFO mortbay.log: jetty-6.1.26 13/08/20 22:03:24 INFO mortbay.log: Started SelectChannelConnector@localhost:59754 13/08/20 22:03:24 INFO datanode.DataNode: Opened IPC server at /127.0.0.1:34353 13/08/20 22:03:24 INFO ipc.Server: Starting Socket Reader #1 for port 34353 13/08/20 22:03:24 INFO datanode.DataNode: Refresh request received for nameservices: null 13/08/20 22:03:24 INFO datanode.DataNode: Starting BPOfferServices for nameservices: < default > 13/08/20 22:03:24 INFO datanode.DataNode: Block pool <registering> (storage id unknown) service to localhost/127.0.0.1:58332 starting to offer service 13/08/20 22:03:24 INFO ipc.Server: IPC Server Responder: starting 13/08/20 22:03:24 INFO ipc.Server: IPC Server listener on 34353: starting 13/08/20 22:03:24 INFO common.Storage: Lock on /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/in_use.lock acquired by nodename 13794@ubuntu 13/08/20 22:03:24 INFO common.Storage: Storage directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1 is not formatted 13/08/20 22:03:24 INFO common.Storage: Formatting ... 13/08/20 22:03:24 INFO hdfs.MiniDFSCluster: Waiting for cluster to become active 13/08/20 22:03:24 INFO common.Storage: Lock on /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/in_use.lock acquired by nodename 13794@ubuntu 13/08/20 22:03:24 INFO common.Storage: Storage directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2 is not formatted 13/08/20 22:03:24 INFO common.Storage: Formatting ... 
13/08/20 22:03:24 INFO common.Storage: Locking is disabled 13/08/20 22:03:24 INFO common.Storage: Storage directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current/BP-604112716-192.168.1.132-1377032603159 is not formatted. 13/08/20 22:03:24 INFO common.Storage: Formatting ... 13/08/20 22:03:24 INFO common.Storage: Formatting block pool BP-604112716-192.168.1.132-1377032603159 directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current/BP-604112716-192.168.1.132-1377032603159/current 13/08/20 22:03:24 INFO common.Storage: Locking is disabled 13/08/20 22:03:24 INFO common.Storage: Storage directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current/BP-604112716-192.168.1.132-1377032603159 is not formatted. 13/08/20 22:03:24 INFO common.Storage: Formatting ... 13/08/20 22:03:24 INFO common.Storage: Formatting block pool BP-604112716-192.168.1.132-1377032603159 directory /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current/BP-604112716-192.168.1.132-1377032603159/current 13/08/20 22:03:24 INFO datanode.DataNode: Setting up storage: nsid=355659070;bpid=BP-604112716-192.168.1.132-1377032603159;lv=-47;nsInfo=lv=-47;cid=testClusterID;nsid=355659070;c=0;bpid=BP-604112716-192.168.1.132-1377032603159 13/08/20 22:03:24 INFO impl.FsDatasetImpl: Added volume - /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current 13/08/20 22:03:24 INFO impl.FsDatasetImpl: Added volume - /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current 13/08/20 22:03:24 INFO impl.FsDatasetImpl: Registered FSDatasetState MBean 13/08/20 22:03:24 INFO datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1377035360956 with interval 21600000 13/08/20 22:03:24 INFO impl.FsDatasetImpl: Adding block pool BP-604112716-192.168.1.132-1377032603159 13/08/20 22:03:24 INFO impl.FsDatasetImpl: Scanning block pool BP-604112716-192.168.1.132-1377032603159 on volume 
/home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current... 13/08/20 22:03:24 INFO hdfs.MiniDFSCluster: Waiting for cluster to become active 13/08/20 22:03:24 INFO impl.FsDatasetImpl: Scanning block pool BP-604112716-192.168.1.132-1377032603159 on volume /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current... 13/08/20 22:03:24 INFO impl.FsDatasetImpl: Time taken to scan block pool BP-604112716-192.168.1.132-1377032603159 on /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current: 16ms 13/08/20 22:03:24 INFO impl.FsDatasetImpl: Time taken to scan block pool BP-604112716-192.168.1.132-1377032603159 on /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current: 22ms 13/08/20 22:03:24 INFO impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-604112716-192.168.1.132-1377032603159: 22ms 13/08/20 22:03:24 INFO impl.FsDatasetImpl: Adding replicas to map for block pool BP-604112716-192.168.1.132-1377032603159 on volume /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current... 13/08/20 22:03:24 INFO impl.FsDatasetImpl: Time to add replicas to map for block pool BP-604112716-192.168.1.132-1377032603159 on volume /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data1/current: 0ms 13/08/20 22:03:24 INFO impl.FsDatasetImpl: Adding replicas to map for block pool BP-604112716-192.168.1.132-1377032603159 on volume /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current... 
13/08/20 22:03:24 INFO impl.FsDatasetImpl: Time to add replicas to map for block pool BP-604112716-192.168.1.132-1377032603159 on volume /home/stevel/hadoop-2.1.1-SNAPSHOT/build/test/data/dfs/data/data2/current: 1ms 13/08/20 22:03:24 INFO impl.FsDatasetImpl: Total time to add all replicas to map: 1ms 13/08/20 22:03:24 INFO datanode.DataNode: Block pool BP-604112716-192.168.1.132-1377032603159 (storage id DS-1166679418-192.168.1.132-47429-1377032604876) service to localhost/127.0.0.1:58332 beginning handshake with NN 13/08/20 22:03:24 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1, storageID=DS-1166679418-192.168.1.132-47429-1377032604876, infoPort=59754, ipcPort=34353, storageInfo=lv=-47;cid=testClusterID;nsid=355659070;c=0) storage DS-1166679418-192.168.1.132-47429-1377032604876 13/08/20 22:03:25 INFO net.NetworkTopology: Adding a new node: / default -rack/127.0.0.1:47429 13/08/20 22:03:25 INFO datanode.DataNode: Block pool Block pool BP-604112716-192.168.1.132-1377032603159 (storage id DS-1166679418-192.168.1.132-47429-1377032604876) service to localhost/127.0.0.1:58332 successfully registered with NN 13/08/20 22:03:25 INFO datanode.DataNode: For namenode localhost/127.0.0.1:58332 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000 13/08/20 22:03:25 INFO datanode.DataNode: Namenode Block pool BP-604112716-192.168.1.132-1377032603159 (storage id DS-1166679418-192.168.1.132-47429-1377032604876) service to localhost/127.0.0.1:58332 trying to claim ACTIVE state with txid=1 13/08/20 22:03:25 INFO datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-604112716-192.168.1.132-1377032603159 (storage id DS-1166679418-192.168.1.132-47429-1377032604876) service to localhost/127.0.0.1:58332 13/08/20 22:03:25 INFO blockmanagement.BlockManager: BLOCK* processReport: Received first block report from 127.0.0.1:47429 after starting up or becoming active. 
Its block contents are no longer considered stale 13/08/20 22:03:25 INFO BlockStateChange: BLOCK* processReport: from DatanodeRegistration(127.0.0.1, storageID=DS-1166679418-192.168.1.132-47429-1377032604876, infoPort=59754, ipcPort=34353, storageInfo=lv=-47;cid=testClusterID;nsid=355659070;c=0), blocks: 0, processing time: 4 msecs 13/08/20 22:03:25 INFO datanode.DataNode: BlockReport of 0 blocks took 1 msec to generate and 9 msecs for RPC and NN processing 13/08/20 22:03:25 INFO datanode.DataNode: sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@381a53 13/08/20 22:03:25 INFO util.GSet: Computing capacity for map BlockMap 13/08/20 22:03:25 INFO util.GSet: VM type = 32-bit 13/08/20 22:03:25 INFO util.GSet: 0.5% max memory = 494.9 MB 13/08/20 22:03:25 INFO util.GSet: capacity = 2^19 = 524288 entries 13/08/20 22:03:25 INFO datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-604112716-192.168.1.132-1377032603159 13/08/20 22:03:25 INFO datanode.DataBlockScanner: Added bpid=BP-604112716-192.168.1.132-1377032603159 to blockPoolScannerMap, new size=1 13/08/20 22:03:25 INFO hdfs.MiniDFSCluster: Cluster is active 13/08/20 22:03:25 INFO mapreduce.MiniHadoopClusterManager: Started MiniDFSCluster -- namenode on port 58332 java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/server/MiniYARNCluster at org.apache.hadoop.mapreduce.MiniHadoopClusterManager.start(MiniHadoopClusterManager.java:170) at org.apache.hadoop.mapreduce.MiniHadoopClusterManager.run(MiniHadoopClusterManager.java:129) at org.apache.hadoop.mapreduce.MiniHadoopClusterManager.main(MiniHadoopClusterManager.java:314) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at 
org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72) at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144) at org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:115) at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:123) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.yarn.server.MiniYARNCluster at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at java.lang. ClassLoader .loadClass( ClassLoader .java:321) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) at java.lang. ClassLoader .loadClass( ClassLoader .java:266) ... 16 more
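The failure above is purely a classpath problem: MiniYARNCluster is packaged in the YARN server test jar, which the `hadoop jar` launcher does not pick up by default. A minimal sketch of a workaround, assuming a stock 2.1.1-SNAPSHOT tarball layout (the exact jar name and path may differ in your build):

```shell
# Assumed jar location -- verify against your distribution's share/ tree.
# Exporting it via HADOOP_CLASSPATH makes MiniYARNCluster visible to the launcher.
export HADOOP_CLASSPATH=share/hadoop/yarn/test/hadoop-yarn-server-tests-2.1.1-SNAPSHOT-tests.jar

bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.1.1-SNAPSHOT-tests.jar \
  minicluster -rmport 8096 -jhsport 8097
```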
          Gopal V added a comment -

          Just adding the YARN mini cluster jar to the classpath does not give an operational cluster; the AM code is missing from that jar and also has to be added to the classpath.
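One way to check this point is to scan the bundled jars for the two classes the minicluster needs. This is a hypothetical diagnostic, not part of the fix; the class names are the real ones (`MiniYARNCluster` from the trace above, `MRAppMaster` for the MR application master), but the `share/hadoop` layout is assumed:

```shell
# Report which bundled jars (if any) contain the mini cluster and the MR AM,
# i.e. the jars that would have to go on HADOOP_CLASSPATH.
for jar in $(find share/hadoop -name '*.jar' 2>/dev/null); do
  for cls in 'org/apache/hadoop/yarn/server/MiniYARNCluster.class' \
             'org/apache/hadoop/mapreduce/v2/app/MRAppMaster.class'; do
    if unzip -l "$jar" 2>/dev/null | grep -q "$cls"; then
      echo "$cls -> $jar"
    fi
  done
done
```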
          Hide
          d4rr3ll Darrell Taylor added a comment -

          I'll have a go a fixing this as I'm trying to use it. Would anybody be able to give me any pointers towards where I should be looking to get the missing class into the jar?

          Show
          d4rr3ll Darrell Taylor added a comment - I'll have a go a fixing this as I'm trying to use it. Would anybody be able to give me any pointers towards where I should be looking to get the missing class into the jar?
          Darrell Taylor added a comment -

          This seems to be the solution. If I can make it work I'll update the docs.
          Darrell Taylor added a comment -

          The above comment is about the related JIRA I just linked:

          https://issues.apache.org/jira/browse/YARN-683
          Hadoop QA added a comment -

          +1 overall

          Vote Subsystem Runtime Comment
          0 pre-patch 2m 48s Pre-patch trunk compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 whitespace 0m 0s The patch has no lines that end in whitespace.
          +1 release audit 0m 20s The applied patch does not increase the total number of release audit warnings.
          +1 site 2m 53s Site still builds.
              6m 12s

          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12727940/HADOOP-9891.patch
          Optional Tests site
          git revision trunk / 91b97c2
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/6173/console

          This message was automatically generated.
          Allen Wittenauer added a comment -

          +1, committed to trunk.

          Thanks!
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #7911 (See https://builds.apache.org/job/Hadoop-trunk-Commit/7911/)
          HADOOP-9891. CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException (Darrell Taylor via aw) (aw: rev 4d8fb8c19c04088cf8f8e9deecb571273adeaab5)

          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-common-project/hadoop-common/src/site/markdown/CLIMiniCluster.md.vm

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #211 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/211/)
          HADOOP-9891. CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException (Darrell Taylor via aw) (aw: rev 4d8fb8c19c04088cf8f8e9deecb571273adeaab5)

          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-common-project/hadoop-common/src/site/markdown/CLIMiniCluster.md.vm

          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Yarn-trunk #941 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/941/)
          HADOOP-9891. CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException (Darrell Taylor via aw) (aw: rev 4d8fb8c19c04088cf8f8e9deecb571273adeaab5)

          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-common-project/hadoop-common/src/site/markdown/CLIMiniCluster.md.vm

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #2139 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2139/)
          HADOOP-9891. CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException (Darrell Taylor via aw) (aw: rev 4d8fb8c19c04088cf8f8e9deecb571273adeaab5)

          • hadoop-common-project/hadoop-common/src/site/markdown/CLIMiniCluster.md.vm
          • hadoop-common-project/hadoop-common/CHANGES.txt

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #199 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/199/)
          HADOOP-9891. CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException (Darrell Taylor via aw) (aw: rev 4d8fb8c19c04088cf8f8e9deecb571273adeaab5)

          • hadoop-common-project/hadoop-common/src/site/markdown/CLIMiniCluster.md.vm
          • hadoop-common-project/hadoop-common/CHANGES.txt

          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #209 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/209/)
          HADOOP-9891. CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException (Darrell Taylor via aw) (aw: rev 4d8fb8c19c04088cf8f8e9deecb571273adeaab5)

          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-common-project/hadoop-common/src/site/markdown/CLIMiniCluster.md.vm

          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2157 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2157/)
          HADOOP-9891. CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException (Darrell Taylor via aw) (aw: rev 4d8fb8c19c04088cf8f8e9deecb571273adeaab5)

          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-common-project/hadoop-common/src/site/markdown/CLIMiniCluster.md.vm
          Gera Shegalov added a comment -

          Any objections to backporting this to branch-2 as well?
          Akira Ajisaka added a comment -

          +1 for backporting this to branch-2.
          Gera Shegalov added a comment -

          Thanks, Akira Ajisaka! Committed to branch-2!

            People

            • Assignee: Darrell Taylor
            • Reporter: Steve Loughran
            • Votes: 0
            • Watchers: 9