Hadoop Map/Reduce
MAPREDUCE-1561

mapreduce patch tests hung with "java.lang.OutOfMemoryError: Java heap space"


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate

    Description

      http://hudson.zones.apache.org/hudson/view/Mapreduce/job/Mapreduce-Patch-h9.grid.sp2.yahoo.net/4/console

      Error from the console:

      [exec] [junit] 10/03/05 04:08:29 INFO datanode.DataNode: PacketResponder 2 for block blk_-3280111748864197295_19758 terminating
      [exec] [junit] 10/03/05 04:08:29 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:46067 is added to blk_-3280111748864197295_19758{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:46067|RBW], ReplicaUnderConstruction[127.0.0.1:37626|RBW], ReplicaUnderConstruction[127.0.0.1:48886|RBW]]} size 0
      [exec] [junit] 10/03/05 04:08:29 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /tmp/hadoop-hudson/mapred/system/job_20100304162726530_3751/job-info is closed by DFSClient_79157028
      [exec] [junit] 10/03/05 04:08:29 INFO mapred.JobTracker: Job job_20100304162726530_3751 added successfully for user 'hudson' to queue 'default'
      [exec] [junit] 10/03/05 04:08:29 INFO mapred.JobTracker: Initializing job_20100304162726530_3751
      [exec] [junit] 10/03/05 04:08:29 INFO mapred.JobInProgress: Initializing job_20100304162726530_3751
      [exec] [junit] 10/03/05 04:08:29 INFO mapreduce.Job: Running job: job_20100304162726530_3751
      [exec] [junit] 10/03/05 04:08:29 INFO jobhistory.JobHistory: SetupWriter, creating file file:/grid/0/hudson/hudson-slave/workspace/Mapreduce-Patch-h9.grid.sp2.yahoo.net/trunk/build/contrib/raid/test/logs/history/job_20100304162726530_3751_hudson
      [exec] [junit] 10/03/05 04:08:29 ERROR mapred.JobTracker: Job initialization failed:
      [exec] [junit] org.apache.avro.AvroRuntimeException: java.lang.NoSuchFieldException: _SCHEMA
      [exec] [junit] at org.apache.avro.specific.SpecificData.createSchema(SpecificData.java:50)
      [exec] [junit] at org.apache.avro.reflect.ReflectData.getSchema(ReflectData.java:210)
      [exec] [junit] at org.apache.avro.specific.SpecificDatumWriter.<init>(SpecificDatumWriter.java:28)
      [exec] [junit] at org.apache.hadoop.mapreduce.jobhistory.EventWriter.<init>(EventWriter.java:47)
      [exec] [junit] at org.apache.hadoop.mapreduce.jobhistory.JobHistory.setupEventWriter(JobHistory.java:252)
      [exec] [junit] at org.apache.hadoop.mapred.JobInProgress.logSubmissionToJobHistory(JobInProgress.java:710)
      [exec] [junit] at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:619)
      [exec] [junit] at org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:3256)
      [exec] [junit] at org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:79)
      [exec] [junit] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
      [exec] [junit] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
      [exec] [junit] at java.lang.Thread.run(Thread.java:619)
      [exec] [junit] Caused by: java.lang.NoSuchFieldException: _SCHEMA
      [exec] [junit] at java.lang.Class.getDeclaredField(Class.java:1882)
      [exec] [junit] at org.apache.avro.specific.SpecificData.createSchema(SpecificData.java:48)
      [exec] [junit] ... 11 more
      [exec] [junit]
      [exec] [junit] Exception in thread "pool-1-thread-3" java.lang.OutOfMemoryError: Java heap space
      [exec] [junit] at java.util.Arrays.copyOf(Arrays.java:2786)
      [exec] [junit] at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
      [exec] [junit] at java.io.PrintStream.write(PrintStream.java:430)
      [exec] [junit] at org.apache.tools.ant.util.TeeOutputStream.write(TeeOutputStream.java:81)
      [exec] [junit] at java.io.PrintStream.write(PrintStream.java:430)
      [exec] [junit] at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
      [exec] [junit] at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
      [exec] [junit] at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
      [exec] [junit] at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
      [exec] [junit] at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
      [exec] [junit] at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
      [exec] [junit] at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
      [exec] [junit] at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
      [exec] [junit] at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
      [exec] [junit] at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)10/03/05 04:08:36 INFO raid.RaidNode: Triggering Policy Filter RaidTest1 hdfs://localhost:44624/user/test/raidtest
      [exec] [junit] 10/03/05 04:08:39 INFO raid.RaidNode: Trigger thread continuing to run...
      [exec] [junit] Exception in thread "org.apache.hadoop.raid.RaidNode$TriggerMonitor@5ebac9" 10/03/05 04:08:44 INFO security.Groups: Returning cached groups for 'hudso10/03/05 04:08:47 INFO ipc.Server: IPC Server handler 8 on 44624, call getException in thread "IPC Server handler 8 on 44624" java.lang.OutOfMemoryError: Java heap space10/03/05 04:08:53 INFO mapreduce.Job: map 0% reduce 0%

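      The second failure is the OutOfMemoryError itself, raised on the output-capture path: the trace suggests the test's console output is teed by Ant's TeeOutputStream into an in-memory ByteArrayOutputStream, which Arrays.copyOf keeps growing until the heap is exhausted. A rough illustration of that growth pattern, capped at 64 MiB only so the demo terminates (the hung run had no such bound):

      import java.io.ByteArrayOutputStream;

      public class CapturedOutputGrowthDemo {
          public static void main(String[] args) {
              // Stand-in for the in-memory buffer that captured test output is
              // written into (the ByteArrayOutputStream in the trace).
              ByteArrayOutputStream captured = new ByteArrayOutputStream();
              byte[] logLine = new byte[1024];          // pretend 1 KiB per log line
              long written = 0;
              // Capped at 64 MiB so the demo terminates; in the reported run the
              // buffer kept doubling until the JVM threw
              // java.lang.OutOfMemoryError: Java heap space.
              while (written < 64L * 1024 * 1024) {
                  captured.write(logLine, 0, logLine.length);
                  written += logLine.length;
              }
              System.out.println("Buffered " + captured.size() + " bytes of test output on the heap");
          }
      }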

People

    • Assignee: Doug Cutting (cutting)
    • Reporter: Giridharan Kesavan (gkesavan)
    • Votes: 0
    • Watchers: 1
