Hadoop Map/Reduce
MAPREDUCE-6835

WARN io.ReadaheadPool: Failed readahead on ifile


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.7.3
    • Fix Version/s: None
    • Component/s: None
    • Labels: None
    • Environment:
      OS: Ubuntu Desktop 16.04 LTS
      JVM: Oracle Java 8 update 111
      Hadoop: 2.7.3 in a pseudo-distributed environment
      Hardware: Virtual machine
      Processor: 2 cores
      RAM: 4 GB
      Swap: 1 GB

    • Tags: EBADF; MapReduce; BadFileDescriptor

    Description

      Whenever a MapReduce job is executed, a "Bad file descriptor" warning is displayed. This mostly happens while the Combiner/Reducer is running.
      The bug results in incorrect output from the reducers for which the warning is shown.
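
      The stack trace in the log below points at ReadaheadPool issuing posix_fadvise on the ifile asynchronously. My working guess (not verified) is that the ifile's descriptor is already closed by the time the queued readahead request runs, so the advise call fails with EBADF. The plain-Java sketch below is not Hadoop code; the temp file, sleep and thread pool are made up purely to illustrate that kind of close-before-use race:

      import java.io.File;
      import java.io.FileDescriptor;
      import java.io.FileInputStream;
      import java.io.FileOutputStream;
      import java.io.IOException;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.TimeUnit;

      // Not Hadoop code: a minimal illustration of an async worker being handed a
      // FileDescriptor whose owning stream is closed before the worker touches it.
      public class ReadaheadRaceSketch {
          public static void main(String[] args) throws Exception {
              File tmp = File.createTempFile("ifile-sketch", ".out");
              try (FileOutputStream out = new FileOutputStream(tmp)) {
                  out.write(new byte[4096]); // something to "read ahead"
              }

              FileInputStream in = new FileInputStream(tmp);
              FileDescriptor fd = in.getFD();

              ExecutorService pool = Executors.newSingleThreadExecutor();
              pool.submit(() -> {
                  try {
                      Thread.sleep(200);              // queued work runs late...
                      new FileInputStream(fd).read(); // ...and the descriptor is already closed
                  } catch (IOException e) {
                      // analogous to "WARN io.ReadaheadPool: Failed readahead on ifile"
                      System.out.println("async readahead failed: " + e);
                  } catch (InterruptedException ignored) {
                  }
              });

              in.close(); // the owner is done with the file and closes it right away

              pool.shutdown();
              pool.awaitTermination(5, TimeUnit.SECONDS);
              tmp.delete();
          }
      }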

      The following is the entire job execution log.
      17/01/24 14:34:25 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
      17/01/24 14:34:25 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
      17/01/24 14:34:25 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
      17/01/24 14:34:26 INFO input.FileInputFormat: Total input paths to process : 1
      17/01/24 14:34:26 INFO mapreduce.JobSubmitter: number of splits:5
      17/01/24 14:34:26 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1576831426_0001
      17/01/24 14:34:26 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
      17/01/24 14:34:26 INFO mapreduce.Job: Running job: job_local1576831426_0001
      17/01/24 14:34:26 INFO mapred.LocalJobRunner: OutputCommitter set in config null
      17/01/24 14:34:26 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
      17/01/24 14:34:26 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
      17/01/24 14:34:26 INFO mapred.LocalJobRunner: Waiting for map tasks
      17/01/24 14:34:26 INFO mapred.LocalJobRunner: Starting task: attempt_local1576831426_0001_m_000000_0
      17/01/24 14:34:27 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
      17/01/24 14:34:27 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
      17/01/24 14:34:27 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/krishna/flightinfo/input/2000.csv:0+134217728
      17/01/24 14:34:27 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
      17/01/24 14:34:27 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
      17/01/24 14:34:27 INFO mapred.MapTask: soft limit at 83886080
      17/01/24 14:34:27 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
      17/01/24 14:34:27 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
      17/01/24 14:34:27 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
      17/01/24 14:34:27 INFO mapreduce.Job: Job job_local1576831426_0001 running in uber mode : false
      17/01/24 14:34:27 INFO mapreduce.Job: map 0% reduce 0%
      17/01/24 14:34:33 INFO mapred.LocalJobRunner: map > map
      17/01/24 14:34:33 INFO mapreduce.Job: map 9% reduce 0%
      17/01/24 14:34:35 INFO mapred.LocalJobRunner: map > map
      17/01/24 14:34:35 INFO mapred.MapTask: Starting flush of map output
      17/01/24 14:34:35 INFO mapred.MapTask: Spilling map output
      17/01/24 14:34:35 INFO mapred.MapTask: bufstart = 0; bufend = 17437459; bufvoid = 104857600
      17/01/24 14:34:35 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 20849028(83396112); length = 5365369/6553600
      17/01/24 14:34:35 INFO mapred.MapTask: Finished spill 0
      17/01/24 14:34:35 INFO mapred.Task: Task:attempt_local1576831426_0001_m_000000_0 is done. And is in the process of committing
      17/01/24 14:34:35 INFO mapred.LocalJobRunner: map
      17/01/24 14:34:35 INFO mapred.Task: Task 'attempt_local1576831426_0001_m_000000_0' done.
      17/01/24 14:34:35 INFO mapred.LocalJobRunner: Finishing task: attempt_local1576831426_0001_m_000000_0
      17/01/24 14:34:35 INFO mapred.LocalJobRunner: Starting task: attempt_local1576831426_0001_m_000001_0
      17/01/24 14:34:35 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
      17/01/24 14:34:35 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
      17/01/24 14:34:35 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/krishna/flightinfo/input/2000.csv:134217728+134217728
      17/01/24 14:34:36 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
      17/01/24 14:34:36 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
      17/01/24 14:34:36 INFO mapred.MapTask: soft limit at 83886080
      17/01/24 14:34:36 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
      17/01/24 14:34:36 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
      17/01/24 14:34:36 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
      17/01/24 14:34:36 INFO mapreduce.Job: map 100% reduce 0%
      17/01/24 14:34:41 INFO mapred.LocalJobRunner: map > map
      17/01/24 14:34:42 INFO mapred.LocalJobRunner: map > map
      17/01/24 14:34:42 INFO mapred.MapTask: Starting flush of map output
      17/01/24 14:34:42 INFO mapred.MapTask: Spilling map output
      17/01/24 14:34:42 INFO mapred.MapTask: bufstart = 0; bufend = 17438733; bufvoid = 104857600
      17/01/24 14:34:42 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 20848636(83394544); length = 5365761/6553600
      17/01/24 14:34:42 INFO mapreduce.Job: map 32% reduce 0%
      17/01/24 14:34:42 INFO mapred.MapTask: Finished spill 0
      17/01/24 14:34:42 INFO mapred.Task: Task:attempt_local1576831426_0001_m_000001_0 is done. And is in the process of committing
      17/01/24 14:34:42 INFO mapred.LocalJobRunner: map
      17/01/24 14:34:42 INFO mapred.Task: Task 'attempt_local1576831426_0001_m_000001_0' done.
      17/01/24 14:34:42 INFO mapred.LocalJobRunner: Finishing task: attempt_local1576831426_0001_m_000001_0
      17/01/24 14:34:42 INFO mapred.LocalJobRunner: Starting task: attempt_local1576831426_0001_m_000002_0
      17/01/24 14:34:42 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
      17/01/24 14:34:42 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
      17/01/24 14:34:42 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/krishna/flightinfo/input/2000.csv:268435456+134217728
      17/01/24 14:34:43 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
      17/01/24 14:34:43 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
      17/01/24 14:34:43 INFO mapred.MapTask: soft limit at 83886080
      17/01/24 14:34:43 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
      17/01/24 14:34:43 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
      17/01/24 14:34:43 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
      17/01/24 14:34:44 INFO mapreduce.Job: map 100% reduce 0%
      17/01/24 14:34:48 INFO mapred.LocalJobRunner: map > map
      17/01/24 14:34:49 INFO mapreduce.Job: map 52% reduce 0%
      17/01/24 14:34:49 INFO mapred.LocalJobRunner: map > map
      17/01/24 14:34:49 INFO mapred.MapTask: Starting flush of map output
      17/01/24 14:34:49 INFO mapred.MapTask: Spilling map output
      17/01/24 14:34:49 INFO mapred.MapTask: bufstart = 0; bufend = 17438941; bufvoid = 104857600
      17/01/24 14:34:49 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 20848572(83394288); length = 5365825/6553600
      17/01/24 14:34:50 INFO mapred.MapTask: Finished spill 0
      17/01/24 14:34:50 INFO mapred.Task: Task:attempt_local1576831426_0001_m_000002_0 is done. And is in the process of committing
      17/01/24 14:34:50 INFO mapred.LocalJobRunner: map
      17/01/24 14:34:50 INFO mapred.Task: Task 'attempt_local1576831426_0001_m_000002_0' done.
      17/01/24 14:34:50 INFO mapred.LocalJobRunner: Finishing task: attempt_local1576831426_0001_m_000002_0
      17/01/24 14:34:50 INFO mapred.LocalJobRunner: Starting task: attempt_local1576831426_0001_m_000003_0
      17/01/24 14:34:50 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
      17/01/24 14:34:50 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
      17/01/24 14:34:50 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/krishna/flightinfo/input/2000.csv:402653184+134217728
      17/01/24 14:34:50 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
      17/01/24 14:34:50 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
      17/01/24 14:34:50 INFO mapred.MapTask: soft limit at 83886080
      17/01/24 14:34:50 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
      17/01/24 14:34:50 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
      17/01/24 14:34:50 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
      17/01/24 14:34:51 INFO mapreduce.Job: map 100% reduce 0%
      17/01/24 14:34:56 INFO mapred.LocalJobRunner:
      17/01/24 14:34:56 INFO mapred.MapTask: Starting flush of map output
      17/01/24 14:34:56 INFO mapred.MapTask: Spilling map output
      17/01/24 14:34:56 INFO mapred.MapTask: bufstart = 0; bufend = 17284761; bufvoid = 104857600
      17/01/24 14:34:56 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 20896012(83584048); length = 5318385/6553600
      17/01/24 14:34:56 INFO mapred.LocalJobRunner: map > sort
      17/01/24 14:34:56 INFO mapred.MapTask: Finished spill 0
      17/01/24 14:34:56 INFO mapred.Task: Task:attempt_local1576831426_0001_m_000003_0 is done. And is in the process of committing
      17/01/24 14:34:56 INFO mapred.LocalJobRunner: map
      17/01/24 14:34:56 INFO mapred.Task: Task 'attempt_local1576831426_0001_m_000003_0' done.
      17/01/24 14:34:56 INFO mapred.LocalJobRunner: Finishing task: attempt_local1576831426_0001_m_000003_0
      17/01/24 14:34:56 INFO mapred.LocalJobRunner: Starting task: attempt_local1576831426_0001_m_000004_0
      17/01/24 14:34:56 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
      17/01/24 14:34:56 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
      17/01/24 14:34:56 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/krishna/flightinfo/input/2000.csv:536870912+33280701
      17/01/24 14:34:56 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
      17/01/24 14:34:56 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
      17/01/24 14:34:56 INFO mapred.MapTask: soft limit at 83886080
      17/01/24 14:34:56 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
      17/01/24 14:34:56 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
      17/01/24 14:34:56 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
      17/01/24 14:34:58 INFO mapred.LocalJobRunner:
      17/01/24 14:34:58 INFO mapred.MapTask: Starting flush of map output
      17/01/24 14:34:58 INFO mapred.MapTask: Spilling map output
      17/01/24 14:34:58 INFO mapred.MapTask: bufstart = 0; bufend = 4279717; bufvoid = 104857600
      17/01/24 14:34:58 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 24897564(99590256); length = 1316833/6553600
      17/01/24 14:34:59 INFO mapred.MapTask: Finished spill 0
      17/01/24 14:34:59 INFO mapred.Task: Task:attempt_local1576831426_0001_m_000004_0 is done. And is in the process of committing
      17/01/24 14:34:59 INFO mapred.LocalJobRunner: map
      17/01/24 14:34:59 INFO mapred.Task: Task 'attempt_local1576831426_0001_m_000004_0' done.
      17/01/24 14:34:59 INFO mapred.LocalJobRunner: Finishing task: attempt_local1576831426_0001_m_000004_0
      17/01/24 14:34:59 INFO mapred.LocalJobRunner: map task executor complete.
      17/01/24 14:34:59 INFO mapred.LocalJobRunner: Waiting for reduce tasks
      17/01/24 14:34:59 INFO mapred.LocalJobRunner: Starting task: attempt_local1576831426_0001_r_000000_0
      17/01/24 14:34:59 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
      17/01/24 14:34:59 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
      17/01/24 14:34:59 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@55bf46b
      17/01/24 14:34:59 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=334338464, maxSingleShuffleLimit=83584616, mergeThreshold=220663392, ioSortFactor=10, memToMemMergeOutputsThreshold=10
      17/01/24 14:34:59 INFO reduce.EventFetcher: attempt_local1576831426_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
      17/01/24 14:34:59 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1576831426_0001_m_000003_0 decomp: 17 len: 21 to MEMORY
      17/01/24 14:34:59 INFO reduce.InMemoryMapOutput: Read 17 bytes from map-output for attempt_local1576831426_0001_m_000003_0
      17/01/24 14:34:59 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 17, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->17
      17/01/24 14:34:59 WARN io.ReadaheadPool: Failed readahead on ifile
      EBADF: Bad file descriptor
      at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
      at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
      at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
      at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:206)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      at java.lang.Thread.run(Thread.java:745)
      17/01/24 14:34:59 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1576831426_0001_m_000002_0 decomp: 17 len: 21 to MEMORY
      17/01/24 14:34:59 INFO reduce.InMemoryMapOutput: Read 17 bytes from map-output for attempt_local1576831426_0001_m_000002_0
      17/01/24 14:34:59 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 17, inMemoryMapOutputs.size() -> 2, commitMemory -> 17, usedMemory ->34
      17/01/24 14:34:59 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1576831426_0001_m_000001_0 decomp: 17 len: 21 to MEMORY
      17/01/24 14:34:59 WARN io.ReadaheadPool: Failed readahead on ifile
      EBADF: Bad file descriptor
      at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
      at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
      at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
      at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:206)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      at java.lang.Thread.run(Thread.java:745)
      17/01/24 14:34:59 INFO reduce.InMemoryMapOutput: Read 17 bytes from map-output for attempt_local1576831426_0001_m_000001_0
      17/01/24 14:34:59 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 17, inMemoryMapOutputs.size() -> 3, commitMemory -> 34, usedMemory ->51
      17/01/24 14:34:59 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1576831426_0001_m_000004_0 decomp: 17 len: 21 to MEMORY
      17/01/24 14:34:59 INFO reduce.InMemoryMapOutput: Read 17 bytes from map-output for attempt_local1576831426_0001_m_000004_0
      17/01/24 14:34:59 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 17, inMemoryMapOutputs.size() -> 4, commitMemory -> 51, usedMemory ->68
      17/01/24 14:34:59 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1576831426_0001_m_000000_0 decomp: 17 len: 21 to MEMORY
      17/01/24 14:34:59 WARN io.ReadaheadPool: Failed readahead on ifile
      EBADF: Bad file descriptor
      at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
      at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
      at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
      at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:206)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      at java.lang.Thread.run(Thread.java:745)
      17/01/24 14:34:59 INFO reduce.InMemoryMapOutput: Read 17 bytes from map-output for attempt_local1576831426_0001_m_000000_0
      17/01/24 14:34:59 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 17, inMemoryMapOutputs.size() -> 5, commitMemory -> 68, usedMemory ->85
      17/01/24 14:34:59 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
      17/01/24 14:34:59 INFO mapred.LocalJobRunner: 5 / 5 copied.
      17/01/24 14:34:59 INFO reduce.MergeManagerImpl: finalMerge called with 5 in-memory map-outputs and 0 on-disk map-outputs
      17/01/24 14:34:59 INFO mapred.Merger: Merging 5 sorted segments
      17/01/24 14:34:59 INFO mapred.Merger: Down to the last merge-pass, with 5 segments left of total size: 50 bytes
      17/01/24 14:34:59 INFO reduce.MergeManagerImpl: Merged 5 segments, 85 bytes to disk to satisfy reduce memory limit
      17/01/24 14:34:59 INFO reduce.MergeManagerImpl: Merging 1 files, 81 bytes from disk
      17/01/24 14:34:59 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
      17/01/24 14:34:59 INFO mapred.Merger: Merging 1 sorted segments
      17/01/24 14:34:59 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 70 bytes
      17/01/24 14:34:59 INFO mapred.LocalJobRunner: 5 / 5 copied.
      17/01/24 14:34:59 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
      17/01/24 14:34:59 INFO mapred.Task: Task:attempt_local1576831426_0001_r_000000_0 is done. And is in the process of committing
      17/01/24 14:34:59 INFO mapred.LocalJobRunner: 5 / 5 copied.
      17/01/24 14:34:59 INFO mapred.Task: Task attempt_local1576831426_0001_r_000000_0 is allowed to commit now
      17/01/24 14:34:59 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1576831426_0001_r_000000_0' to hdfs://localhost:9000/user/krishna/flightinfo/flighttotaldistance/_temporary/0/task_local1576831426_0001_r_000000
      17/01/24 14:34:59 INFO mapred.LocalJobRunner: reduce > reduce
      17/01/24 14:34:59 INFO mapred.Task: Task 'attempt_local1576831426_0001_r_000000_0' done.
      17/01/24 14:34:59 INFO mapred.LocalJobRunner: Finishing task: attempt_local1576831426_0001_r_000000_0
      17/01/24 14:34:59 INFO mapred.LocalJobRunner: reduce task executor complete.
      17/01/24 14:35:00 INFO mapreduce.Job: map 100% reduce 100%
      17/01/24 14:35:00 INFO mapreduce.Job: Job job_local1576831426_0001 completed successfully
      17/01/24 14:35:00 INFO mapreduce.Job: Counters: 35
      File System Counters
      FILE: Number of bytes read=34276
      FILE: Number of bytes written=1980499
      FILE: Number of read operations=0
      FILE: Number of large read operations=0
      FILE: Number of write operations=0
      HDFS: Number of bytes read=2482554234
      HDFS: Number of bytes written=16
      HDFS: Number of read operations=61
      HDFS: Number of large read operations=0
      HDFS: Number of write operations=8
      Map-Reduce Framework
      Map input records=5683048
      Map output records=5683047
      Map output bytes=73879611
      Map output materialized bytes=105
      Input split bytes=625
      Combine input records=5683047
      Combine output records=5
      Reduce input groups=1
      Reduce shuffle bytes=105
      Reduce input records=5
      Reduce output records=1
      Spilled Records=10
      Shuffled Maps =5
      Failed Shuffles=0
      Merged Map outputs=5
      GC time elapsed (ms)=2068
      Total committed heap usage (bytes)=2224029696
      Shuffle Errors
      BAD_ID=0
      CONNECTION=0
      IO_ERROR=0
      WRONG_LENGTH=0
      WRONG_MAP=0
      WRONG_REDUCE=0
      File Input Format Counters
      Bytes Read=570167997
      File Output Format Counters
      Bytes Written=16
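
      A possible way to isolate the readahead path (I have not verified this): disable ifile readahead for the job and re-run, to see whether the warning and the wrong reducer output both disappear. The property name below is the one I find in mapred-default.xml, as far as I can tell; the driver class and job name are placeholders:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.mapreduce.Job;

      // Placeholder driver fragment: switch off ifile readahead before submitting
      // the job so the ReadaheadPool/posix_fadvise path is not exercised at all.
      public class FlightInfoNoReadaheadDriver {
          public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              // mapred-default.xml property (default true); false should skip the
              // fadvise calls behind the "Failed readahead on ifile" warning.
              conf.setBoolean("mapreduce.ifile.readahead", false);

              Job job = Job.getInstance(conf, "flight-total-distance-no-readahead");
              // ... set jar, mapper, combiner, reducer and input/output paths exactly
              //     as in the original job, then call job.waitForCompletion(true)
          }
      }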

              People

              Assignee: Unassigned
              Reporter: Krishna Kumar Nanjundaprasad (krishnakumarn)
              Votes: 0
              Watchers: 3
