Hadoop Map/Reduce
MAPREDUCE-1672

Create test scenario for "distributed cache file behaviour, when dfs file is not modified"


Details

    • Type: Test
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Component/s: test

    Description

      This test scenario covers distributed cache file behaviour when the
      file is not modified between its use by two jobs. Once a job uses a
      distributed cache file, that file is localized under mapred.local.dir
      on the tasktracker. If the next job uses the same, unmodified file,
      it is not localized again. So, if two jobs run tasks on the same
      tasktracker, the distributed cache file should not be downloaded twice.

      This testcase should run a job with a distributed cache file. For each
      task, a handle to the corresponding tasktracker is obtained and checked
      for the presence of the distributed cache file, with the proper
      permissions, in the proper directory. When the job runs again and any
      of its tasks lands on a tasktracker that ran a task of the previous
      job, the file should not be uploaded again and the task should use the
      existing copy.
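      The reuse rule this test depends on can be sketched in plain Java.
      This is a hypothetical, self-contained model, not Hadoop's actual
      TaskTracker code: the class name, method names, and local-path layout
      are invented for illustration. The idea is that a tasktracker keys its
      localized copies on the pair (file URI, modification time), so an
      unmodified file is fetched from DFS only once no matter how many jobs
      reference it.

      ```java
      import java.util.HashMap;
      import java.util.Map;

      // Hypothetical sketch of distributed-cache localization reuse.
      // A real tasktracker's logic differs; this only models the rule
      // under test: same URI + same mtime => one download, one local copy.
      class DistCacheSketch {
          // Maps "uri::mtime" to a local path under mapred.local.dir.
          private final Map<String, String> localized = new HashMap<>();
          private int downloads = 0;

          // Returns the local path, fetching from DFS only if this
          // (uri, mtime) pair has not been localized before.
          String localize(String uri, long mtime) {
              String key = uri + "::" + mtime;
              if (!localized.containsKey(key)) {
                  downloads++;  // simulate the DFS fetch
                  localized.put(key, "mapred.local.dir/archive/" + downloads);
              }
              return localized.get(key);
          }

          int downloadCount() { return downloads; }

          public static void main(String[] args) {
              DistCacheSketch tt = new DistCacheSketch();
              // Job 1 and job 2 both use the same, unmodified cache file.
              String first = tt.localize("hdfs://nn/cache/data.txt", 1000L);
              String second = tt.localize("hdfs://nn/cache/data.txt", 1000L);
              // Same local copy, exactly one download.
              System.out.println(first.equals(second) && tt.downloadCount() == 1);
          }
      }
      ```

      A modified file (different mtime) would miss the cache key and be
      localized again, which is the complementary scenario to this test.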


            People

              Assignee: Iyappan Srinivasan
              Reporter: Iyappan Srinivasan
              Votes: 0
              Watchers: 1
