Whirr / WHIRR-146

Changing the mapred.child.java.opts value does not change the heap size from the default one.

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.3.0
    • Component/s: None
    • Labels: None
    • Environment:

      Amazon EC2, Amazon Linux images.

      Description

      Even if I change the value of mapred.child.java.opts, the task is still started with -Xmx200m.
      Since mapred.child.java.opts and mapred.child.ulimit have been deprecated, we need to set mapred.map.child.java.opts and mapred.reduce.child.java.opts (and, respectively, mapred.map.child.ulimit and mapred.reduce.child.ulimit) for the settings to have any effect.
      Unfortunately, the /scripts/cdh/install and /scripts/apache/install scripts, which generate /etc/hadoop/conf.dist/hadoop-site.xml, are not synchronized with this deprecation; as a result we cannot use mappers or reducers that do not fit in a 200M heap.

      How to reproduce:
      1. Start a cluster on large instances (which use a 64-bit JVM) and run a simple distcp; you will see the child JVM crash.
      2. Or run a job whose mappers or reducers do not fit in a 200M heap; the child processes will fail with OutOfMemoryError.
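
      For reference, the replacement properties would need to appear in hadoop-site.xml roughly like this. The values shown are illustrative only, not taken from the attached patch:

      ```xml
      <!-- Illustrative hadoop-site.xml fragment: the deprecated
           mapred.child.java.opts / mapred.child.ulimit settings are replaced
           by separate map and reduce properties. Values are examples only. -->
      <property>
        <name>mapred.map.child.java.opts</name>
        <value>-Xmx550m</value>
      </property>
      <property>
        <name>mapred.reduce.child.java.opts</name>
        <value>-Xmx550m</value>
      </property>
      <property>
        <name>mapred.map.child.ulimit</name>
        <value>1126400</value>
      </property>
      <property>
        <name>mapred.reduce.child.ulimit</name>
        <value>1126400</value>
      </property>
      ```

      Without entries like these being generated by the install scripts, child tasks fall back to the default -Xmx200m.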

      1. whirr-146.patch
        2 kB
        Tibor Kiss
      2. WHIRR-146.patch
        3 kB
        Tom White

        Activity

        Tom White added a comment -

        OK, I reverted that part of the patch. Thanks.
        Tibor Kiss added a comment -

        I agree.
        Tom White added a comment -

        Looking at this more, the new properties aren't in Hadoop 0.20.2, so we should revert the part for apache/hadoop/post-configure. Tibor, do you agree?
        Tom White added a comment -

        I've just committed this. Thanks Tibor!
        Tibor Kiss added a comment -

        Thank you, Tom!
        Tom White added a comment -

        I regenerated the patch following WHIRR-87.
        Tom White added a comment -

        +1 looks good

        > In order to JUnit test it, probably we would need to write a job which runs in integration tests.

        Adding jobs to the benchmark suites that will be introduced in WHIRR-92 is probably the way to do this.

        > I'm not sure if we are changing only the install scripts which are also changeable when you would like to personalize the setup

        Changing the install scripts is not very user friendly at the moment. WHIRR-55 will make this easier.
        Tibor Kiss added a comment -

        Here is a patch which works for me.

        In order to JUnit test it, probably we would need to write a job which runs in integration tests.
        I'm not sure if we are changing only the install scripts which are also changeable when you would like to personalize the setup, is it really necessary to overload the integration tests at all?

          People

          • Assignee:
            Tibor Kiss
          • Reporter:
            Tibor Kiss
          • Votes: 0
          • Watchers: 1

            Dates

            • Created:
            • Updated:
            • Resolved:
