
WHIRR-165: Hadoop integration tests fail due to WHIRR-160 changes

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.3.0
    • Component/s: service/hadoop
    • Labels: None

      Description

      The problem (described in WHIRR-164) is that keys are never stored as FilePayloads and the Hadoop proxy fails.
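      For illustration, a minimal sketch of the payload distinction at play, assuming jclouds' Payloads factory methods and Payload.getRawContent(); the class name and key path are placeholders:

          import java.io.File;

          import org.jclouds.io.Payload;
          import org.jclouds.io.Payloads;

          public class PayloadSketch {
            public static void main(String[] args) {
              // A key wrapped as a FilePayload: getRawContent() is the backing File.
              Payload fromFile = Payloads.newFilePayload(new File("/home/user/.ssh/id_rsa"));
              System.out.println(fromFile.getRawContent() instanceof File);   // true

              // A key held in memory as a String: getRawContent() is not a File,
              // which is the case the proxy failed on after WHIRR-160.
              Payload fromString = Payloads.newStringPayload("-----BEGIN RSA PRIVATE KEY-----\n...");
              System.out.println(fromString.getRawContent() instanceof File); // false
            }
          }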

        Activity

        Tom White added a comment -

        I've just committed this.

        I verified that with this patch the Hadoop integration test passes on the machine that it was failing on (the one with pickier SSH settings). The case where there is no default keypair is covered by WHIRR-164.

        Tom White added a comment -

        E.g. on one machine I get

        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        @         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        Permissions 0664 for '/tmp/hadoop2803372008213524011key' are too open.
        It is recommended that your private key files are NOT accessible by others.
        This private key will be ignored.
        bad permissions: ignore key: /tmp/hadoop2803372008213524011key
        Permission denied (publickey)
        

        This fix is just to make sure that a pre-existing private key is used. The permissions can be fixed in WHIRR-164.

        Show
        Tom White added a comment - E.g. on one machine I get @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: UNPROTECTED PRIVATE KEY FILE! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ Permissions 0664 for '/tmp/hadoop2803372008213524011key' are too open. It is recommended that your private key files are NOT accessible by others. This private key will be ignored. bad permissions: ignore key: /tmp/hadoop2803372008213524011key Permission denied (publickey) This fix is just to make sure that a pre-existing private key is used. The permissions can be fixed in WHIRR-164 .
        Hide
        Tom White added a comment -

        > HadoopProxy should not fail if the key is not stored as a FilePayload.

        The problem is that the permissions on the generated private key file (and enclosing directory) are not strict enough for some SSH clients.
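
        Not the committed fix, but as a sketch of what tightening those permissions could look like (deferred to WHIRR-164): owner-only access, the 0600 that ssh's strict-mode check expects, can be set with the java.io.File API. The helper name is hypothetical:

            import java.io.File;
            import java.io.IOException;

            public class KeyPermissions {
              // Hypothetical helper: make a generated identity file owner-only
              // (the equivalent of chmod 600), so ssh does not reject the key.
              static void restrictToOwner(File identity) throws IOException {
                boolean ok = identity.setReadable(false, false)  // clear read for everyone
                    && identity.setWritable(false, false)        // clear write for everyone
                    && identity.setReadable(true, true)          // restore read for owner
                    && identity.setWritable(true, true);         // restore write for owner
                if (!ok) {
                  throw new IOException("could not restrict permissions on " + identity);
                }
              }
            }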

        Andrei Savu added a comment -
            File identity;
            if (clusterSpec.getPrivateKey().getRawContent() instanceof File) {
              // Reuse the pre-existing private key file as-is.
              identity = File.class.cast(clusterSpec.getPrivateKey().getRawContent());
            } else {
              // Otherwise dump the in-memory key into a temporary file for ssh.
              identity = File.createTempFile("hadoop", "key");
              identity.deleteOnExit();
              Files.write(ByteStreams.toByteArray(clusterSpec.getPrivateKey().getInput()), identity);
            }
        

        HadoopProxy should not fail if the key is not stored as a FilePayload. It seems like it creates a temporary identity file when running ssh.
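HadoopProxy should not
        For context, the general pattern is roughly the following; this is a sketch only, and the ssh options, SOCKS port, and method name are placeholders rather than Whirr's actual command line:

            import java.io.File;
            import java.io.IOException;

            public class ProxySketch {
              // Illustrative only: start ssh with an explicit identity file, as the
              // proxy does. Host, port, and options here are placeholders.
              static Process startTunnel(File identity, String masterHost) throws IOException {
                return new ProcessBuilder(
                        "ssh", "-i", identity.getAbsolutePath(),
                        "-o", "StrictHostKeyChecking=no",
                        "-N", "-D", "6666",
                        masterHost)
                    .redirectErrorStream(true)
                    .start();
              }
            }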

        Andrei Savu added a comment -

        I'm planning to provide a patch for WHIRR-164 as soon as possible.

        Andrei Savu added a comment -

        I have been unable to replicate this issue. The Hadoop integration tests pass for me if I don't remove the key pair from ~/.ssh. If I remove the default key pair the integration tests fail with and without the patch.

        Tom White added a comment -

        This patch is the minimal fix so that the tests pass - the fuller fix for removing the dependency on the default keypair is covered by WHIRR-164.


    People

    • Assignee: Tom White
    • Reporter: Tom White
    • Votes: 0
    • Watchers: 0
