Ambari / AMBARI-8468

Value of fs.defaultFS predefined in GlusterFS stack doesn't make sense from GlusterFS perspective


Details

    • Type: Bug
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 1.6.1
    • Fix Version/s: None
    • Component/s: ambari-server, stacks
    • Environment: HDP 2.1 on RHEL 6 with the 2.1.GlusterFS stack.

    Description

      The default value of the fs.defaultFS property, as defined in core-site.xml of
      the 2.1.GlusterFS stack, is not valid from the GlusterFS perspective.

      From 2.1.GlusterFS/services/GLUSTERFS/configuration/core-site.xml:
        <property>
          <name>fs.defaultFS</name>
          <value>glusterfs:///localhost:8020</value>
        </property>
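
      To see why this value doesn't make sense: the triple slash means the URI has an
      empty authority, so "localhost:8020" ends up inside the path instead of being a
      host:port pair. A quick check (shown here with Python 3's urllib.parse purely as
      an illustration; it is not part of the stack) makes this visible:

        from urllib.parse import urlparse   # illustration only, not part of the stack

        u = urlparse("glusterfs:///localhost:8020")
        print(repr(u.scheme))   # 'glusterfs'
        print(repr(u.netloc))   # ''   <- empty authority, because of the "///"
        print(repr(u.path))     # '/localhost:8020'   <- host:port ends up in the path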
      

      Leaving the current default value in place may create problems for some use
      cases. Hive, for example, has a problem with it. Or see the following traceback
      from Ambari itself (the Oozie service check):

      Error: E0904 : E0904: Scheme [glusterfs] not supported in uri [glusterfs:///localhost:8020/user/ambari-qa/examples/apps/map-reduce]
      Invalid sub-command: Missing argument for option: info
      
      use 'help [sub-command]' for help details
      Invalid sub-command: Missing argument for option: info
      
      use 'help [sub-command]' for help details
      
      workflow_status=
      2014-11-14 14:11:51,400 - Error while executing command 'service_check':
      Traceback (most recent call last):
        File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 111, in execute
          method(env)
        File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/OOZIE/package/scripts/service_check.py", line 31, in service_check
          oozie_smoke_shell_file( smoke_test_file_name)
        File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/OOZIE/package/scripts/service_check.py", line 54, in oozie_smoke_shell_file
          logoutput = True
        File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
          self.env.run()
        File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
          self.run_action(resource, action)
        File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
          provider_action()
        File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 239, in action_run
          raise ex
      Fail: Execution of '/tmp/oozieSmoke2.sh redhat /etc/oozie/conf /etc/hadoop/conf ambari-qa False' returned 1. 14/11/14 14:11:23 INFO glusterfs.GlusterVolume: Initializing gluster volume..
      

      The obvious fix would be to remove the hostname and port from the default
      value, so that it becomes just glusterfs:///.
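
      For clarity, the property would then look like this (the same snippet as above,
      just without the host and port):

        <property>
          <name>fs.defaultFS</name>
          <value>glusterfs:///</value>
        </property>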

      The problem is that this can't be done without consequences: some pre-install
      scripts would break with that value, because we share/reuse code with HDFS
      there.
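
      Roughly speaking (this is an illustrative sketch, not the actual shared code),
      those helpers assume that fs.defaultFS always carries a host:port authority:

        # Illustrative sketch only -- not the real Ambari helper. It shows the kind
        # of assumption ("fs.defaultFS is always <scheme>://<host>:<port>") that breaks.
        def namenode_host_and_port(fs_default_fs):
            authority = fs_default_fs.split("://", 1)[1]   # "localhost:8020" for HDFS
            host, port = authority.split(":", 1)
            return host, int(port)

        # With a plain "glusterfs:///" there is no authority at all, so code written
        # like this blows up (ValueError) during the pre-install steps.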

      The current workaround is to manually change the value after installation,
      which is hardly a suitable solution in the long term.
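
      For the record, that manual change can be made in the Ambari web UI or,
      assuming the stock configs.sh helper shipped with ambari-server (host name,
      cluster name and credentials below are placeholders, and the exact usage may
      differ between Ambari versions), from the command line:

        /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
            set ambari.example.com MyCluster core-site "fs.defaultFS" "glusterfs:///"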

      To sum it up: in the current Ambari, we can't ship a valid default configuration
      in the GlusterFS stack without breaking the pre-install scripts at the same time.

      Since this may be useful for any HCFS (Hadoop Compatible File System), not just
      GlusterFS, I propose to change the code in Ambari so that any HCFS can have a
      valid configuration in the configuration templates without breaking either the
      given HCFS or the Ambari scripts themselves. That would make it possible to have
      a reasonable value of fs.defaultFS in the GlusterFS stack.
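
      One possible shape of such a change (a sketch of the idea only, not a patch;
      the names below are made up for illustration): the shared scripts would branch
      on the scheme of fs.defaultFS instead of assuming HDFS, so any HCFS value passes
      through untouched:

        # Sketch of the proposed idea, not actual Ambari code.
        def is_hcfs(fs_default_fs):
            # Treat anything that is not an hdfs:// URI as a Hadoop Compatible File System.
            return not fs_default_fs.startswith("hdfs://")

        def default_fs_for_scripts(fs_default_fs):
            if is_hcfs(fs_default_fs):
                # HCFS such as GlusterFS: use the configured URI as-is, e.g. "glusterfs:///",
                # and skip the HDFS-specific host/port handling entirely.
                return fs_default_fs
            # HDFS: keep the existing host:port handling unchanged.
            host, port = fs_default_fs.split("://", 1)[1].rstrip("/").split(":", 1)
            return "hdfs://%s:%s" % (host, port)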

People

    Assignee: Unassigned
    Reporter: Martin Bukatovic (mbukatov)