Ambari / AMBARI-14853

Atlas Integration: Support deploying latest Atlas (which depends on Kafka) using Ambari


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.4.0
    • Fix Version/s: 2.4.0
    • Component/s: None
    • Labels:
      None

      Description

      Three additional steps are needed to install Atlas 0.6 via Ambari.

      1. Add new Atlas Kafka related properties to the Atlas configuration ‘application.properties’

      atlas.notification.embedded = false
      atlas.kafka.data = /tmp
      atlas.kafka.bootstrap.servers = c6401.ambari.apache.org:6667
      atlas.kafka.zookeeper.connect = c6401.ambari.apache.org:2181
      atlas.kafka.hook.group.id = atlas
      atlas.kafka.entities.group.id = entities
      
      • Note:
        For “atlas.kafka.bootstrap.servers” and “atlas.kafka.zookeeper.connect”, modify host names based on your cluster topology.
        The directory specified in “atlas.kafka.data” must exist.
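The step above can be sketched as a small shell snippet. This is illustrative only: `ATLAS_CONF` defaults to a local file here (the real file typically lives under the Atlas conf directory), and the host names are the example values from this issue.

```shell
# Sketch only: append the Atlas Kafka properties from step 1.
# ATLAS_CONF defaults to a local file; point it at your real
# application.properties before running.
ATLAS_CONF="${ATLAS_CONF:-./application.properties}"
KAFKA_DATA_DIR="/tmp"                    # atlas.kafka.data; must exist
BROKER="c6401.ambari.apache.org:6667"    # adjust to your cluster
ZK="c6401.ambari.apache.org:2181"        # adjust to your cluster

# The directory named in atlas.kafka.data has to exist already.
mkdir -p "$KAFKA_DATA_DIR"

cat >> "$ATLAS_CONF" <<EOF
atlas.notification.embedded = false
atlas.kafka.data = $KAFKA_DATA_DIR
atlas.kafka.bootstrap.servers = $BROKER
atlas.kafka.zookeeper.connect = $ZK
atlas.kafka.hook.group.id = atlas
atlas.kafka.entities.group.id = entities
EOF
```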

      2. Add an export of HADOOP_CLASSPATH, including the required Atlas directories, to hive-env.xml in the HDP 2.3 stack

      export HADOOP_CLASSPATH=/etc/atlas/conf:/usr/hdp/current/atlas-server/hook/hive:${HADOOP_CLASSPATH}
      

      • Note:
        It is important that the Atlas directories are prepended to the existing classpath.
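The ordering requirement can be verified from a shell. The export below is the one from this step; the check simply confirms that the Atlas conf directory ends up first on the classpath.

```shell
# Prepend the Atlas hook directories, exactly as in the hive-env export.
export HADOOP_CLASSPATH=/etc/atlas/conf:/usr/hdp/current/atlas-server/hook/hive:${HADOOP_CLASSPATH}

# Sanity check: the first classpath entry must be the Atlas conf dir,
# otherwise classes earlier on the path can shadow the Atlas hook.
first_entry="${HADOOP_CLASSPATH%%:*}"
if [ "$first_entry" = "/etc/atlas/conf" ]; then
  echo "atlas directories are prepended"
else
  echo "WARNING: atlas directories are not first on the classpath" >&2
fi
```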

      3. Restart the Atlas and Hive services after the cluster is fully provisioned

        Attachments

        1. AMBARI-14853.patch
          8 kB
          Tom Beerbower


              People

              • Assignee:
                tbeerbower Tom Beerbower
              • Reporter:
                tbeerbower Tom Beerbower
              • Votes:
                0
              • Watchers:
                3

                Dates

                • Created:
                  Updated:
                  Resolved: