Ranger / RANGER-5

Ability to write audit log in HDFS


    Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.4.0
    • Component/s: None
    • Labels:
      None

      Description

Ability to write audit logs into HDFS.

HdfsFileAppender is a Log4j appender used to write logs into HDFS.
The configuration parameters are as follows.

      1. HDFS appender
        #
        hdfs.xaaudit.logger=INFO,console,HDFSLOG
        log4j.logger.xaaudit=${hdfs.xaaudit.logger}
        log4j.additivity.xaaudit=false
        log4j.appender.HDFSLOG=com.xasecure.authorization.hadoop.log.HdfsFileAppender
        log4j.appender.HDFSLOG.File=/grid/0/var/log/hadoop/hdfs/argus_audit.log
        log4j.appender.HDFSLOG.HdfsDestination=hdfs://ec2-54-88-128.112.compute.1.amazonaws.com:8020:/audit/hdfs/%hostname%/argus_audit.log
        log4j.appender.HDFSLOG.layout=org.apache.log4j.PatternLayout
        log4j.appender.HDFSLOG.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n %X{LogPath}

        HdfsFileRollingInterval -> HDFS file rollover interval, e.g. 1min, 5min, ... 1hr, 2hrs, ... 1day, 2days, ... 1week, 2weeks, ... 1month, 2months, ...
        log4j.appender.HDFSLOG.HdfsFileRollingInterval=3min
        FileRollingInterval -> local .cache file rollover interval, in the same format
        log4j.appender.HDFSLOG.FileRollingInterval=1min
        log4j.appender.HDFSLOG.HdfsLiveUpdate=true
        log4j.appender.HDFSLOG.HdfsCheckInterval=2min
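
      The interval values above ("3min", "2hrs", "1day", ...) share one compact format. A minimal sketch of how such strings could be parsed into milliseconds is shown below; the class and method names are illustrative, not Ranger's actual API, and months are approximated as 30 days:

      ```java
      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      public class RollingInterval {
          // Matches strings like "3min", "2hrs", "1day", "2weeks", "1month"
          private static final Pattern P =
              Pattern.compile("(\\d+)\\s*(min|hr|hrs|day|days|week|weeks|month|months)");

          /** Parses an interval string such as "3min" into milliseconds. */
          public static long toMillis(String interval) {
              Matcher m = P.matcher(interval.trim().toLowerCase());
              if (!m.matches()) {
                  throw new IllegalArgumentException("Unsupported interval: " + interval);
              }
              long n = Long.parseLong(m.group(1));
              switch (m.group(2)) {
                  case "min":                  return n * 60_000L;
                  case "hr": case "hrs":       return n * 3_600_000L;
                  case "day": case "days":     return n * 86_400_000L;
                  case "week": case "weeks":   return n * 7 * 86_400_000L;
                  // months are approximated as 30 days in this sketch
                  case "month": case "months": return n * 30 * 86_400_000L;
                  default: throw new IllegalArgumentException(interval);
              }
          }

          public static void main(String[] args) {
              System.out.println(toMillis("3min")); // 180000
              System.out.println(toMillis("2hrs")); // 7200000
          }
      }
      ```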

      1) HdfsFileAppender writes logs to the configured HdfsDestination path.
      2) If the configured HDFS is unavailable, a local file named by the log4j File parameter is created with the extension .cache.
      3) This local .cache file is rolled over based on the FileRollingInterval parameter.
      4) Once HDFS is available and ready, logging is done to the configured HdfsDestination.
      5) Local .cache files are then moved into the HdfsDestination.
      6) The log file created in the HDFS destination is rolled over based on the HdfsFileRollingInterval parameter.
      7) When HdfsLiveUpdate=true, the appender sends logs to the HDFS file whenever HDFS is available. When false, local .cache files are created and periodically moved into the HdfsDestination.
      8) HdfsCheckInterval is the interval at which HDFS availability is rechecked after the first failure. During that time the local .cache file holds the logs.
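
      The failover behaviour in points 1-8 can be sketched as follows. This is a simplified illustration, not Ranger's actual implementation: the Sink interface stands in for an HDFS output stream, and the in-memory cache list stands in for the local .cache file:

      ```java
      import java.util.ArrayList;
      import java.util.List;

      public class FailoverLogSketch {
          /** Abstract destination; stands in for an HDFS file stream. */
          interface Sink { void write(String line) throws Exception; }

          static class HdfsState {
              boolean available;                            // result of the last HdfsCheckInterval probe
              final List<String> cache = new ArrayList<>(); // stands in for the local .cache file
          }

          /** Writes to HDFS when available; otherwise buffers in the local cache. */
          static void append(HdfsState s, Sink hdfs, String line) {
              if (s.available) {
                  try {
                      // flush any cached lines first (point 5), then the new line
                      for (String cached : s.cache) hdfs.write(cached);
                      s.cache.clear();
                      hdfs.write(line);
                      return;
                  } catch (Exception e) {
                      s.available = false; // re-probe after HdfsCheckInterval (point 8)
                  }
              }
              s.cache.add(line);           // point 2: fall back to the .cache file
          }
      }
      ```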

      Argus Audit Logging into HDFS:

      • For audit logs, the Policy Manager should exclude the HDFS file path from auditing, to avoid the recursive call that would occur when the audit itself is logged.
      • Configure the log4j parameters in xasecure-audit.xml and make the appender asynchronous. (Note that each agent has its own xasecure-audit.xml properties.)
      • For auditing the HDFS agent, include the appender in the NameNode and SecondaryNameNode.
      • For auditing the HBase agent, include the appender in the Master and RegionServer.
      • For auditing the Hive agent, include it in HiveServer2.

      Regular Logging Usage:

      To enable regular logging, configure this appender the same way other Log4j appenders are configured.
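
      For instance, an ordinary application logger could be routed through the HDFSLOG appender defined above; the logger name com.example.app below is only a placeholder:

      ```
      # Hypothetical example: route an application logger through HDFSLOG
      log4j.logger.com.example.app=INFO,HDFSLOG
      log4j.additivity.com.example.app=false
      ```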


            People

            • Assignee:
              rmani Ramesh Mani
            • Reporter:
              sneethiraj Selvamohan Neethiraj
            • Votes:
              0
            • Watchers:
              3
