Hadoop Common
HADOOP-8052

Hadoop Metrics2 should emit Float.MAX_VALUE (instead of Double.MAX_VALUE) to avoid making Ganglia's gmetad core


    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.0.0, 0.23.0
    • Fix Version/s: 1.0.1, 0.23.1
    • Component/s: metrics


      Ganglia's gmetad converts the doubles emitted by Hadoop's Metrics2 system to strings, and the buffer it uses is 256 bytes wide.

      When the SampleStat.MinMax class (in org.apache.hadoop.metrics2.util) emits its default min value (currently initialized to Double.MAX_VALUE), it causes a buffer overflow in gmetad, which makes it core-dump, effectively rendering Ganglia useless (for some, the crash is continuous; for the more fortunate, it happens only once, at Hadoop startup).

      The fix needed in Ganglia is simple - bump the buffer up to 512 bytes and all will be well - but instead of requiring a minimum Ganglia version to work with Hadoop's Metrics2 system, it might be more prudent to just use Float.MAX_VALUE.

      An additional problem, caused in librrd (which Ganglia uses beneath the covers) by Double.MIN_VALUE (which serves as the default max value), is an underflow when librrd runs the received string through libc's strtod(). The librrd code is good enough to check for this and only emits a warning; moving to Float.MIN_VALUE fixes that as well.
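      A minimal sketch (not part of the attached patches; the 256-byte figure is gmetad's, per the report above) illustrating why the value matters: formatted with a printf-style "%f", Double.MAX_VALUE expands its full ~309-digit integer part and blows past a 256-byte buffer, while Float.MAX_VALUE stays far under it.

```java
public class MetricWidth {
    public static void main(String[] args) {
        // gmetad converts the received double to a string; "%f" style
        // formatting writes out the entire integer part of the value.
        String asDouble = String.format("%f", Double.MAX_VALUE);
        String asFloat  = String.format("%f", (double) Float.MAX_VALUE);

        // Double.MAX_VALUE (~1.8e308) needs well over 256 characters,
        // so it overflows gmetad's 256-byte buffer.
        System.out.println("Double.MAX_VALUE as %f: " + asDouble.length() + " chars");

        // Float.MAX_VALUE (~3.4e38) needs fewer than 50 characters,
        // which fits comfortably.
        System.out.println("Float.MAX_VALUE  as %f: " + asFloat.length() + " chars");
    }
}
```

      Emitting Float.MAX_VALUE as the sentinel therefore sidesteps the overflow without requiring a patched Ganglia, which is the trade-off the description argues for.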

      1. HADOOP-8052.patch
        3 kB
        Varun Kapoor
      2. HADOOP-8052-branch-1.patch
        3 kB
        Varun Kapoor
      3. HADOOP-8052.patch
        1 kB
        Varun Kapoor
      4. HADOOP-8052-branch-1.patch
        1 kB
        Varun Kapoor



          • Assignee: Varun Kapoor
          • Votes: 0
          • Watchers: 4