Hadoop HDFS / HDFS-4467

Segmentation fault in libhdfs while connecting to HDFS, in an application populating Hive Tables


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 1.0.4
    • Fix Version/s: None
    • Component/s: libhdfs
    • Labels: None
    • Environment: Ubuntu 12.04 (32-bit), application in C++, Hadoop 1.0.4

    Description

      Connecting to HDFS using the compiled libhdfs library gives a segmentation fault and memory leaks, easily verifiable with Valgrind.

      Even the simple application given below leaks memory:

      #include "hdfs.h"
      #include <iostream>

      int main(int argc, char **argv) {

      hdfsFS fs = hdfsConnect("localhost", 9000);
      const char* writePath = "/tmp/testfile.txt";
      hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);
      if(!writeFile)

      { fprintf(stderr, "Failed to open %s for writing!\n", writePath); exit(-1); }

      char* buffer = "Hello, World!";
      tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer, strlen(buffer)+1);
      if (hdfsFlush(fs, writeFile))

      { fprintf(stderr, "Failed to 'flush' %s\n", writePath); exit(-1); }

      hdfsCloseFile(fs, writeFile);
      }
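
      Note that the sample never calls hdfsDisconnect(), so some of the reported blocks could in principle belong to the test program rather than the library. A bare connect/disconnect pair with full cleanup isolates the connection path itself; a minimal sketch, assuming the stock hdfs.h API shipped with hadoop 1.0.4:

      #include "hdfs.h"
      #include <stdio.h>

      /* Bare connect/disconnect: everything acquired is released, so any
         remaining Valgrind findings come from libhdfs or the embedded JVM,
         not from this program. (Sketch; endpoint as in the sample above.) */
      int main(void) {
          hdfsFS fs = hdfsConnect("localhost", 9000);
          if (!fs) {
              fprintf(stderr, "hdfsConnect failed\n");
              return 1;
          }
          if (hdfsDisconnect(fs)) {
              fprintf(stderr, "hdfsDisconnect failed\n");
              return 1;
          }
          return 0;
      }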

      shell> valgrind --leak-check=full ./sample

      ==12773== LEAK SUMMARY:
      ==12773== definitely lost: 7,893 bytes in 21 blocks
      ==12773== indirectly lost: 4,460 bytes in 23 blocks
      ==12773== possibly lost: 119,833 bytes in 121 blocks
      ==12773== still reachable: 1,349,514 bytes in 8,953 blocks
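
      One caveat when reading these numbers: libhdfs spins up an embedded JVM, and Valgrind is known to flag JVM-internal allocations (thread stacks, class metadata) as "possibly lost" or "still reachable" even in leak-free programs. A way to separate a genuine per-connection leak from one-time start-up overhead is to repeat the connect/disconnect cycle and check whether the lost-byte counts scale with the iteration count. A diagnostic sketch, with an arbitrary loop count:

      #include "hdfs.h"
      #include <stdio.h>

      /* If the "definitely lost" bytes reported by Valgrind grow roughly
         linearly with ITERATIONS, the leak is per-connection inside libhdfs
         rather than fixed JVM start-up cost. (100 is an arbitrary choice.) */
      #define ITERATIONS 100

      int main(void) {
          int i;
          for (i = 0; i < ITERATIONS; i++) {
              hdfsFS fs = hdfsConnect("localhost", 9000);
              if (!fs) {
                  fprintf(stderr, "hdfsConnect failed at iteration %d\n", i);
                  return 1;
              }
              hdfsDisconnect(fs);  /* release the handle every cycle */
          }
          return 0;
      }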


            People

              Assignee: Unassigned
              Reporter: Shubhangi Garg
              Votes: 1
              Watchers: 6
