Hadoop Common / HADOOP-7456

Connection with RemoteException is not removed from cached HashTable and causes memory leak


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 0.20.2
    • Fix Version/s: None
    • Component/s: fs
    • Labels: None

Description

    In a long-running system like Oozie, we use Hadoop client APIs such as FileSystem.exists() to check whether files exist on HDFS before kicking off a user job. In a production environment, however, users sometimes supply wrong or invalidly formatted file/directory paths. After the server had been up for a couple of days, we found that around 80% of memory was held by Hadoop IPC client connections; one of those connections contained a hashtable with 200k entries. We cross-checked the Hadoop code and found that in org.apache.hadoop.ipc.Client.receiveResponse(), if the response state is fatal, the call object is not removed from the hashtable (calls) and stays in memory until the system throws an OutOfMemoryError or crashes. The code in question is here:

    • org.apache.hadoop.ipc.Client.receiveResponse()

          } else if (state == Status.FATAL.state) {
            // Close the connection
            markClosed(new RemoteException(WritableUtils.readString(in),
                                           WritableUtils.readString(in)));
          }
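
    For illustration, below is a minimal, self-contained sketch of the leak pattern being described. The class and method names (LeakSketch, Call.fail(), onFatalResponseLeaky/Fixed) are hypothetical stand-ins, not Hadoop's actual Client internals: the leaky path marks the connection closed but never removes the pending call from the calls table, so entries accumulate; the fixed path removes the entry and fails the waiting caller.

        import java.io.IOException;
        import java.util.Hashtable;

        public class LeakSketch {

            // Stand-in for org.apache.hadoop.ipc.Client.Call: a caller
            // thread blocks on this object until a response or error arrives.
            static class Call {
                final int id;
                IOException error;
                boolean done;

                Call(int id) { this.id = id; }

                synchronized void fail(IOException e) {
                    error = e;
                    done = true;
                    notifyAll(); // wake the thread blocked in e.g. FileSystem.exists()
                }
            }

            // Plays the role of the per-connection "calls" hashtable.
            static final Hashtable<Integer, Call> calls = new Hashtable<>();

            // Leaky path: a FATAL response only marks the connection closed;
            // calls.remove(callId) is never invoked, so the entry (and whatever
            // it references) stays pinned in memory -- the leak reported here.
            static void onFatalResponseLeaky(int callId, IOException cause) {
                // markClosed(cause); // connection flagged, but table untouched
            }

            // One possible fix: purge the entry and fail the waiting caller,
            // so the table cannot grow without bound when users pass bad paths.
            static void onFatalResponseFixed(int callId, IOException cause) {
                Call call = calls.remove(callId);
                if (call != null) {
                    call.fail(cause);
                }
            }

            public static void main(String[] args) {
                for (int i = 0; i < 3; i++) {
                    calls.put(i, new Call(i));
                    onFatalResponseLeaky(i, new IOException("fatal response"));
                }
                // Prints 3: every failed call is still cached, matching the
                // 200k-entry hashtable seen in the long-running Oozie server.
                System.out.println("pending after leaky path: " + calls.size());

                for (int i = 0; i < 3; i++) {
                    onFatalResponseFixed(i, new IOException("fatal response"));
                }
                System.out.println("pending after fixed path: " + calls.size()); // 0
            }
        }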


People

    Assignee: Unassigned
    Reporter: Angelo K. Huang (angeloh)
    Votes: 0
    Watchers: 7
