Hadoop Map/Reduce: MAPREDUCE-6237

Multiple mappers with DBInputFormat don't work because of reused connections

    Details

    • Hadoop Flags: Reviewed

      Description

      DBInputFormat.createDBRecordReader is reusing JDBC connections across instances of DBRecordReader. This is not a good idea. We should be creating a separate connection for each reader. If performance is a concern, we should be using connection pooling instead.

      I looked at DBOutputFormat.getRecordWriter. It actually creates a new Connection object for each DBRecordWriter. So can we just change DBInputFormat to create a new Connection every time? The connection-reuse code was added as part of the connection leak fix in MAPREDUCE-1443. Is there any reason for caching the connection?

      We observed this issue in a customer setup where they were reading data from MySQL using Pig. According to the customer, the query returns two records, which causes Pig to create two instances of DBRecordReader. These two instances share the same database connection. The first DBRecordReader extracts the first record from MySQL just fine, but then closes the shared connection. When the second DBRecordReader runs, it tries to execute a query for the second record on the already-closed connection, which fails. If we set mapred.map.tasks to 1, the query succeeds.
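      The failure mode described above can be reproduced in miniature without Hadoop or a real database. The sketch below uses hypothetical FakeConnection and Reader classes (not the actual Hadoop or JDBC API) purely to show why closing a shared, cached connection breaks the second reader, while giving each reader its own connection does not:

      ```java
      // Hypothetical stand-ins for a JDBC Connection and a DBRecordReader.
      class FakeConnection {
          private boolean closed = false;
          void close() { closed = true; }
          String runQuery(String sql) {
              // A real JDBC driver would throw SQLException here.
              if (closed) throw new IllegalStateException("connection is closed");
              return "row";
          }
      }

      class Reader {
          private final FakeConnection conn;
          Reader(FakeConnection conn) { this.conn = conn; }
          String readOne() { return conn.runQuery("SELECT ..."); }
          void close() { conn.close(); }   // closes whatever connection it holds
      }

      public class Main {
          public static void main(String[] args) {
              // Buggy pattern (cached connection): both readers share one instance.
              FakeConnection shared = new FakeConnection();
              Reader first = new Reader(shared);
              Reader second = new Reader(shared);
              first.readOne();
              first.close();                // also closes second's connection
              try {
                  second.readOne();         // fails: connection already closed
              } catch (IllegalStateException e) {
                  System.out.println("second reader failed: " + e.getMessage());
              }

              // Fixed pattern: each reader creates its own connection.
              Reader a = new Reader(new FakeConnection());
              Reader b = new Reader(new FakeConnection());
              a.readOne();
              a.close();                    // does not affect b
              System.out.println("second reader ok: " + b.readOne());
              b.close();
          }
      }
      ```

      With one connection per reader, closing the first reader no longer invalidates the second, which matches the proposed fix of having DBInputFormat create a new Connection per DBRecordReader.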

        Attachments

        1. mapreduce-6237.patch
          6 kB
          Kannan Rajah
        2. mapreduce-6237.patch
          6 kB
          Kannan Rajah
        3. mapreduce-6237.patch
          7 kB
          Kannan Rajah


            People

            • Assignee:
              rkannan82 Kannan Rajah
            • Reporter:
              rkannan82 Kannan Rajah
            • Votes: 0
            • Watchers: 8
