Sqoop / SQOOP-359

Import fails with Unknown SQL datatype exception


      Description

      To reproduce this, run a free-form query import with the number of mappers set to 1 and no boundary query specified. For example:

      $ sqoop import --connect jdbc:mysql://localhost/testdb --username test --password **** \
          --query 'SELECT TDX.A, TDX.B FROM TDX WHERE $CONDITIONS' \
          --target-dir /user/arvind/MYSQL/TDX1 -m 1
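A possible workaround (untested here, and assuming `TDX.A` is a column usable as a split key) is to name a split column explicitly, so the bounding values query has a real column to inspect instead of a null:

```shell
$ sqoop import --connect jdbc:mysql://localhost/testdb --username test --password **** \
    --query 'SELECT TDX.A, TDX.B FROM TDX WHERE $CONDITIONS' \
    --split-by TDX.A \
    --target-dir /user/arvind/MYSQL/TDX1 -m 1
```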
      

      This import will fail as follows:

      11/10/06 15:37:59 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-arvind/compile/190f858175a9f99756e503727c931450/QueryResult.jar
      11/10/06 15:37:59 INFO mapreduce.ImportJobBase: Beginning query import.
      11/10/06 15:38:00 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(null), MAX(null) FROM (SELECT TDX.A, TDX.B FROM TDX WHERE  (1 = 1) ) AS t1
      11/10/06 15:38:00 INFO mapred.JobClient: Cleaning up the staging area hdfs://localhost/opt/site/cdh3u1/hadoop/data/tmp/mapred/staging/arvind/.staging/job_201110061528_0004
      11/10/06 15:38:00 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Unknown SQL data type: -3
      	at com.cloudera.sqoop.mapreduce.db.DataDrivenDBInputFormat.getSplits(DataDrivenDBInputFormat.java:211)
      	at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:944)
      	at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:961)
      	at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
      	at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:880)
      	at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:833)
      	at java.security.AccessController.doPrivileged(Native Method)
      	at javax.security.auth.Subject.doAs(Subject.java:396)
      	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
      	at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:833)
      	at org.apache.hadoop.mapreduce.Job.submit(Job.java:476)
      	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:506)
      	at com.cloudera.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:123)
      	at com.cloudera.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:183)
      	at com.cloudera.sqoop.manager.SqlManager.importQuery(SqlManager.java:450)
      	at com.cloudera.sqoop.tool.ImportTool.importTable(ImportTool.java:384)
      	at com.cloudera.sqoop.tool.ImportTool.run(ImportTool.java:455)
      	at com.cloudera.sqoop.Sqoop.run(Sqoop.java:146)
      	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
      	at com.cloudera.sqoop.Sqoop.runSqoop(Sqoop.java:182)
      	at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:221)
      	at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:230)
      	at com.cloudera.sqoop.Sqoop.main(Sqoop.java:239)
      
      

      The problem seems to be that, with no split column available, the bounding values query is built with a null column name. The type reported for MIN(null)/MAX(null) is JDBC type code -3 (java.sql.Types.VARBINARY), which the split generator does not recognize, hence the "Unknown SQL data type: -3" failure.
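For illustration, here is a minimal sketch (hypothetical helper, not Sqoop's actual implementation; the name `bounding_vals_query` is invented) of how the bounding values query seen in the log is assembled, and how a missing split column degenerates into MIN(null)/MAX(null):

```python
# Sketch of how the bounding values query in the log above is assembled
# from the split column and the user's free-form query (not Sqoop's code).
def bounding_vals_query(split_col, user_query):
    # Sqoop substitutes a tautology for $CONDITIONS when bounding the range.
    inner = user_query.replace("$CONDITIONS", "(1 = 1)")
    return "SELECT MIN({0}), MAX({0}) FROM ({1}) AS t1".format(split_col, inner)

# With no --split-by and no primary key, the effective column name is null,
# so the generated SQL degenerates exactly as in the log output:
q = bounding_vals_query("null", "SELECT TDX.A, TDX.B FROM TDX WHERE $CONDITIONS")
print(q)
# SELECT MIN(null), MAX(null) FROM (SELECT TDX.A, TDX.B FROM TDX WHERE (1 = 1)) AS t1
```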

      Attachments

      1. SQOOP-359-1.patch (5 kB, Arvind Prabhakar)
      2. SQOOP-359-2.patch (6 kB, Arvind Prabhakar)
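The attached patches presumably avoid issuing the bounding values query when only one mapper is requested, since a single split needs no range partitioning. A hedged sketch of that guard logic (invented names, not the actual patch):

```python
def get_splits(num_mappers, split_col, user_query):
    """Hypothetical sketch of a single-mapper guard for split computation.

    With one mapper there is nothing to partition, so the failing
    MIN/MAX bounding values query never needs to run."""
    if num_mappers == 1:
        # One split covering all rows: $CONDITIONS becomes a tautology.
        return [user_query.replace("$CONDITIONS", "(1 = 1)")]
    if split_col is None:
        raise ValueError("a split column is required for more than one mapper")
    # The multi-split path would run the MIN/MAX bounding query on split_col.
    raise NotImplementedError("multi-split path omitted in this sketch")

print(get_splits(1, None, "SELECT TDX.A, TDX.B FROM TDX WHERE $CONDITIONS"))
```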


          People

          • Assignee: Arvind Prabhakar
          • Reporter: Arvind Prabhakar
