Sqoop / SQOOP-3264

Import JDBC SQL date,time,timestamp to Hive as TIMESTAMP, BIGINT and TIMESTAMP


    Details

    • Type: Improvement
    • Status: Patch Available
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 1.4.6
    • Fix Version/s: 1.5.0
    • Component/s: hive-integration
    • Labels:
      None

      Description

      When importing JDBC SQL Types:

              public final static int DATE            =  91;
              public final static int TIME            =  92;
              public final static int TIMESTAMP       =  93;
      

      Sqoop currently uses the org.apache.sqoop.hive.HiveTypes.toHiveType method, which maps all of these types to STRING.

      Given that the JDBC value returned is in fact of type Long, I propose mapping these types to Hive as follows:

      DATE -> TIMESTAMP
      TIME -> BIGINT
      TIMESTAMP -> TIMESTAMP
      
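      A minimal sketch of how the proposed mapping could look in a toHiveType-style method. The class name is hypothetical and only the three date/time branches are shown; this illustrates the proposal, it is not the actual Sqoop patch:

```java
import java.sql.Types;

public class ProposedHiveTypeMapping {

  // Hypothetical replacement for the DATE/TIME/TIMESTAMP branch of
  // org.apache.sqoop.hive.HiveTypes.toHiveType, which currently
  // returns "STRING" for all three of these JDBC types.
  public static String toHiveType(int sqlType) {
    switch (sqlType) {
      case Types.DATE:      // 91
      case Types.TIMESTAMP: // 93
        return "TIMESTAMP";
      case Types.TIME:      // 92
        return "BIGINT";
      default:
        return null; // all other types omitted in this sketch
    }
  }

  public static void main(String[] args) {
    System.out.println(toHiveType(Types.DATE));      // TIMESTAMP
    System.out.println(toHiveType(Types.TIME));      // BIGINT
    System.out.println(toHiveType(Types.TIMESTAMP)); // TIMESTAMP
  }
}
```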

      This is also in line with org.apache.sqoop.manager.ConnManager.toAvroType, where these types map to:

          case Types.DATE:
          case Types.TIME:
          case Types.TIMESTAMP:
            return Type.LONG;
      
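      The Avro LONG in the excerpt above corresponds to the epoch-millisecond value that java.sql date/time classes carry underneath, which can be seen with plain JDK code (no Sqoop involved):

```java
import java.sql.Timestamp;

public class EpochMillisDemo {
  public static void main(String[] args) {
    // A java.sql.Timestamp wraps a long of milliseconds since the epoch;
    // this long is what a LONG-typed Avro field would serialize.
    Timestamp ts = new Timestamp(1500L); // 1.5 seconds after the epoch
    long millis = ts.getTime();
    System.out.println(millis); // prints 1500
  }
}
```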

      Some of the connectors override the toJavaType:

      org.apache.sqoop.manager.SQLServerManager
      org.apache.sqoop.manager.oracle.OraOopConnManager
      

      which may indicate different handling.

      The SQLServerManager uses Java String as the output type because of time zones.
      The same holds true for OraOopConnManager, although it has a separate boolean configuration property,
      'oraoop.timestamp.string', which controls whether the import preserves time zones by converting date types
      to Java String, or drops time zones and behaves the 'sqoop way'.

      Both of these connectors already handle these types as String by default, so the proposed change would not affect them.
      Other connectors still need to be checked.

      Some of the connectors override the toHiveType:

      org.apache.sqoop.manager.oracle.OraOopConnManager
      

      This connector uses the 'sqoop way':

          String hiveType = super.toHiveType(sqlType);
      

      and only when the type is not resolved does it decide the type itself:

          if (hiveType == null) {
      
            // http://wiki.apache.org/hadoop/Hive/Tutorial#Primitive_Types
      
            if (sqlType == OraOopOracleQueries.getOracleType("BFILE")
                || sqlType == OraOopOracleQueries.getOracleType("INTERVALYM")
                || sqlType == OraOopOracleQueries.getOracleType("INTERVALDS")
                || sqlType == OraOopOracleQueries.getOracleType("NCLOB")
                || sqlType == OraOopOracleQueries.getOracleType("NCHAR")
                || sqlType == OraOopOracleQueries.getOracleType("NVARCHAR")
                || sqlType == OraOopOracleQueries.getOracleType("OTHER")
                || sqlType == OraOopOracleQueries.getOracleType("ROWID")
                || sqlType == OraOopOracleQueries.getOracleType("TIMESTAMPTZ")
                || sqlType == OraOopOracleQueries.getOracleType("TIMESTAMPLTZ")
                || sqlType == OraOopOracleQueries.getOracleType("STRUCT")) {
              hiveType = "STRING";
            }
      
            if (sqlType == OraOopOracleQueries.getOracleType("BINARY_FLOAT")) {
              hiveType = "FLOAT";
            }
      
            if (sqlType == OraOopOracleQueries.getOracleType("BINARY_DOUBLE")) {
              hiveType = "DOUBLE";
            }
          }
      

      This code is affected by the proposed change. As the Hive TIMESTAMP is timezone-less, the handling in this method has to respect the property 'oraoop.timestamp.string': if true, output the STRING Hive type; if false, go with the 'sqoop way'.
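      A sketch of the suggested control flow. The property name comes from the issue; the class name is hypothetical and java.util.Properties stands in for the connector's real configuration object:

```java
import java.sql.Types;
import java.util.Properties;

public class OraOopHiveTypeSketch {

  // Illustration only: decide the Hive type for date/time SQL types
  // based on the 'oraoop.timestamp.string' property.
  public static String toHiveType(int sqlType, Properties conf) {
    boolean timestampAsString =
        Boolean.parseBoolean(conf.getProperty("oraoop.timestamp.string", "true"));
    switch (sqlType) {
      case Types.DATE:
      case Types.TIMESTAMP:
        // true: keep the connector's String handling (time zones preserved);
        // false: fall back to the proposed 'sqoop way' mapping.
        return timestampAsString ? "STRING" : "TIMESTAMP";
      case Types.TIME:
        return timestampAsString ? "STRING" : "BIGINT";
      default:
        return null; // Oracle-specific types omitted in this sketch
    }
  }
}
```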

      The Hive type is only used when generating the table DDL (CREATE statement), and Hive can properly recognize the JDBC-compliant java.sql.Timestamp format "YYYY-MM-DD HH:MM:SS.fffffffff", so no connector should be affected in a way that would prevent Hive from reading the resulting column values.
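      For reference, that timestamp format round-trips through java.sql.Timestamp in the plain JDK, which is the string form the imported column values carry:

```java
import java.sql.Timestamp;

public class TimestampFormatDemo {
  public static void main(String[] args) {
    // java.sql.Timestamp parses and prints the JDBC escape format
    // "yyyy-mm-dd hh:mm:ss[.fffffffff]", which Hive's TIMESTAMP accepts.
    Timestamp ts = Timestamp.valueOf("2017-06-29 12:34:56.123456789");
    System.out.println(ts); // 2017-06-29 12:34:56.123456789
  }
}
```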

      However, thorough testing should be done on all connectors before releasing any column type behavior changes.


              People

              • Assignee: Unassigned
              • Reporter: Michal Klempa (michal.klempa)
              • Votes: 0
              • Watchers: 3
