Details
Type: Bug
Status: Open
Priority: Major
Resolution: Unresolved
Affects Version/s: 1.99.3
Fix Version/s: None
Component/s: None
Description
When the number of records in HDFS is an exact multiple of rowsPerBatch (i.e. rowsPerBatch * N, where N is a positive integer; the default rowsPerBatch is 100), no data is exported to MySQL.
Source Code [GenericJdbcExportLoader.java]:
public static final int DEFAULT_ROWS_PER_BATCH = 100;
public static final int DEFAULT_BATCHES_PER_TRANSACTION = 100;
private int rowsPerBatch = DEFAULT_ROWS_PER_BATCH;
private int batchesPerTransaction = DEFAULT_BATCHES_PER_TRANSACTION;
..................................
..................................
int numberOfRows = 0;
int numberOfBatches = 0;
Object[] array;
while ((array = context.getDataReader().readArrayRecord()) != null) {
  numberOfRows++;
  executor.addBatch(array);
  if (numberOfRows == rowsPerBatch) {
    numberOfBatches++;
    if (numberOfBatches == batchesPerTransaction) {
      executor.executeBatch(true);   // execute the batch and commit the transaction
      numberOfBatches = 0;
    } else {
      executor.executeBatch(false);  // no commit, only executes the prepared statement batch
    }
    numberOfRows = 0;
  }
}
if (numberOfRows != 0) {
  // execute and commit the remaining rows of the last partial batch
  executor.executeBatch(true);
}
executor.endBatch();
Source Code [GenericJdbcExecutor.java]:
public void endBatch() {
  try {
    if (preparedStatement != null) {
      preparedStatement.close();  // only closes the statement; no commit is issued here
    }
  } catch (SQLException e) {
    throw new SqoopException(GenericJdbcConnectorError.GENERIC_JDBC_CONNECTOR_0002, e);
  }
}
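With autoCommit disabled, work that was executed but never committed is discarded when the connection is closed (MySQL rolls the open transaction back on disconnect). A minimal standalone JDBC sketch of the same pattern illustrates this; the table t, URL, and credentials are placeholders, not part of the Sqoop code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class UncommittedBatchDemo {
  public static void main(String[] args) throws Exception {
    // Placeholder URL/credentials; any MySQL instance with a table t(id INT) works.
    Connection conn = DriverManager.getConnection(
        "jdbc:mysql://localhost:3306/test", "user", "password");
    conn.setAutoCommit(false);             // same mode the executor uses for batching
    PreparedStatement ps = conn.prepareStatement("INSERT INTO t VALUES (?)");
    for (int i = 0; i < 300; i++) {
      ps.setInt(1, i);
      ps.addBatch();
      if ((i + 1) % 100 == 0) {
        ps.executeBatch();                 // executed, but not committed
      }
    }
    ps.close();                            // what endBatch() effectively does
    conn.close();                          // no commit was ever issued, so the
                                           // transaction is rolled back; t stays empty
  }
}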
For example:
With 300 records in HDFS and rowsPerBatch = 100, 300 % 100 == 0, so the loop calls executor.executeBatch(false) three times without ever committing the transaction. After the loop, numberOfRows == 0 and numberOfBatches == 3, so the final executor.executeBatch(true) is skipped; only executor.endBatch() runs, which does not commit either, and no data is exported to MySQL.
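A possible fix (a sketch only, not a tested patch) is to also commit when executed-but-uncommitted batches remain after the loop, by widening the final condition in GenericJdbcExportLoader:

if (numberOfRows != 0 || numberOfBatches != 0) {
  // a partial batch and/or executed-but-uncommitted batches remain:
  // execute whatever is pending and commit the transaction
  executor.executeBatch(true);
}
executor.endBatch();

Alternatively, endBatch() could issue a final commit before closing the prepared statement. Either way, the invariant should be that every row passed to addBatch() is committed exactly once before the connection is closed.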