Description
Kicking off many Sqoop processes in different threads results in:
2014-08-01 13:47:24 -0400: INFO - 14/08/01 13:47:22 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Rename cannot overwrite non empty destination directory /tmp/hadoop-hadoop/mapred/local/1406915233073
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:149)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:163)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
2014-08-01 13:47:24 -0400: INFO - at java.security.AccessController.doPrivileged(Native Method)
2014-08-01 13:47:24 -0400: INFO - at javax.security.auth.Subject.doAs(Subject.java:415)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:186)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:159)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:239)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:645)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:415)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:502)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
2014-08-01 13:47:24 -0400: INFO - at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
2014-08-01 13:47:24 -0400: INFO - at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
The failure occurs if two jobs are kicked off in the same millisecond. The cause is the following lines of code in the org.apache.hadoop.mapred.LocalDistributedCacheManager class:
// Generating unique numbers for FSDownload.
AtomicLong uniqueNumberGenerator = new AtomicLong(System.currentTimeMillis());
and
Long.toString(uniqueNumberGenerator.incrementAndGet())),
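For illustration, here is a minimal standalone sketch of the race (LocalCacheDirNamer and RaceDemo below are hypothetical stand-ins, not the Hadoop code): each LocalDistributedCacheManager instance seeds its own AtomicLong with the wall-clock time, so two managers created in the same millisecond hand out the same "unique" directory name and the second FSDownload rename then fails as in the stack trace above.

import java.util.concurrent.atomic.AtomicLong;

// Hypothetical stand-in for what each LocalDistributedCacheManager instance does:
// every submitter builds its own generator seeded with the current wall-clock time,
// so the AtomicLong only guarantees uniqueness within one instance, not across them.
class LocalCacheDirNamer {
    private final AtomicLong uniqueNumberGenerator =
            new AtomicLong(System.currentTimeMillis());

    String nextLocalDir(String base) {
        return base + "/" + Long.toString(uniqueNumberGenerator.incrementAndGet());
    }
}

public class RaceDemo {
    public static void main(String[] args) {
        // Two independent submitters created back to back, as when two Sqoop
        // imports start concurrently in local mode.
        LocalCacheDirNamer a = new LocalCacheDirNamer();
        LocalCacheDirNamer b = new LocalCacheDirNamer();

        String dirA = a.nextLocalDir("/tmp/hadoop-hadoop/mapred/local");
        String dirB = b.nextLocalDir("/tmp/hadoop-hadoop/mapred/local");

        // If both constructors ran in the same millisecond, the paths are equal,
        // which is the collision behind "Rename cannot overwrite non empty
        // destination directory".
        System.out.println(dirA);
        System.out.println(dirB);
        System.out.println("collision: " + dirA.equals(dirB));
    }
}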
Attachments
Issue Links
- is duplicated by
  - MAPREDUCE-6685 LocalDistributedCacheManager can have overlapping filenames (Resolved)
  - MAPREDUCE-6992 Race for temp dir in LocalDistributedCacheManager.java (Resolved)
- relates to
  - MAPREDUCE-6766 Concurrent local job failures due to uniqueNumberGenerator = new AtomicLong(System.currentTimeMillis()) (Open)
  - YARN-2624 Resource Localization fails on a cluster due to existing cache directories (Closed)