If the table being joined is very large, the dump files created on the mappers can also grow very large. The job may never complete, and those files are never cleaned out of the tmp directory.
It would be great if the table could be placed in the distributed cache, but at a minimum the following should be added:
If the table (source) being joined leads to a very big file, the job should just throw an error.
New configuration parameters can be added to limit the number of rows or the size of the table.
The mapper should not be retried 4 times; it should fail immediately.
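A rough sketch of what such limits could look like in hive-site.xml. The first two parameter names are hypothetical placeholders for the proposed limits, not existing settings; mapred.map.max.attempts is an existing Hadoop setting that gives the fail-fast behavior described above:

```xml
<!-- Hypothetical names for the proposed limits; the actual parameter
     names would be decided when the feature is implemented. -->
<property>
  <name>hive.mapjoin.maxsize</name>
  <value>100000</value>
  <description>Maximum number of rows allowed from the joined table
  before the map-join fails with an error.</description>
</property>
<property>
  <name>hive.mapjoin.smalltable.filesize</name>
  <value>25000000</value>
  <description>Maximum size, in bytes, of the table dump on the mapper
  before the map-join fails with an error.</description>
</property>

<!-- Existing Hadoop setting: disable retries so a mapper that hits a
     limit fails the job immediately instead of being tried 4 times. -->
<property>
  <name>mapred.map.max.attempts</name>
  <value>1</value>
</property>
```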
I can't think of a better way for the mapper to communicate with the client than for the mapper to write to some well-known HDFS file. The client can read that file periodically while polling, and if it sees an error it can kill the job, with an appropriate error message indicating to the client why the job died.
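A minimal sketch of that polling idea. The class name, file name, and message text are all made up for illustration; the example uses java.nio on the local filesystem so it stands alone, whereas the real implementation would use the Hadoop FileSystem API against HDFS and the JobClient to kill the job:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Optional;

// Sketch: the mapper writes an error message to a well-known status
// file; the client checks the file on each poll cycle and, if a
// message is present, kills the job and surfaces that message.
public class MapJoinErrorPoll {

    // Returns the mapper's error message if one has been reported,
    // otherwise empty (meaning the client should keep polling).
    static Optional<String> checkForError(Path statusFile) throws IOException {
        if (Files.exists(statusFile)) {
            return Optional.of(Files.readString(statusFile).trim());
        }
        return Optional.empty();
    }

    public static void main(String[] args) throws IOException {
        Path statusFile = Files.createTempDirectory("mapjoin").resolve("error_status");

        // No file yet: nothing reported, the client keeps polling.
        System.out.println(checkForError(statusFile).isPresent()); // prints false

        // Mapper side: report that the table dump grew too large.
        Files.writeString(statusFile, "map-join table exceeded configured size limit");

        // Client side: the next poll sees the error; at this point the
        // client would kill the job and report the message as the reason.
        checkForError(statusFile).ifPresent(msg -> System.out.println("killing job: " + msg));
    }
}
```

The well-known-file approach keeps the mapper and client decoupled: the mapper never needs a network channel back to the client, and the client's existing polling loop only gains one extra file check per cycle.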