Thanks for the patch, Sunil!
I think we should always update the job priority when the event is sent. There's no harm in doing so, and it helps the AM look more consistent to the client in case the client queries the AM. For example, in the COMMITTING state the AM is still very much alive and able to respond to client requests – why wouldn't we update the priority in that state? We should just treat this like the diagnostic update transition and always process it.
Actually I'm wondering if we really need to handle this as full-blown event processing at all. It's a lot of boilerplate, object creation, queueing, dequeueing, and dispatch just to update an internal reference. The RMContainerAllocator already has a reference to the Job via getJob(), so we could just directly call getJob().setPriority rather than all the event overhead (and risk forgetting to process the event in some state). RMCommunicator is already doing this to set the queue name, for instance. If there are locking and concurrency concerns then just make the priority volatile and use it lock-free since it's read-only once set.
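To illustrate what I mean, here's a rough sketch of the direct-call approach. The class and method names are simplified stand-ins, not the actual Hadoop signatures: the point is just that a `volatile` field lets the allocator thread publish the priority and report threads read it lock-free, since it's written once and only read afterwards.

```java
// Hypothetical sketch, not real Hadoop source: names simplified.
class JobImpl {
  // volatile: the allocator thread writes, report threads read lock-free.
  // Safe without locking because the value is write-once, read-many.
  private volatile int priority;

  void setPriority(int priority) {
    this.priority = priority;
  }

  int getPriorityForReport() {
    return priority;
  }
}

class RMContainerAllocator {
  private final JobImpl job;

  RMContainerAllocator(JobImpl job) {
    this.job = job;
  }

  JobImpl getJob() {
    return job;
  }

  // Instead of constructing, queueing, and dispatching an event just to
  // update an internal reference, write straight through the job handle,
  // the same way RMCommunicator sets the queue name.
  void onPriorityFromRM(int newPriority) {
    getJob().setPriority(newPriority);
  }
}
```

This removes any chance of a state-machine transition forgetting to handle the event, because there is no event.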
As for the new Job.getPriority method, is it really necessary? The priority is already available in the JobReport, and it appears the only reason this was added was to have the JobImpl call it when it was filling out the JobReport. It can just reference the jobPriority field directly.
As I mentioned in MAPREDUCE-5870, I think the new TypeConverter functions are too generic. It's dangerous to assume that an int argument will always be intended to produce a JobPriority result. I think the name should be fromYarnPriority or something like that to make it clearer what's going on and to avoid conflicts with future needs to convert ints into things that aren't a JobPriority.
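Something along these lines is what I have in mind. The enum values and the threshold mapping below are made up for the sketch, not the real MapReduce JobPriority definition; the point is that the method name encodes the source type of the int:

```java
// Hypothetical sketch: the name says the int is a YARN priority,
// not an arbitrary int that happens to become a JobPriority.
enum JobPriority { LOW, NORMAL, HIGH }

class TypeConverter {
  // Mapping shown here is invented for illustration only.
  static JobPriority fromYarnPriority(int yarnPriority) {
    if (yarnPriority <= 0) {
      return JobPriority.LOW;
    }
    if (yarnPriority == 1) {
      return JobPriority.NORMAL;
    }
    return JobPriority.HIGH;
  }
}
```

A future converter that turns ints into some other type can then coexist without ambiguity, since each name states its source and target.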
Nit: testJobPriorityUpdation s/b testJobPriorityUpdate