A few small suggestions from someone who hasn't thought through much of this but has done similar async setups in other systems in another lifetime...
1) on where the (core task) queues should live...
I'm still debating between having even the CoreAdmin use zk (which means it'd only work in SolrCloud mode) or just have a local map of running tasks.
I think it would be wise to keep them in ZK – if for no other reason than because the primary use case you expect is for the async core calls to be made by the async overseer calls; and by keeping the async core queues in zk, the overseer can watch those queues directly for "completed" instead of needing to wake up, poll every replica, and go back to sleep (rough sketch of what i mean below).
However, a secondary concern (I think) is what should happen if/when a node gets rebooted – if the core admin task queues are only in RAM, then you could easily get into a situation where the overseer asks 10 replicas to do something, replicaA succeeds or fails quickly and then reboots, and when the overseer checks back once all replicas are done, it finds that replicaA can't say one way or the other whether it succeeded or failed – its queues are totally empty.
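To make the "watch instead of poll" idea concrete, here's a minimal sketch using the raw ZooKeeper client – a real patch would presumably go through Solr's own ZK client, and the /core-tasks/... paths, class name, and payload format below are just placeholders I made up, not anything from the patch:

{code:java}
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class AsyncCoreTaskQueues {

  // Hypothetical znode layout: one persistent parent per state, one child per task id.
  // (Assumes the parent znodes were created once at startup.)
  static final String RUNNING   = "/core-tasks/running";
  static final String COMPLETED = "/core-tasks/completed";
  static final String FAILED    = "/core-tasks/failed";

  // A replica records the outcome of a core admin task as a *persistent* znode,
  // so the answer survives a reboot of the node that did the work.
  static void recordResult(ZooKeeper zk, String taskId, boolean success, byte[] details)
      throws KeeperException, InterruptedException {
    String parent = success ? COMPLETED : FAILED;
    zk.create(parent + "/" + taskId, details,
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
  }

  // The overseer sets a child watch on the "completed" parent instead of
  // waking up and polling every replica over HTTP.
  static void watchCompleted(final ZooKeeper zk, final Runnable onChange)
      throws KeeperException, InterruptedException {
    Watcher watcher = new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged) {
          onChange.run();  // ZK watches are one-shot, so the callback should re-register
        }
      }
    };
    // getChildren both reads the current children and arms the watch.
    List<String> done = zk.getChildren(COMPLETED, watcher);
    // ... compare `done` against the task ids the overseer handed out ...
  }
}
{code}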
2) on generating the task/request IDs.
in my experience, when implementing an async callback API like this, it can be handy to require the client to specify the magical id that you use to keep track of things – you just ensure it's unique among the existing async jobs you know about (either in the running queue, or in the recently completed/failed queues). Sometimes single threaded (or centrally managed) client apps can generate a unique id more easily than your distributed system can, and/or they may already have a one-to-one mapping between some id they've already got and the task they're asking you to do, and re-using that id makes the client's life easier for debugging/audit-logs.
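in code, accepting a caller-supplied id mostly just means a uniqueness check against whatever you're already tracking – something along these lines (the class/method names here are made up for illustration, not from the patch):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class AsyncRequestTracker {

  enum State { RUNNING, COMPLETED, FAILED }

  // Every async id we currently know about: in-flight tasks plus the
  // recently completed/failed ones we still answer status queries for.
  private final Map<String, State> known = new ConcurrentHashMap<>();

  /**
   * Register a client-chosen request id. Returns false (or the handler could
   * answer with a 409-style error) if the id collides with one we already
   * know about, so the client can pick another.
   */
  boolean register(String requestId) {
    return known.putIfAbsent(requestId, State.RUNNING) == null;
  }

  void markDone(String requestId, boolean success) {
    known.replace(requestId, success ? State.COMPLETED : State.FAILED);
  }

  State status(String requestId) {
    return known.get(requestId);  // null => never seen (or already purged)
  }
}
{code}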
in the case of async collection commands that fan out into async core commands, it would also mean the overseer could reuse whatever id the client passed in for the collection command when talking to each of the replicas.
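which could look as simple as the overseer deriving each per-core task id from the caller's id (or just reusing it verbatim) when it fans out – the "async" param name and the send helper below are placeholders for illustration, not the actual core admin API:

{code:java}
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class OverseerFanOut {

  // Fan a collection-level async request out to each replica's core admin,
  // keeping the client's id in the per-core task id so everything is easy to
  // correlate in logs and status queries.
  static void fanOut(String clientRequestId, List<String> coreNames) {
    for (String core : coreNames) {
      String coreTaskId = clientRequestId + "/" + core;  // e.g. "reindex-42/shard1_replica2"
      Map<String, String> params = new LinkedHashMap<>();
      params.put("core", core);
      params.put("async", coreTaskId);  // hypothetical param name
      sendCoreAdminRequest(core, params);
    }
  }

  static void sendCoreAdminRequest(String core, Map<String, String> params) {
    // Stubbed out: in a real patch this would be a CoreAdmin HTTP request
    // to the node hosting `core`.
    System.out.println("core admin request for " + core + " -> " + params);
  }
}
{code}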