That seems reasonable. I think it's a given that we need to keep the original libhdfs for performance. Having a libhdfs-alike that goes over HTTP seems reasonable enough but not always preferable. To speak to each of the original points:
Compatibility - allows a single fuse client to work across server versions
We need to address compatibility for clients in general. Our Java client (and hence libhdfs) needs this just as much as fuse does.
Works with both WebHDFS and Hoop since they are protocol compatible
I guess this is an advantage, but since libhdfs already wraps arbitrary Hadoop filesystems, we have this capability today.
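To make the "libhdfs already wraps arbitrary filesystems" point concrete: libhdfs resolves paths through the Java FileSystem layer, so pointing the default filesystem at another supported scheme is just a configuration change, no C-side changes needed. A hypothetical core-site.xml (host, port, and the availability of a webhdfs:// client FileSystem in your release are assumptions):

```xml
<!-- core-site.xml sketch: namenode.example.com:50070 is a placeholder.
     Any FileSystem implementation on the classpath can back libhdfs. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>webhdfs://namenode.example.com:50070</value>
  </property>
</configuration>
```

With that in place, an unmodified libhdfs client connecting to "default" would talk WebHDFS instead of the native RPC protocol.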
Removes the overhead related to libhdfs (forking a jvm)
fuse is a long-running client, so the one-time cost of forking a JVM is amortized over the life of the mount and seems minimal. Recent improvements in libhdfs have also cut out most of the copying overhead.
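For context on why the fork overhead is a one-time cost: the embedded JVM is created when the client first connects, and every later call reuses it. A minimal sketch of that pattern with the libhdfs C API (the path is a placeholder, and this needs a live cluster plus a JVM and the Hadoop jars at runtime, so it is illustrative rather than runnable standalone):

```c
#include <stdio.h>
#include <fcntl.h>
#include "hdfs.h"  /* libhdfs header */

int main(void) {
    /* hdfsConnect spins up the embedded JVM via JNI -- this is the
     * "fork a jvm" cost, paid once for the life of the process.
     * "default" means: use fs.defaultFS from the loaded configuration. */
    hdfsFS fs = hdfsConnect("default", 0);
    if (!fs) {
        fprintf(stderr, "failed to connect\n");
        return 1;
    }

    /* Every subsequent operation reuses the same JVM; a fuse daemon
     * would sit in its event loop here, issuing calls like this one. */
    hdfsFile f = hdfsOpenFile(fs, "/tmp/example.txt", O_RDONLY, 0, 0, 0);
    if (f) {
        char buf[4096];
        tSize n = hdfsRead(fs, f, buf, sizeof(buf));
        if (n >= 0)
            fprintf(stderr, "read %d bytes\n", (int)n);
        hdfsCloseFile(fs, f);
    }

    hdfsDisconnect(fs);  /* JVM teardown, once, at unmount/exit */
    return 0;
}
```

Since the mount outlives the connect call by hours or days, the startup cost disappears into the noise.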
Makes it easier to support features like security
Perhaps, but libhdfs needs security anyway, so I don't think this buys us much.