Currently, the hadoop runjar command takes a single user jar as its argument. However, our jar depends on other (custom) java libraries, so we get around this with a CLASSPATH hack. We could drop the dependent jars into the hadoop lib directory, but I'd rather not mix shipped hadoop code with our user code.
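For concreteness, the CLASSPATH hack looks roughly like the sketch below. The lib/ directory, the dep-*.jar names, and myjob.jar are all hypothetical stand-ins for our actual job jar and its dependencies; the idea is just to prepend every dependency jar to CLASSPATH before invoking bin/hadoop, so the JVM that runjar starts can resolve the extra classes.

```shell
# Hypothetical setup: a lib/ directory holding two dependency jars
# (stand-ins for the custom libraries our job actually needs).
mkdir -p lib && touch lib/dep-a.jar lib/dep-b.jar

# The hack: join every dependency jar into one colon-separated
# classpath prefix, then export CLASSPATH before running bin/hadoop.
DEPS=$(printf '%s:' lib/*.jar)
export CLASSPATH="${DEPS}myjob.jar"
echo "$CLASSPATH"
# → lib/dep-a.jar:lib/dep-b.jar:myjob.jar

# Actual invocation would then be something like:
#   bin/hadoop jar myjob.jar org.example.MyJob input output
```

This works, but it is fragile: it depends on how the hadoop launch script happens to treat an inherited CLASSPATH, which is exactly why a supported mechanism would be preferable.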
We could package all the dependent code into the same jar, but that seems unnecessary. A better alternative might be to set a CLASSPATH in the jar manifest, but I haven't thought very much about how that would work.
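For what it's worth, the manifest alternative would mean giving the job jar a MANIFEST.MF along these lines (Main-Class and the jar names here are hypothetical; the Class-Path attribute itself is standard Java jar behavior):

```
Manifest-Version: 1.0
Main-Class: org.example.MyJob
Class-Path: lib/dep-a.jar lib/dep-b.jar
```

Class-Path entries are space-separated and resolved relative to the directory containing the jar itself, so the dependent jars would have to be shipped alongside the job jar in a known layout. It's also unclear whether runjar's classloader honors the manifest Class-Path at all, which is part of what I haven't thought through.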
So unless there is another method that is both better and simple, we need some way to pass jar dependencies to hadoop. Your Tomcat example is a bit different, since servlet containers have a well-defined mechanism for picking up dependent jars (the WEB-INF/lib directory).
It doesn't seem very worthwhile to rename CLASSPATH to HADOOP_CLASSPATH.