Patch attached. The project split means the libhdfs test needs access to both the hdfs and common repos. The test lives in, and is run out of, the hdfs repo; however, it starts an hdfs instance and therefore needs access to the common repo's bin directory. test-libhdfs.sh runs the hdfs instance out of the common repo (build/test/libhdfs and its sub-directories get created there) since hadoop-daemon.sh makes doing otherwise a pain.
This means the test now requires setting HADOOP_CORE_HOME. Once
HDFS-621 is checked in, it would be nice to convert this test to run a MiniDFS cluster and no longer depend on a common repo. Running one daemon per process via the traditional startup scripts doesn't seem to add much additional coverage. Reasonable?
To run the test:
export HADOOP_CORE_HOME=<common repo dir>
ant -Dcompile.c++=true -Dlibhdfs=true test-c++-libhdfs
You may need to run ant clean in your common directory to remove old hdfs jar files from the root directory, build/ivy, or the lib dirs.
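Putting the steps above together, a sketch of the full sequence, run from the hdfs repo checkout (the ../common path is an assumption; point HADOOP_CORE_HOME at your own common checkout):

```shell
# Point the test at the common repo so it can find hadoop-daemon.sh etc.
# The ../common default is an assumed layout, not required by the test.
export HADOOP_CORE_HOME="${HADOOP_CORE_HOME:-$(pwd)/../common}"

# Clear stale hdfs jars (root directory, build/ivy, lib dirs) out of the
# common tree so the test doesn't pick up an old jar.
(cd "$HADOOP_CORE_HOME" && ant clean)

# Build the native client and run the libhdfs test from the hdfs repo.
ant -Dcompile.c++=true -Dlibhdfs=true test-c++-libhdfs
```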