Resolution: Not A Problem
El Capitan, Single cluster Hadoop, Python 3, Spark 1.6, Anaconda
The current directory structure for my test script is as follows:
I have attached the map.py and test_map.py files to this issue.
When I run nosetests from the test directory, the test fails with a "No module named 'script'" error.
However, when I modify the map_add function in map.py so that the call to add inside reduceByKey is replaced with an inline lambda, like this:
result = df.map(lambda x: (x.key, x.value)).reduceByKey(lambda x,y: x+y)
The test passes.
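One plausible explanation (a sketch, not a confirmed diagnosis): standard pickle serializes a module-level function by reference, recording only the module name and attribute name, so whatever process unpickles the closure must be able to re-import that module. PySpark serializes inline lambdas by value instead, which would explain why the lambda version passes. The snippet below simulates this with a hypothetical stand-in for the reporter's `script` module; the module and function names are assumptions for illustration.

```python
import pickle
import sys
import types

# Hypothetical stand-in for the reporter's `script` module. pickle stores a
# module-level function by *reference* (module name + attribute name), not by
# value, so unpickling must be able to re-import that module.
script = types.ModuleType("script")
exec("def add(x, y):\n    return x + y", script.__dict__)
sys.modules["script"] = script

payload = pickle.dumps(script.add)   # records "script" and "add", not the code

del sys.modules["script"]            # simulate a process that cannot see script.py
try:
    pickle.loads(payload)            # tries to __import__("script") and fails
    error = None
except ImportError as exc:
    error = str(exc)                 # "No module named 'script'"
```

If this is indeed the mechanism, the failure would depend on whether the Spark workers (and the process running nosetests) have the directory containing script.py on their import path, which matches the observation that the test passes when run from the project directory.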
Also, when I run the original test_map.py from the project directory, the test passes.
I cannot figure out why the test fails to find the script module when it is run from within the test directory.
I have also attached the error log file. Any help would be much appreciated.