We recently installed Python on our Hadoop cluster along with a number of data science modules, including xgboost, scipy, scikit-learn, and pandas.
Using PySpark, the data scientists are able to test their scoring models in distributed mode on the Hadoop cluster. But with Python xgboost, the PySpark job does not get distributed and runs on only one instance.
We are trying to run Python xgboost in distributed mode via PySpark. Any direction on how to achieve this would be a great help.