Cassandra provides bulk data export/import via SSTables, which is very convenient for users. In some cases we need to move TB-scale data from HDFS into Cassandra, and we can use Spark to generate the SSTable files with distributed computation, using code like the above. Unfortunately, CQLSSTableWriter can only write to a local path, and sstableloader can only load from a local path. So if we use CQLSSTableWriter in a Spark or Hadoop MapReduce job, we have to write extra code to upload the SSTables scattered across the worker nodes to HDFS, and then download all of them from HDFS to the machine running sstableloader. Storing and transferring that much data between physical machines introduces many reliability problems.
It would be better if CQLSSTableWriter could write directly to HDFS (or if there were another writer that supports HDFS), and if sstableloader could load from an HDFS path.