When using Spark with Hadoop 1.x (the version I tested is 1.2.0) and spark.sql.sources.outputCommitterClass is configured, spark.sql.parquet.output.committer.class will be overridden by it.
For example, if spark.sql.parquet.output.committer.class is set to DirectParquetOutputCommitter while spark.sql.sources.outputCommitterClass is set to FileOutputCommitter, neither _metadata nor _common_metadata will be written, because FileOutputCommitter overrides DirectParquetOutputCommitter.
The reason is that InsertIntoHadoopFsRelation initializes the TaskAttemptContext before calling ParquetRelation2.prepareForWriteJob(), which is what sets up the Parquet output committer class. Meanwhile, in Hadoop 1.x, the TaskAttemptContext constructor clones the job configuration, so the context doesn't share the job configuration that is later passed to ParquetRelation2.prepareForWriteJob(), and the committer class set there is never picked up.
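The cloning behavior can be seen in isolation with the snippet below (a minimal sketch that only compiles against the Hadoop 1.x mapreduce API, where TaskAttemptContext is still a concrete class; the configuration key is used purely as an example):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.mapreduce.{TaskAttemptContext, TaskAttemptID}

val conf = new Configuration()
// In Hadoop 1.x the constructor copies `conf` into a new JobConf held by the context.
val context = new TaskAttemptContext(conf, new TaskAttemptID())

// Mutating the original configuration afterwards, as prepareForWriteJob() does with
// the output committer class, is invisible to the already-created context.
conf.set("spark.sql.parquet.output.committer.class", "DirectParquetOutputCommitter")

// Prints null, because the context only sees its own copy of the configuration.
println(context.getConfiguration.get("spark.sql.parquet.output.committer.class"))
```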
This issue can be fixed by simply swapping these two lines in InsertIntoHadoopFsRelation, so that the TaskAttemptContext is initialized only after ParquetRelation2.prepareForWriteJob() has been called.
Here is a Spark shell snippet for reproducing this issue:
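(A minimal sketch assuming a Spark 1.4.x shell, where sc and sqlContext are predefined, built against Hadoop 1.2.0; the fully qualified committer class names below are taken from that code base and may differ in other versions, and /tmp/foo is an arbitrary output path.)

```scala
// Both options are read from the Hadoop configuration of the write job.
sc.hadoopConfiguration.set(
  "spark.sql.parquet.output.committer.class",
  "org.apache.spark.sql.parquet.DirectParquetOutputCommitter")
sc.hadoopConfiguration.set(
  "spark.sql.sources.outputCommitterClass",
  "org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter")

// DirectParquetOutputCommitter would write _metadata and _common_metadata, but due to
// the bug the FileOutputCommitter configured above wins and no summary files appear.
sqlContext.range(0, 10).write.mode("overwrite").parquet("/tmp/foo")
```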
Then check /tmp/foo: the Parquet summary files _metadata and _common_metadata are missing.
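One way to verify this from the same shell (a sketch; the exact part file names will differ per run):

```scala
// List the output directory; with the bug present, only the part files (and typically
// a _SUCCESS marker) show up, with no _metadata or _common_metadata entries.
new java.io.File("/tmp/foo").listFiles().map(_.getName).sorted.foreach(println)
```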