When fixing HIVE-16395, we decided that each new Spark task should clone the JobConf object to prevent a ConcurrentModificationException from being thrown. However, this cloning comes at the cost of storing a duplicate JobConf object for each Spark task. Since these objects can take up a significant amount of memory, we should intern their contents so that Spark tasks running in the same JVM don't store duplicate copies of the same strings.
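The idea can be sketched with plain JDK types: `Properties` stands in for `JobConf`, and `internedCopy` is a hypothetical helper, not Hive's actual API. Each task still gets its own mutable copy (so concurrent modification stays safe), but all copies share the same underlying String instances via `String.intern()`:

```java
import java.util.Properties;

public class InternDemo {
    // Copy a configuration, interning every key and value so that
    // duplicate copies in the same JVM share the underlying Strings.
    static Properties internedCopy(Properties src) {
        Properties copy = new Properties();
        for (String name : src.stringPropertyNames()) {
            copy.setProperty(name.intern(), src.getProperty(name).intern());
        }
        return copy;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        // new String(...) forces a fresh, non-interned instance.
        conf.setProperty("mapreduce.job.queuename", new String("etl"));

        // Two "cloned" tasks each get their own Properties object...
        Properties task1 = internedCopy(conf);
        Properties task2 = internedCopy(conf);

        // ...but the interned values are the same String instance,
        // so only one copy of each value lives on the heap.
        System.out.println(task1 != task2);  // true: distinct containers
        System.out.println(task1.getProperty("mapreduce.job.queuename")
            == task2.getProperty("mapreduce.job.queuename"));  // true: shared String
    }
}
```

With thousands of tasks per executor JVM and configurations holding hundreds of entries, sharing the string payloads while keeping per-task containers can recover most of the memory that naive cloning duplicates.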