Spark currently supports GPU/FPGA resource scheduling, and on YARN specifically it knows how to map the generic "gpu" and "fpga" resources to the YARN resource types yarn.io/gpu and yarn.io/fpga. YARN also supports custom resource types, and Hadoop 3.3.1 made it easier for users to plug in their own. This means a user may create a custom resource type that represents a GPU or FPGA because they need additional logic that YARN's built-in versions don't provide. Ideally Spark users would still just use the generic "gpu" and "fpga" resource names in Spark, so we should add the ability to change Spark's internal mappings.
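One possible shape for this (the `spark.yarn.resource*DeviceName` config names below are a sketch of the proposal, not an existing API): per-resource configs that override which YARN resource type Spark translates the generic request into, while the user-facing request stays unchanged:

```shell
# Hypothetical configs (illustrative only): the user still asks for generic
# "gpu" resources; the new config redirects the YARN-side mapping from the
# default yarn.io/gpu to a custom YARN resource type.
spark-submit \
  --master yarn \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.yarn.resourceGpuDeviceName=company.com/gpu \
  --class com.example.MyApp myapp.jar
```

With no override set, Spark would keep the current defaults (yarn.io/gpu and yarn.io/fpga), so existing jobs are unaffected.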