SparkContext decides which scheduler to use based on the master URL. How could applications run against a custom scheduler? Such a custom scheduler would simply extend CoarseGrainedSchedulerBackend.
The custom scheduler would be created by a provided factory. Factories would be defined in the configuration as spark.scheduler.factory.<name>=<factory-class>, where name is the scheduler name. Once SparkContext determines that the master address does not refer to standalone, YARN, Mesos, local, or any other predefined scheduler, it would resolve the scheme from the provided master URL and look up the scheduler factory registered under a name equal to that scheme.
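The lookup described above could be sketched as follows. This is a minimal illustration, not Spark code: the SchedulerFactoryResolver object, the set of built-in schemes, and the factory class name are assumptions made up for this example; only the spark.scheduler.factory.<name> key format comes from the proposal.

```scala
import java.net.URI

// Hypothetical sketch of resolving a scheduler factory from the master URL.
object SchedulerFactoryResolver {
  // Master URL schemes handled by predefined schedulers (illustrative subset).
  val builtInSchemes = Set("local", "spark", "yarn", "mesos")

  // Extract the scheme from a master URL, e.g. "custom://192.168.1.1" -> "custom".
  def resolveScheme(masterUrl: String): String =
    new URI(masterUrl).getScheme

  // Look up the factory class registered under
  // spark.scheduler.factory.<scheme> in the configuration,
  // returning None for predefined schedulers or unregistered schemes.
  def factoryClassFor(masterUrl: String, conf: Map[String, String]): Option[String] = {
    val scheme = resolveScheme(masterUrl)
    if (builtInSchemes.contains(scheme)) None
    else conf.get(s"spark.scheduler.factory.$scheme")
  }
}
```

With spark.scheduler.factory.custom set to some factory class, factoryClassFor("custom://192.168.1.1", conf) would return that class name, while a predefined master URL such as spark://host:7077 would bypass the factory lookup.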
For example, for a factory registered under the name custom, the master address would be custom://192.168.1.1.
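Put together, submitting an application against such a scheduler might look like the following. This is a sketch of the proposed usage, assuming a hypothetical factory class com.example.CustomSchedulerFactory; spark-submit and its --master/--conf options are real, but the custom:// scheme and the spark.scheduler.factory.* property exist only in this proposal.

```
spark-submit \
  --master custom://192.168.1.1 \
  --conf spark.scheduler.factory.custom=com.example.CustomSchedulerFactory \
  myapp.jar
```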