Description
```scala
val df = spark.range(10000000000L)
  .filter('id > 1000)
  .orderBy('id.desc)
  .cache()
```
This triggers a job, even though caching should be lazy. The problem is that when creating an `InMemoryRelation` we build the RDD eagerly, which calls `SparkPlan.execute` and may trigger jobs, such as the sampling job for the range partitioner or a broadcast job.
We should instead create the RDD during the physical planning phase.
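To illustrate the intended behavior, here is a minimal plain-Scala sketch (not Spark's actual API; `InMemoryRelationSketch` and the job counter are hypothetical) of deferring the expensive RDD construction so that merely creating the relation triggers no work, and the build only happens on first execution:

```scala
// Sketch: an eager constructor would run the build (and any jobs it
// triggers) immediately; a lazy val defers it to the first access,
// i.e. to execution time rather than plan-creation time.
class InMemoryRelationSketch(buildRDD: () => Unit) {
  lazy val cachedRDD: Unit = buildRDD()
}

object Demo extends App {
  var jobsTriggered = 0
  val rel = new InMemoryRelationSketch(() => jobsTriggered += 1)
  assert(jobsTriggered == 0) // creating the relation triggers no job
  rel.cachedRDD              // first execution builds the RDD
  assert(jobsTriggered == 1) // the job runs exactly once
}
```

The same idea applies in the planner: record the child plan when `cache()` is called, and only call `SparkPlan.execute` once the cached data is actually needed.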