When Spark converts a DataFrame to a LogicalRDD for some reason (e.g. the foreachBatch sink), Spark picks up only the RDD from the origin DataFrame and discards the logical/physical plan.
The origin logical plan can be useful for several use cases, including:
1. connecting the overall logical plan into a single plan
2. inheriting plan statistics from the origin logical plan
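A minimal sketch of the conversion described above. `LogicalRDD`, `QueryExecution.toRdd`, and `Dataset.ofRows` are real Spark internals (`private[sql]`, so callable only from Spark's own packages), but their exact signatures vary across Spark versions; the helper name `toBatchDataFrame` is hypothetical and the code is illustrative, not the actual sink implementation.

```scala
import org.apache.spark.sql.{DataFrame, Dataset, SparkSession}
import org.apache.spark.sql.execution.LogicalRDD

// Hypothetical helper showing where the origin plan is dropped.
def toBatchDataFrame(spark: SparkSession, df: DataFrame): DataFrame = {
  // Only the materialized RDD survives this step; the analyzed/optimized
  // plan of `df` (and hence its statistics) is not carried over.
  val rdd = df.queryExecution.toRdd

  // The new LogicalRDD node is built from the RDD and output attributes
  // alone, so downstream planning sees it as an opaque leaf with no link
  // back to the origin logical plan.
  val logicalRdd = LogicalRDD(df.queryExecution.analyzed.output, rdd)(spark)
  Dataset.ofRows(spark, logicalRdd)
}
```

Because the resulting plan is an opaque leaf, the optimizer can neither stitch it into the overall logical plan nor reuse the origin plan's statistics, which is exactly what the two use cases above would need.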