Hi niraj rai,
thanks for picking up this issue. Flink's DataSet API supports building DataSets from regular Java collections via the ExecutionEnvironment, e.g. ExecutionEnvironment.fromCollection(myCollection). Under the hood, the Java collection is shipped with the program to the executing Flink instance (cluster, local, YARN, ...), where the collection's data is processed.
This feature will use Flink's collection DataSets to process, on a remote cluster, a file that is local to the user's client. Instead of copying the small file into a file system or data store that is accessible from the cluster, the client will be able to convert the file into a Java collection and use that collection as a DataSet in a Flink program. I would propose reading the local file with Flink's regular InputFormats.
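To illustrate the idea, here is a minimal sketch (not the final implementation) of the client-side step: a local file is read into a Java collection, which could then be passed to Flink. The class and method names are hypothetical, and the Flink call is shown only as a comment since it requires Flink on the classpath; a real implementation would use Flink's InputFormats instead of plain line reading.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class LocalFileAsCollection {

    // Read a client-local file into a Java collection.
    // In a Flink program, the resulting list could then be used as a DataSet:
    //   ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    //   DataSet<String> data = env.fromCollection(lines);
    // (requires Flink on the classpath; a real implementation would read the
    //  file via Flink's regular InputFormats rather than Files.readAllLines)
    static List<String> readLocalFile(Path path) throws Exception {
        return Files.readAllLines(path);
    }

    public static void main(String[] args) throws Exception {
        // Demo with a temporary file standing in for the user's local file.
        Path tmp = Files.createTempFile("local-input", ".txt");
        Files.write(tmp, List.of("line1", "line2", "line3"));
        List<String> lines = readLocalFile(tmp);
        System.out.println(lines.size()); // number of lines read
        Files.deleteIfExists(tmp);
    }
}
```

The point is that no distributed file system is involved: the data travels to the cluster inside the submitted program, which is only practical for small files.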
Please let me know if you have further questions,