FileStreamSource fetches the list of available files for every batch, which is a heavy operation. (For example, it took around 5 seconds to list leaf files for 95 paths containing 674,811 files, and that was on a local filesystem, not even an HDFS path.)
If "maxFilesPerTrigger" is not set, Spark consumes all of the fetched files in a single batch, so it naturally has to fetch again for each micro-batch once the batch completes.
If "latestFirst" is true (regardless of "maxFilesPerTrigger"), the set of files to process must be recomputed per batch, so Spark also has to fetch for each micro-batch.
In the remaining case (in short, "maxFilesPerTrigger" is set and "latestFirst" is false), the files to process are consumed "continuously": we can cache the fetched list of files and serve batches from it until the list has been exhausted.
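A minimal sketch of the caching idea, not Spark's actual implementation: the names (`CachedFileList`, `fetchAllFiles`) are hypothetical, and the real FileStreamSource tracks more state (offsets, seen files). The point is that the heavy listing call runs only when the cached list is exhausted, not on every batch.

```scala
import scala.collection.mutable

// Caches one fetch of the file listing and serves up to
// maxFilesPerTrigger files per micro-batch from the cache.
class CachedFileList(fetchAllFiles: () => Seq[String], maxFilesPerTrigger: Int) {
  // Files fetched but not yet handed to any batch.
  private val unread = mutable.Queue.empty[String]

  // Files for the next micro-batch; refetches (the heavy listing
  // operation) only when the cache is empty.
  def nextBatch(): Seq[String] = {
    if (unread.isEmpty) {
      unread ++= fetchAllFiles()
    }
    val batch = unread.take(maxFilesPerTrigger).toSeq
    unread.dropInPlace(maxFilesPerTrigger)
    batch
  }
}
```

One tradeoff of this approach: files that arrive while the cached list is being consumed are not visible until the cache is exhausted and the next fetch happens, which is acceptable only because `latestFirst` is false in this case.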