Every batch, two Spark jobs are created, but only the second one is associated with the streaming output operation and shown on the batch page.
The first job comes from the rdd.count() action invoked inside JobGenerator.generateJobs. At that point the batch time and output op id are not yet available as SparkContext local properties, because JobScheduler only sets them later.
This PR delegates the dstream.getOrCompute call to JobScheduler, so that all RDD actions run on the JobScheduler thread with the correct SparkContext local properties already set.
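As a rough sketch of the intended behavior (simplified; the handler name and exact property keys are assumptions based on Spark's streaming scheduler and may differ across versions), the point is that the job body, which now includes the deferred getOrCompute, only executes after the scheduler thread has tagged itself:

```scala
// Simplified sketch of a JobScheduler job handler (hypothetical helper;
// the real code lives in org.apache.spark.streaming.scheduler).
// Local properties are set on the scheduler thread BEFORE the job body
// runs, so when dstream.getOrCompute is deferred into job.run(), every
// RDD action it triggers (including the first count()) inherits them.
def handleJob(ssc: StreamingContext, job: Job): Unit = {
  val sc = ssc.sparkContext
  sc.setLocalProperty("spark.streaming.internal.batchTime",
                      job.time.milliseconds.toString)
  sc.setLocalProperty("spark.streaming.internal.outputOpId",
                      job.outputOpId.toString)
  // job.run() now lazily materializes the RDD via getOrCompute and then
  // runs the output action, so both Spark jobs carry the batch metadata.
  job.run()
}
```

With this change, both jobs of a batch appear under the correct batch and output operation in the UI, instead of only the second one.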