Details
- Type: Bug
- Status: Closed
- Priority: Critical
- Resolution: Fixed
- Labels: None
Description
- The Hive streaming source HiveMapredSplitReader reuses rows incorrectly: if the reused row instance is replaced, the partition fields are lost (a row-reuse sketch follows this list).
- When converting a Flink file split to a Hadoop file split, the length should not be -1.
- DirectoryMonitorDiscovery should convert the DFS modificationTime to UTC milliseconds.
- HiveTableSource.createStreamSourceForNonPartitionTable should use local-zone milliseconds instead of UTC milliseconds, because ContinuousFileMonitoringFunction uses local-zone milliseconds (a time-zone sketch follows this list).
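The row-reuse sketch below is a minimal, self-contained illustration of the pitfall in the first bullet. It is not Flink's HiveMapredSplitReader, and every class and method name in it is hypothetical: the point is only that partition fields must be written into whatever row instance is used for each record, otherwise swapping the reuse instance silently drops them.
{code:java}
import java.util.Arrays;

/** Hypothetical reader, not Flink code: shows why partition fields must be set per record. */
public class RowReuseSketch {

    /** Fills the leading fields from the file and appends fixed partition values. */
    static final class SplitReader {
        private final Object[] partitionValues;

        SplitReader(Object[] partitionValues) {
            this.partitionValues = partitionValues;
        }

        /** Safe pattern: write the partition fields on every call, not only once at construction. */
        Object[] nextRecord(Object[] reuse, Object[] fileFields) {
            System.arraycopy(fileFields, 0, reuse, 0, fileFields.length);
            System.arraycopy(partitionValues, 0, reuse, fileFields.length, partitionValues.length);
            return reuse;
        }
    }

    public static void main(String[] args) {
        SplitReader reader = new SplitReader(new Object[] {"2020-06-03"});

        // First call with one reuse instance...
        System.out.println(Arrays.toString(reader.nextRecord(new Object[3], new Object[] {1, "a"})));
        // ...second call with a different instance: the partition field is still populated,
        // because it is re-written on every call rather than cached in the old instance.
        System.out.println(Arrays.toString(reader.nextRecord(new Object[3], new Object[] {2, "b"})));
    }
}
{code}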
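The time-zone sketch below only illustrates why the last two bullets matter; it is not the Flink code path and its helper names are hypothetical. The same wall-clock timestamp maps to different epoch milliseconds depending on whether it is interpreted as UTC or in the local zone, so values compared against ContinuousFileMonitoringFunction's local-zone millis must use the same interpretation.
{code:java}
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;

/** Hypothetical helpers, not Flink API: the two interpretations of one wall-clock time. */
public class TimeMillisSketch {

    /** Epoch millis of a wall-clock time interpreted in the system's local zone. */
    static long millisInLocalZone(LocalDateTime wallClock) {
        return wallClock.atZone(ZoneId.systemDefault()).toInstant().toEpochMilli();
    }

    /** Epoch millis of the same wall-clock time interpreted as UTC. */
    static long millisInUtc(LocalDateTime wallClock) {
        return wallClock.toInstant(ZoneOffset.UTC).toEpochMilli();
    }

    public static void main(String[] args) {
        LocalDateTime partitionTime = LocalDateTime.of(2020, 6, 3, 12, 0, 0);
        System.out.println("local-zone millis: " + millisInLocalZone(partitionTime));
        System.out.println("UTC millis:        " + millisInUtc(partitionTime));
        // Mixing the two interpretations shifts every timestamp by the zone offset,
        // which is why the monitor and the source must agree on one of them.
    }
}
{code}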
Attachments
Issue Links
- relates to: FLINK-18077 E2E tests manually for Hive streaming source (Closed)