FLINK-18232

Hive streaming source does not work when new partitions with multiple records are inserted and a start offset is set

    Description

      • Hive streaming source: HiveMapredSplitReader reuses the row in a wrong way; if the reused row instance is replaced, the partition fields are lost.
      • When converting a Flink file split to a Hadoop file split, the length should not be -1.
      • DirectoryMonitorDiscovery should convert the DFS modificationTime to UTC time millis (see the sketch after this list).
      • HiveTableSource.createStreamSourceForNonPartitionTable should use local zone millis instead of UTC millis, because ContinuousFileMonitoringFunction uses local zone millis.
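
      A minimal sketch of the conversion the last two points describe, assuming the usual Flink convention that "UTC millis" means the local wall-clock time re-interpreted at UTC. This is not the actual Flink code; the class and method names (TimeZoneConversionSketch, toUtcMillis, toLocalZoneMillis) are made up for illustration.

{code:java}
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;

/**
 * Illustrative sketch only -- not the actual Flink code. It shows how an
 * epoch-millis timestamp can be shifted between a "UTC millis" representation
 * (local wall clock re-read as UTC) and plain local-zone epoch millis.
 */
public class TimeZoneConversionSketch {

    /** Re-interpret an epoch-millis modification time (local zone) as UTC-based millis. */
    static long toUtcMillis(long localZoneMillis) {
        LocalDateTime wallClock =
                LocalDateTime.ofInstant(Instant.ofEpochMilli(localZoneMillis), ZoneId.systemDefault());
        return wallClock.toInstant(ZoneOffset.UTC).toEpochMilli();
    }

    /** Inverse direction: UTC-based millis back to local-zone epoch millis. */
    static long toLocalZoneMillis(long utcMillis) {
        LocalDateTime wallClock =
                LocalDateTime.ofInstant(Instant.ofEpochMilli(utcMillis), ZoneOffset.UTC);
        return wallClock.atZone(ZoneId.systemDefault()).toInstant().toEpochMilli();
    }

    public static void main(String[] args) {
        // e.g. the value returned by Hadoop's FileStatus#getModificationTime()
        long modificationTime = System.currentTimeMillis();
        long utcMillis = toUtcMillis(modificationTime);
        System.out.println("modification time (local zone): " + modificationTime);
        System.out.println("as UTC millis                 : " + utcMillis);
        System.out.println("round-tripped back            : " + toLocalZoneMillis(utcMillis));
    }
}
{code}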

    People

      Assignee: lzljs3620320 Jingsong Lee
      Reporter: lzljs3620320 Jingsong Lee
      Votes: 0
      Watchers: 1
