Details
- Type: New Feature
- Status: Resolved
- Priority: Major
- Resolution: Fixed
Description
Sometimes users need to write Flink streaming data to CarbonData, which requires high concurrency and high throughput.
The write process is:
- Write the Flink streaming data to the local file system of the Flink task node, using the Flink StreamingFileSink and the CarbonData SDK;
- Copy the local carbon data files to the carbon data store system, such as HDFS or S3;
- Generate a segment file for the load and write it to ${tablePath}/load_details;
- Run the "alter table ${tableName} collect segments" command on the server to compact the segment files in ${tablePath}/load_details, move the compacted segment file to ${tablePath}/Metadata/Segments/, and finally update the table status file.
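The final "collect segments" step can be sketched as a local-filesystem simulation. This is only an illustration of the compaction idea, not the actual CarbonData implementation: it assumes each file under load_details is a small text file listing the data files of one load, and the names `CollectSegmentsSketch`, `segment_0.segment`, and the single-line `tablestatus` format are hypothetical.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

public class CollectSegmentsSketch {

    // Compact all per-load segment files under ${tablePath}/load_details into
    // one segment file under ${tablePath}/Metadata/Segments, then update the
    // table status file. Layout and file names are illustrative only.
    static Path collectSegments(Path tablePath) throws IOException {
        Path loadDetails = tablePath.resolve("load_details");
        Path segmentsDir = tablePath.resolve("Metadata").resolve("Segments");
        Files.createDirectories(segmentsDir);

        // Gather every per-load segment file written by the Flink sink side.
        List<Path> loads = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(loadDetails)) {
            for (Path p : stream) {
                loads.add(p);
            }
        }

        // Merge their entries, then delete the small files they came from.
        List<String> entries = new ArrayList<>();
        for (Path p : loads) {
            entries.addAll(Files.readAllLines(p));
            Files.delete(p);
        }
        Collections.sort(entries);

        // One compacted segment file replaces the many small ones.
        Path compacted = segmentsDir.resolve("segment_0.segment");
        Files.write(compacted, entries);

        // Finally, record the new segment in the table status file.
        Path tableStatus = tablePath.resolve("Metadata").resolve("tablestatus");
        Files.write(tableStatus, List.of("segment_0:SUCCESS"));
        return compacted;
    }

    public static void main(String[] args) throws IOException {
        // Simulate two loads produced by the Flink sink, then collect them.
        Path tablePath = Files.createTempDirectory("carbon_table");
        Path loadDetails = Files.createDirectories(tablePath.resolve("load_details"));
        Files.write(loadDetails.resolve("load_1"), List.of("datafile_a.carbondata"));
        Files.write(loadDetails.resolve("load_2"), List.of("datafile_b.carbondata"));

        Path compacted = collectSegments(tablePath);
        System.out.println(Files.readAllLines(compacted));
    }
}
```

The point of the compaction is that concurrent Flink writers can each drop a small segment file into load_details without locking, while the server-side command later folds them into a single segment file and a single table status update.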