Details
- Type: New Feature
- Status: Resolved
- Priority: Major
- Resolution: Fixed
Description
Traditionally, adding new data to Hive requires gathering a large amount of data onto HDFS and then periodically adding a new partition; this is essentially a "batch insert", and inserting new data into an existing partition is not permitted. The Hive Streaming API instead allows data to be pumped continuously into Hive: incoming data can be committed in small batches of records into an existing Hive partition or table, and once a batch is committed it is immediately visible to all subsequently initiated Hive queries.
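For illustration, here is a minimal sketch of what a client of the Hive Streaming API (the hive-hcatalog-streaming module) looks like. The metastore URI, database, table, partition value, and column names are placeholder assumptions; the target table must be a transactional (ORC, bucketed) table:

```java
import java.util.Arrays;
import org.apache.hive.hcatalog.streaming.DelimitedInputWriter;
import org.apache.hive.hcatalog.streaming.HiveEndPoint;
import org.apache.hive.hcatalog.streaming.StreamingConnection;
import org.apache.hive.hcatalog.streaming.TransactionBatch;

public class HiveStreamingExample {
    public static void main(String[] args) throws Exception {
        // Endpoint for an existing transactional Hive table;
        // URI, database, table, and partition value are placeholders.
        HiveEndPoint endPoint = new HiveEndPoint(
                "thrift://metastore-host:9083", "default", "events",
                Arrays.asList("2016-07-01"));

        // true => create the partition if it does not already exist
        StreamingConnection conn = endPoint.newConnection(true);
        DelimitedInputWriter writer = new DelimitedInputWriter(
                new String[] {"id", "msg"}, ",", endPoint);

        // Fetch a batch of transactions and commit a small set of records;
        // once committed, the rows are visible to subsequent Hive queries.
        TransactionBatch txnBatch = conn.fetchTransactionBatch(10, writer);
        try {
            txnBatch.beginNextTransaction();
            txnBatch.write("1,hello".getBytes());
            txnBatch.write("2,world".getBytes());
            txnBatch.commit();
        } finally {
            txnBatch.close();
            conn.close();
        }
    }
}
```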
This case adds a PutHiveStreaming processor to NiFi that leverages the Hive Streaming API, allowing data to be streamed continuously into a Hive partition or table.
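As a rough sketch only (not the actual implementation), such a processor would follow NiFi's standard AbstractProcessor shape, reading records from incoming FlowFiles and writing them through a streaming connection; the relationships and routing logic below are illustrative assumptions:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;

public class PutHiveStreaming extends AbstractProcessor {
    static final Relationship REL_SUCCESS = new Relationship.Builder()
            .name("success").description("FlowFiles committed to Hive").build();
    static final Relationship REL_FAILURE = new Relationship.Builder()
            .name("failure").description("FlowFiles that could not be written").build();

    @Override
    public Set<Relationship> getRelationships() {
        return new HashSet<>(Arrays.asList(REL_SUCCESS, REL_FAILURE));
    }

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
            return;
        }
        // Sketch: read records from the FlowFile content, write them via
        // TransactionBatch.write() and commit(); route to success on commit,
        // and to failure if the streaming write throws.
        session.transfer(flowFile, REL_SUCCESS);
    }
}
```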
Issue Links
- supercedes NIFI-2448: Hive Processors depend on too recent a Hive version (Resolved)