I'd like to extend the hive-hcatalog-streaming API so that it also supports writing record updates and deletes, in addition to the inserts it already supports.
We have many Hadoop processes outside of Hive that merge changed facts into existing datasets. Traditionally we achieve this by reading in a ground-truth dataset and a modified dataset, grouping by a key, sorting by a sequence, and then applying a function to determine inserted, updated, and deleted rows (a sketch of this classification step follows the list below). However, in our current scheme we must rewrite every partition that may potentially contain changes, even though in practice the number of mutated records is very small compared with the total number of records in a partition. This approach results in a number of operational issues:
- An excessive amount of write activity is required for small data changes.
- Downstream applications cannot robustly read these datasets while they are being updated.
- Due to the scale of the updates (hundreds of partitions), the scope for contention is high.
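
To illustrate, here is a minimal sketch of the classification step mentioned above. It assumes, purely for illustration, that each record carries a business key, a change sequence number, and a deletion marker; none of these names come from an existing API.

```java
import java.util.Comparator;
import java.util.List;

/** Mutation types produced by the merge; illustrative only. */
enum MutationType { INSERT, UPDATE, DELETE }

/** Sketch of the per-key classification step. The Record fields are assumptions,
 *  not an existing schema. */
class MergeSketch {

  static class Record {
    String key;              // business key used for grouping
    long sequence;           // ordering within a key, e.g. a change timestamp
    boolean fromGroundTruth; // true if the row came from the existing dataset
    boolean tombstone;       // true if the modified feed marks the row as deleted
  }

  /** Given every version of one key (ground truth plus modifications),
   *  decide how the latest version should be applied. */
  static MutationType classify(List<Record> versionsForKey) {
    versionsForKey.sort(Comparator.comparingLong(r -> r.sequence));
    Record latest = versionsForKey.get(versionsForKey.size() - 1);
    boolean existedBefore = versionsForKey.stream().anyMatch(r -> r.fromGroundTruth);

    if (latest.tombstone) {
      return MutationType.DELETE;              // most recent version marks the row as removed
    }
    return existedBefore ? MutationType.UPDATE // key already existed in ground truth
                         : MutationType.INSERT; // brand new key
  }
}
```

Today the output of this step still has to be materialised by rewriting whole partitions, which is what causes the issues listed above.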
I believe we can address this problem by instead writing only the changed records to a Hive transactional table. This should drastically reduce the amount of data that we need to write and also provide a means for managing concurrent access to the data. Our existing merge processes can read and retain each record's ROW_ID/RecordIdentifier and pass this through to an updated form of the hive-hcatalog-streaming API, which would then have the data required to apply inserts, updates, and deletes in a transactional manner.
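
To make the shape of that extension concrete, here is a purely hypothetical sketch. The interface name and the update/delete methods are assumptions for illustration; the current API only exposes insert-style write calls on a TransactionBatch. Only RecordIdentifier (org.apache.hadoop.hive.ql.io) is an existing class.

```java
import org.apache.hadoop.hive.ql.io.RecordIdentifier;

/**
 * Hypothetical sketch only: the update/delete operations below do NOT exist in
 * the current org.apache.hive.hcatalog.streaming API. This interface merely
 * illustrates the shape the proposed extension could take.
 */
interface MutatingTransactionBatch {

  /** Start the next transaction in the batch (mirrors the existing API). */
  void beginNextTransaction();

  /** Insert a new record, as the existing API already allows. */
  void write(byte[] record);

  /** Proposed: rewrite the record identified by the ROW_ID captured when it was read. */
  void update(RecordIdentifier rowId, byte[] updatedRecord);

  /** Proposed: remove the record identified by its ROW_ID. */
  void delete(RecordIdentifier rowId);

  /** Commit the current transaction (mirrors the existing API). */
  void commit();
}
```

A merge job would then group its mutations by destination partition and bucket, open a batch per destination, and replay the RecordIdentifiers it retained during the read phase.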
The benefits of this approach:
- Enables the creation of large-scale dataset merge processes.
- Opens up Hive transactional functionality in an accessible manner to processes that operate outside of Hive.