Migration currently requires access to the DataStore as it is configured as part of repository.xml. However, a complete migration does not access the actual binary content in the DataStore; the migration logic only makes use of:
- DataIdentifier - the id of the file
- Length - as it gets encoded as part of the blobId
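Since the length is encoded in the blobId itself, it can be recovered without touching the DataStore. A minimal sketch, assuming the common Oak convention where the blobId has the form `<contentIdentifier>#<length>` (the separator and format are an assumption here, not confirmed by this text):

```java
// Sketch: recovering the length from a blobId without DataStore access,
// assuming the blobId is encoded as "<contentIdentifier>#<length>".
public class BlobIdLength {

    static long lengthFromBlobId(String blobId) {
        int idx = blobId.lastIndexOf('#');
        if (idx < 0) {
            throw new IllegalArgumentException("No length encoded in blobId: " + blobId);
        }
        // Everything after the last '#' is the length in bytes
        return Long.parseLong(blobId.substring(idx + 1));
    }

    public static void main(String[] args) {
        System.out.println(lengthFromBlobId("f1d2a9c3#1024")); // prints 1024
    }
}
```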
It would be faster and beneficial to allow migration without actual access to the DataStore. This would serve two benefits:
- It allows one to test a migration on a local setup by just copying the TarPM files. For example, one could get going with repository startup by zipping only the following files, if direct access to the DataStore can somehow be avoided
- It provides faster (repeatable) migration, as access to the DataStore, which in cases like S3 might be slow, can be avoided, provided we solve how to obtain the length
Have a DataStore implementation which can be provided a mapping file containing entries of blobId and length. This file would be used to answer queries regarding the length and existence of a blob, thus avoiding actual access to the DataStore.
Going further, this DataStore can be configured with a delegate to be used as a fallback in case the required details are not present in the precomputed data set (for example, due to a change in content after that data was computed).
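The two ideas above can be sketched together. The class and file format below are hypothetical illustrations (not the actual Jackrabbit DataStore interface): a precomputed `blobId|length` file answers length and existence queries, and an optional delegate lookup is consulted only when an entry is missing.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;
import java.util.OptionalLong;
import java.util.function.Function;

// Sketch of the proposed mapping-backed DataStore. Each line of the
// mapping file is "blobId|length"; queries are served from the in-memory
// map, with the delegate as a fallback for missing entries.
public class MappingDataStore {

    private final Map<String, Long> lengths = new HashMap<>();
    private final Function<String, OptionalLong> delegate; // may be null

    public MappingDataStore(Path mappingFile,
                            Function<String, OptionalLong> delegate) throws IOException {
        this.delegate = delegate;
        try (BufferedReader r = Files.newBufferedReader(mappingFile)) {
            String line;
            while ((line = r.readLine()) != null) {
                int sep = line.lastIndexOf('|');
                if (sep > 0) {
                    lengths.put(line.substring(0, sep),
                                Long.parseLong(line.substring(sep + 1).trim()));
                }
            }
        }
    }

    /** True if the blob is in the precomputed set or known to the delegate. */
    public boolean exists(String blobId) {
        return lengths.containsKey(blobId)
                || (delegate != null && delegate.apply(blobId).isPresent());
    }

    /** Length from the precomputed set, falling back to the delegate. */
    public OptionalLong length(String blobId) {
        Long len = lengths.get(blobId);
        if (len != null) {
            return OptionalLong.of(len);
        }
        return delegate == null ? OptionalLong.empty() : delegate.apply(blobId);
    }
}
```

With this shape, a migration run on a laptop only needs the mapping file, while a production run can wire the real DataStore in as the delegate to cover content added after the mapping was computed.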